CIS Computing & Information Services

Frequently Asked Questions


How do I request help? Most inquiries can be directed to CCV's support address, which will create a support ticket with one of our staff.

What are the fees for CCV services? All CCV services are billed quarterly, and rates can be found here (requires Brown authentication to view). Questions about rates should be directed to our support address.

How do I acknowledge CCV in a research publication? We greatly appreciate acknowledgements in research publications that benefited from the use of CCV services or resources. Please see the guidelines for doing so.


What is Oscar? Oscar is our primary research computing cluster with several hundred multi-core nodes sharing a high-performance interconnect and file system. Applications can be run interactively or scheduled as batch jobs. For more information, see our technologies section.

How do I request an account on Oscar? To request an account, please fill out a New User Account Form. All accounts are subject to our General Terms and Conditions.

How do I run a job on Oscar? Sample batch scripts are available in your home directory at ~/batch_scripts and can be run with the sbatch <jobscript> command. For more information, visit our manual page on Batch Jobs.
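As a sketch of what such a script contains, a minimal batch job might look like the following (the job name, time limit, task count, and program name are placeholders, not CCV defaults — see the sample scripts in ~/batch_scripts for the supported form):

```shell
#!/bin/bash
# Minimal SLURM batch script (illustrative; all values are examples only)
#SBATCH -J my_job          # job name shown in the queue
#SBATCH -t 1:00:00         # wall-clock time limit (1 hour)
#SBATCH -n 1               # number of tasks

# Commands below run on the compute node once the job starts
./my_program input.txt
```

Submit it with sbatch my_job.sh; SLURM prints the assigned job ID on success.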

Can I use Oscar for teaching? See our page on Academic Classes.

How do I find out when the system is down? We post updates to our user mailing list, which you are automatically subscribed to when you set up an account with CCV. If you need to be added to the mailing list, please submit a support ticket.

How do I run a job array on Oscar? A job array is a special type of job submission that allows you to submit many related batch jobs with a single command. This makes it easy to do parameter sweeps or other schemes where the submitted jobs are all the same except for a single parameter such as a filename or input variable. Job arrays require special syntax in your job script. Sample batch scripts for job arrays are available in your home directory at ~/batch_scripts and can be run with the sbatch <jobscript> command. For more information, visit our manual page on Running Jobs.
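The special syntax mentioned above is the array directive. A hedged sketch (the array range and input-file naming are example choices, not requirements):

```shell
#!/bin/bash
# Illustrative SLURM job-array script; range and filenames are examples only
#SBATCH -J sweep
#SBATCH -t 0:30:00
#SBATCH --array=1-10       # submit 10 related tasks, indexed 1 through 10

# Each task sees its own index in SLURM_ARRAY_TASK_ID,
# which can select a distinct input file or parameter value
./my_program input-${SLURM_ARRAY_TASK_ID}.txt
```

A single sbatch of this script queues all ten tasks at once, each differing only in its array index.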

How do I run an MPI job on Oscar? MPI (Message Passing Interface) is a programming interface for parallel programs. Programs written with MPI can run on, and communicate across, multiple nodes. You can run MPI-capable programs by calling mpirun -n # <program> in your batch script, where the argument to -n is the number of processors you have requested for your job. For more detailed info, visit our manual page on MPI programs.
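Putting the pieces together, an MPI batch script might look like this sketch (the task count, time limit, and program name are illustrative):

```shell
#!/bin/bash
# Illustrative MPI batch script; node/task counts are examples only
#SBATCH -J mpi_job
#SBATCH -t 2:00:00
#SBATCH -n 16              # request 16 MPI tasks

# Launch with as many ranks as tasks requested above
mpirun -n 16 ./my_mpi_program
```

Keeping the mpirun -n count equal to the #SBATCH -n request avoids oversubscribing or idling the allocated cores.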

I have some MPI-enabled source code. How can I compile it on Oscar? The MPI implementation mvapich2 is installed and fully supported on the cluster. By default it supports compiling with gcc. If you want to use a different compiler, use a module such as mvapich2-intel, mvapich2-pgi, etc. To compile your MPI-enabled application, load the mvapich2 module and change your compiler to the appropriate wrapper beginning with "mpi" (this could be mpicc, mpic++, mpif77, etc., depending on your language). Call the wrapper wherever you would normally call your compiler, for example: mpicc program.c. An alternate MPI implementation, Open MPI, is also installed and is used similarly, but it is not fully supported for hardware reasons.
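As a sketch of a typical compile session (the source and output file names are placeholders):

```shell
# Illustrative MPI compile steps; file names are examples only
module load mvapich2              # make the MPI compiler wrappers available

mpicc  -o hello_mpi hello_mpi.c   # C source
mpif77 -o solver    solver.f      # Fortran 77 source
```

The wrappers add the MPI include paths and libraries automatically, so no extra -I or -l flags are normally needed.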

What applications are available on Oscar? Many scientific and HPC software packages are already installed on Oscar, including Python, Perl, R, MATLAB, Mathematica, and Maple. Use the module avail command on Oscar to view the whole list or search for packages. See our manual page on Software to understand how software modules work. Additional packages can be requested by submitting a support ticket.
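The module commands referenced above follow the usual environment-modules pattern; a brief sketch (the package name is an example):

```shell
# Common module commands (package name is illustrative)
module avail            # list all available packages
module avail matlab     # search for a specific package
module load matlab      # add a package to your environment
module list             # show currently loaded modules
```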

What compilers are available on Oscar? By default, the gcc compiler is available when you log in to Oscar, providing the GNU compiler suite of gcc (C), g++ (C++), and gfortran (Fortran 77/90/95). We also provide compilers from Intel (intel module) and the Portland Group (pgi module). For more information, visit our manual page on Software.

How do I get information about finished jobs? The sacct command will list all of your completed jobs since midnight of the previous day (as well as running and queued jobs). You can pick an earlier start date with the -S option, e.g. sacct -S 2012-01-01.
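A few illustrative sacct invocations (the date and output columns are example choices):

```shell
# Illustrative sacct queries; dates and fields are examples only
sacct                                                # jobs since midnight of the previous day
sacct -S 2012-01-01                                  # jobs since a chosen start date
sacct -S 2012-01-01 -o JobID,JobName,State,Elapsed   # select which columns to print
```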

How much storage am I using? The myquota command on Oscar will print a summary of your usage on the home, data, and scratch file systems. For more information, see our manual page on File Systems.

My job keeps terminating unexpectedly with a "Killed" message, or without any errors. What happened? These are symptoms of not requesting enough memory for your job. The default memory allocation is about 3 GB. If your job is resource-intensive, you may need to explicitly request more. See the user manual for instructions on requesting memory and other resources.

How do I request a certain amount of memory per CPU? Specify the SLURM option --mem-per-cpu= in your script.
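For instance, a script requesting extra memory might include the following (4G is an example value, not a CCV default):

```shell
#!/bin/bash
# Illustrative memory request; values are examples only
#SBATCH -n 4
#SBATCH --mem-per-cpu=4G   # 4 GB per CPU, 16 GB total for this 4-task job

./my_program
```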

How do I link against a BLAS and LAPACK library? We recommend linking against the Intel Math Kernel Library (MKL), which provides both BLAS and LAPACK. The easiest way to do this on Oscar is to include the special environment variable $MKL at the end of your link line, e.g. gcc -o blas-app blas-app.c $MKL. For more complicated build systems, you may want to consult the MKL Link Line Advisor.

Why is my job taking so long to start? Common reasons include:
1. Overall system busy: when tens of thousands of jobs are submitted in total by all users, the time it takes SLURM to process them into the system may grow from the normal near-instant to a half-hour or more.
2. Specific resource busy: if you request a very specific resource (e.g., a particular processor), you must wait for that exact resource to become available, even while other similar resources go unused.
3. Specified resource not available: if you request something that is not, or may never be, available, your job will simply wait in the queue. For example, requesting 64 GB of RAM on a 64 GB node will never run, because the system needs at least 1 GB for itself; reduce your request to less than 64 GB.
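If a job seems stuck, SLURM itself can often tell you why. A brief sketch (the job ID is a placeholder):

```shell
# Show your pending jobs along with SLURM's reason code
squeue -u $USER -t PENDING

# Show the pending reason for one specific job (12345 is a placeholder ID)
squeue -j 12345 -o "%.10i %.30R"
```

Reason codes such as Resources or Priority distinguish a busy system from an unsatisfiable request.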


How do I transfer big files to/from Oscar? Please use the dedicated transfer server.
1. With scp:
From your machine to Oscar:

scp file1.txt
From Oscar to another machine:
ssh transfer
transfer> scp file1.txt userid@your-machine-name
2. Alternatively, Oscar has an endpoint for Globus Online that you can use to transfer files more efficiently. See our manual page on how to use Globus Online to transfer files.


What is XSEDE? XSEDE is an NSF-funded, nationwide collection of supercomputing systems that are available to researchers through merit-based allocations. It replaces what used to be called the TeraGrid. CCV participates in the Campus Champions program and has a small pool of compute time available for Brown researchers to test out XSEDE resources.

How do I request access to XSEDE? If you would like help getting started with XSEDE and obtaining a startup allocation, please contact us with a description of your project and which XSEDE systems you would like to access. We can also assist you in preparing an allocation request for longer-term access to larger resources.

How do I access XSEDE resources? See

Cloud HPC Options

The use of cloud resources for HPC varies according to your demands and circumstances. Cloud options are changing rapidly, both in service providers and in the services being offered. For those with short-term needs that don't demand the highest computational performance, a cloud option might be appropriate. For others, a local option customized to individual needs may be better. The cost of cloud services also varies quite a bit and includes not only compute time but data transfer charges. Other issues involve licensing, file synchronization, etc.

We are actively investigating a number of options to connect Brown users seamlessly to suitable cloud services, and we are collecting this information for publication on the CIS website as part of the research services available. At this point, the best course of action is to request an individual consultation to help address your specific needs. Please send email to support@ccv.