CIS Computing & Information Services

Getting Started

This guide assumes you have an Oscar username and password. To request an account, see Create an Account.


Oscar is the CCV cluster. It has two login nodes and several hundred compute nodes. The login nodes are shared among all users of the system. Running computationally or memory-intensive programs on a login node slows the system down for everyone, and any process taking up too much CPU or memory on a login node will be killed.

You can use the login nodes to compile your code, manage files, and launch jobs on the compute nodes. Do not run MATLAB on the login nodes.

Oscar runs the Linux operating system. General Linux documentation is available from The Linux Documentation Project. We recommend you read up on basic Linux commands before using Oscar. CCV users can mount the Oscar filesystem on their own Windows, Mac, or Linux system using CIFS.

Connecting to Oscar for the first time

To log in to Oscar you need Secure Shell (SSH) on your computer. Mac and Linux machines normally have SSH available; Windows users need to install an SSH client. We recommend PuTTY, a free SSH client for Windows. To log in to Oscar, open a terminal and type

ssh <username>

where <username> is your CCV account user name. The first time you connect to Oscar you will see a message like:

The authenticity of host ' (' can't be established.
RSA key fingerprint is SHA256:Nt***************vL3cH7A.
Are you sure you want to continue connecting (yes/no)? 

You can type yes. You will then be prompted for your password (the one you received via text message when you set up your CCV account). Note that nothing will appear on the screen as you type your password; type it in and press Enter. You will then be in your home directory on Oscar, and your terminal will show something like this:

mycomputer:~ mhamilton$ ssh's password: 
Last login: Thu Nov  3 15:41:04 2016 from
Welcome to Oscar! This login node is shared among many users: please be
courteous and DO NOT RUN large-memory or compute-intensive programs here!
In particular, do not run MATLAB jobs here. They will be automatically killed.
Instead, submit a batch job or start an interactive session with 'interact'.

For help using this system, please search our documentation at  (or contact '')

The per-user limit for the Oscar queue is currently:
 192 cores for premium accounts
  16 cores for exploratory accounts

module: loading 'centos-libs/6.5'
module: loading 'centos-updates/6.3'
module: loading 'intel/2013.1.106'
module: loading 'java/7u5'
module: loading 'python/2.7.3'
module: loading 'perl/5.18.2'
module: loading 'matlab/R2014a'
[mhamilton@login001 ~]$ 

Congratulations, you are now on one of the Oscar login nodes.
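Once you have confirmed that the login works, you can save typing on later connections by adding an entry to your SSH configuration file. This is an optional sketch: the host name below is a placeholder for the Oscar login address, and <username> stands for your CCV user name.

```
# ~/.ssh/config  (placeholder values: substitute the Oscar login host
# and your own CCV user name)
Host oscar
    HostName <oscar-login-host>
    User <username>
```

With this entry in place, typing ssh oscar is equivalent to the full command shown above.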

Note: Please do not run computations or simulations on the login nodes, because they are shared with other users. You can use the login nodes to compile your code, manage files, and launch jobs on the compute nodes.


The password texted to you when your account is created is used for both your Oscar login (SSH) and CIFS. You can use the commands below to change your passwords. Note: changing one of your passwords does not affect the other.

To change your Oscar login password, use the command:

$ passwd

You will be asked to enter your old password, then your new password twice.

To change your CIFS password, use the command:

$ smbpasswd

Note: if you request a password reset from CCV, both the SSH and CIFS passwords will be reset.

File system

Users on Oscar have three places to store files.

  • home
  • scratch
  • data

Note: guest and class accounts may not have a data directory. Users who are members of more than one research group may have access to multiple data directories.

To see how much space you have, use the command myquota. Below is example output:

                   Block Limits                              |           File Limits              
Type    Filesystem           Used    Quota   HLIMIT    Grace |    Files    Quota   HLIMIT    Grace
USR     home               8.401G      10G      20G        - |    61832   524288  1048576        -
USR     scratch              332G     512G      12T        - |    14523   323539  4194304        -
FILESET data+apollo        11.05T      20T      24T        - |   459764  4194304  8388608        -

A good practice is to configure your application to read any initial input data from ~/data and write all output into ~/scratch. Then, when the application has finished, move or copy the data you would like to keep from ~/scratch to ~/data. For more information on which directories are backed up, and for best practices on reading and writing files, see Managing Files. You can exceed your quota, up to the hard limit, for a grace period of 14 days; this gives you time to manage your files. When the grace period expires you will be unable to write any files until you are back under quota.
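The pattern above can be sketched with ordinary shell commands. This example uses local placeholder directories so it can be tried anywhere; on Oscar the real locations are ~/data and ~/scratch.

```shell
# Placeholder directories standing in for ~/data and ~/scratch on Oscar.
DATA=./data_example
SCRATCH=./scratch_example
mkdir -p "$SCRATCH/run1" "$DATA/results"

# The application reads input from $DATA and writes its output to $SCRATCH ...
echo "simulated output" > "$SCRATCH/run1/out.txt"

# ... and when it finishes, results worth keeping are copied back to $DATA.
cp "$SCRATCH/run1/out.txt" "$DATA/results/"
ls "$DATA/results"
```

Only ~/data is intended for long-term storage, so anything left in ~/scratch should be treated as disposable.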

Software modules

CCV uses the PyModules package for managing the software environment on Oscar. To see the software available on Oscar, use the command module avail. The command module list shows which modules you have loaded. Below is an example of checking which versions of the module 'workshop' are available and loading a given version.

[mhamilton@login001 ~]$ module avail workshop
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ name: workshop*/* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
workshop/1.0  workshop/2.0  
[mhamilton@login001 ~]$ module load workshop/2.0
module: loading 'workshop/2.0'
[mhamilton@login001 ~]$ 

For a list of all PyModules commands, see Software Modules. If you would like software to be installed on Oscar, email CCV support.

Using a Desktop on Oscar

You can connect remotely to a graphical desktop environment on Oscar using CCV's VNC client. The CCV VNC client integrates with the scheduling system on Oscar to create dedicated, persistent VNC sessions that are tied to a single user.

Using VNC, you can run applications and scripting languages like MATLAB, Mathematica, Maple, Python, Perl, or the R statistical package on our high-performance and large-memory systems. You also have fast access to CCV's high-performance local file system, so you can access and analyze large data files without having to copy them to your own system.

For download and installation instructions, see the VNC client documentation.

Running Jobs

When you log in through SSH, you are on Oscar's login nodes. You should not (and would not want to) run your programs on these nodes; they are shared by all active users for tasks like managing files and compiling programs.

With so many active users, an HPC cluster has to use software called a "job scheduler" to assign compute resources for running programs on the compute nodes. When you submit a job (a set of commands) to the scheduler along with a description of the resources you need, the scheduler puts your job in a queue. The job runs when the required resources (cores, memory, etc.) become available. Because Oscar is a shared resource, you must be prepared to wait: your job will not necessarily start straight away.

Oscar uses the SLURM job scheduler. Batch jobs are the preferred way to run jobs: all commands are listed in a "batch script" along with the required resources (number of cores, wall-time, etc.). There is also a way to run programs interactively. Either way, the more resources you request, the longer your job may wait in the queue.

For all the information on how to submit jobs on Oscar, see Running Jobs. There is also extensive documentation on the web on using SLURM. For instance, here is a quick start guide.

Where to get help