CRC provides a few helper scripts intended to simplify the user experience. These include:

Checking disk quota with crc-quota

[username@login1 ~]$ crc-quota
User: 'username'
-> ihome: 70.11 GB / 75.0 GB

Group: 'groupname'
-> bgfs: 35.91 GB / 5.0 TB
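
If you only need one of these lines, for example in a shell script that checks group storage before staging large data, the output can be filtered with plain grep (this is standard shell, not a crc-quota option):

[username@login1 ~]$ crc-quota | grep -A 1 "Group:"
Group: 'groupname'
-> bgfs: 35.91 GB / 5.0 TB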

View your group's SLURM allocation details with crc-usage. Running the command without any arguments shows the details for your user account's default allocation.

[username@login1 ~]$ crc-usage
+------------------------------------------------------------------------+
|                    MyUserGroup Proposal Information                    |
+----------------------------+--------------------------+----------------+
|     Proposal End Date:     |        05/04/2024        |                |
+----------------------------+--------------------------+----------------+
|        Proposal ID:        |           897            |                |
+----------------------------+--------------------------+----------------+
|                            |                          |                |
+----------------------------+--------------------------+----------------+
|        Cluster: SMP        |     Total SUs: 35000     |                |
+----------------------------+--------------------------+----------------+
|            User            |         SUs Used         |     % Used     |
+----------------------------+--------------------------+----------------+
|           User1            |           128            |      N/A       |
|           User2            |           5000           |       14       |
+----------------------------+--------------------------+----------------+
|      Overall for SMP       |           5128           |       0        |
+----------------------------+--------------------------+----------------+
|                            |                          |                |
+----------------------------+--------------------------+----------------+
|        Cluster: GPU        |     Total SUs: 25000     |                |
+----------------------------+--------------------------+----------------+
|                            |                          |                |
+----------------------------+--------------------------+----------------+
|        Cluster: HTC        |     Total SUs: 25000     |                |
+----------------------------+--------------------------+----------------+
|            User            |         SUs Used         |     % Used     |
+----------------------------+--------------------------+----------------+
|           User1            |           128            |      N/A       |
+----------------------------+--------------------------+----------------+
|      Overall for HTC       |           128            |       0        |
+----------------------------+--------------------------+----------------+
|                            |                          |                |
+----------------------------+--------------------------+----------------+
|        Floating SUs        |      SUs Remaining       |     % Used     |
+----------------------------+--------------------------+----------------+
|       *Floating SUs        |                          |                |
|       are applied on       |                          |                |
|       any cluster to       |            0*            |       0        |
|     cover usage above      |                          |                |
|         Total SUs          |                          |                |
+----------------------------+--------------------------+----------------+
|      Aggregate Usage       |            0             |                |
+----------------------------+--------------------------+----------------+
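
crc-usage reports on your default Slurm account unless told otherwise. If you belong to more than one account, you can confirm which one is the default with standard Slurm tooling (sacctmgr is part of Slurm itself, not a CRC wrapper); crc-usage --help should describe how to ask about a non-default account:

[username@login1 ~]$ sacctmgr show user $USER format=User,DefaultAccount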

Look for available resources across the clusters with crc-idle.

[username@login1 ~]$ crc-idle
Cluster: smp, Partition: smp
============================
  2 nodes w/   1 idle cores
  5 nodes w/   2 idle cores
  1 nodes w/   3 idle cores
  9 nodes w/   4 idle cores
  2 nodes w/   5 idle cores
 11 nodes w/   6 idle cores
 30 nodes w/   7 idle cores
 35 nodes w/   8 idle cores
  1 nodes w/   9 idle cores
 11 nodes w/  12 idle cores
  4 nodes w/  15 idle cores
  1 nodes w/  16 idle cores
  1 nodes w/  18 idle cores
  1 nodes w/  21 idle cores
  1 nodes w/  22 idle cores
 20 nodes w/  23 idle cores
Cluster: smp, Partition: high-mem
=================================
  6 nodes w/   8 idle cores
  2 nodes w/  12 idle cores
Cluster: smp, Partition: legacy
===============================
  1 nodes w/   1 idle cores
  1 nodes w/   8 idle cores
Cluster: gpu, Partition: gtx1080
================================
  3 nodes w/   1 idle GPUs
  1 nodes w/   2 idle GPUs
  4 nodes w/   3 idle GPUs
  4 nodes w/   4 idle GPUs
Cluster: gpu, Partition: titanx
===============================
  1 nodes w/   1 idle GPUs
  1 nodes w/   2 idle GPUs
  1 nodes w/   3 idle GPUs
  3 nodes w/   4 idle GPUs
Cluster: gpu, Partition: k40
============================
  1 nodes w/   2 idle GPUs
Cluster: gpu, Partition: v100
=============================
 No idle GPUs
Cluster: mpi, Partition: opa
============================
 No idle cores
Cluster: mpi, Partition: opa-high-mem
=====================================
 No idle cores
Cluster: mpi, Partition: ib
===========================
 14 nodes w/  20 idle cores
Cluster: htc, Partition: htc
============================
  2 nodes w/   2 idle cores
  1 nodes w/   5 idle cores
  1 nodes w/   6 idle cores
  1 nodes w/  10 idle cores
  3 nodes w/  11 idle cores
  1 nodes w/  12 idle cores
 20 nodes w/  16 idle cores
  4 nodes w/  24 idle cores
  1 nodes w/  25 idle cores
  1 nodes w/  37 idle cores
  5 nodes w/  48 idle cores
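
The full listing is long, so it can be convenient to narrow it to the partition you care about with plain grep (the -A count only controls how many lines of the section are printed):

[username@login1 ~]$ crc-idle | grep -A 5 "Partition: gtx1080"
Cluster: gpu, Partition: gtx1080
================================
  3 nodes w/   1 idle GPUs
  1 nodes w/   2 idle GPUs
  4 nodes w/   3 idle GPUs
  4 nodes w/   4 idle GPUs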

You can request an interactive session on a compute node with crc-interactive.

[username@login1 ~]$ crc-interactive --help
 crc-interactive -- An interactive Slurm helper
Usage:
    crc-interactive (-s | -g | -m | -i | -d) [-hvzo] [-t <time>] [-n <num-nodes>]
        [-p <partition>] [-c <num-cores>] [-u <num-gpus>] [-r <res-name>]
        [-b <memory>] [-a <account>] [-l <license>] [-f <feature>]

Positional Arguments:
    -s --smp                        Interactive job on smp cluster
    -g --gpu                        Interactive job on gpu cluster
    -m --mpi                        Interactive job on mpi cluster
    -i --invest                     Interactive job on invest cluster
    -d --htc                        Interactive job on htc cluster
Options:
    -h --help                       Print this screen and exit
    -v --version                    Print the version of crc-interactive
    -t --time <time>                Run time in hours, 1 <= time <= 12 [default: 1]
    -n --num-nodes <num-nodes>      Number of nodes [default: 1]
    -p --partition <partition>      Specify non-default partition
    -c --num-cores <num-cores>      Number of cores per node [default: 1]
    -u --num-gpus <num-gpus>        Used with -g only, number of GPUs [default: 0]
    -r --reservation <res-name>     Specify a reservation name
    -b --mem <memory>               Memory in GB
    -a --account <account>          Specify a non-default account
    -l --license <license>          Specify a license
    -f --feature <feature>          Specify a feature, e.g. `ti` for GPUs
    -z --print-command              Simply print the command to be run
    -o --openmp                     Run using OpenMP style submission

[username@login1 ~]$ crc-interactive -g -p titanx -n 1 -c 1 -u 1 -t 12
srun: job 260065 queued and waiting for resources
srun: job 260065 has been allocated resources

[username@gpu-stage06 ~]$ nvidia-smi
Wed Jan 26 08:42:04 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce GTX TIT...  On   | 00000000:02:00.0 Off |                  N/A |
| 48%   82C    P2   236W / 250W |    794MiB / 12212MiB |     99%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX TIT...  On   | 00000000:03:00.0 Off |                  N/A |
| 22%   28C    P8    16W / 250W |      1MiB / 12212MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX TIT...  On   | 00000000:81:00.0 Off |                  N/A |
| 22%   28C    P8    15W / 250W |      1MiB / 12212MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  GeForce GTX TIT...  On   | 00000000:82:00.0 Off |                  N/A |
| 22%   27C    P8    14W / 250W |      1MiB / 12212MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A     23796      C   pmemd.cuda                        790MiB |
+-----------------------------------------------------------------------------+

[username@gpu-stage06 ~]$ exit
exit

[username@login1 ~]$
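
A couple of other invocations, built only from the options shown in the help text above; the srun command printed by -z depends on your account and the cluster defaults, so its output is not reproduced here:

[username@login1 ~]$ crc-interactive -s -t 4 -c 8 -b 16    # 4-hour smp session with 8 cores and 16 GB of memory
[username@login1 ~]$ crc-interactive -g -p titanx -u 1 -z  # print the srun command that would be used, without starting a session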

There are a few other helper scripts that you can see by typing crc- followed by two presses of the Tab key.

[username@login1 ~]$ crc-
crc-idle          crc-job-stats     crc-quota         crc-squeue        crc-usage
crc-interactive   crc-proposal-end  crc-scancel       crc-sinfo         crc-sus
[username@login1 ~]$ crc-

Job Details / Management

crc-job-stats - Include at the bottom of a job script to output the details of the job after it completes (see the sample job script after this list)

crc-squeue - Show details for active jobs submitted by your user account

crc-scancel - Cancel a job using its job ID (as shown by crc-squeue)
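
Here is a minimal sketch of where crc-job-stats fits in a batch script; the cluster, partition, time limit, and ./my_program executable are placeholders to replace with your own workload:

#!/bin/bash
#SBATCH --job-name=example
#SBATCH --clusters=smp
#SBATCH --partition=smp
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=01:00:00

# Your actual workload goes here
./my_program

# Print this job's details once the workload above has finished
crc-job-stats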

Allocation Details

crc-proposal-end - Output the end date of your account's default allocation

crc-sus - Show a concise view of the service units awarded on each cluster
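
Both commands report on your default allocation when run without arguments; their output is specific to each group, so it is not reproduced here:

[username@login1 ~]$ crc-proposal-end
[username@login1 ~]$ crc-sus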

Cluster / Partition Details

crc-show-config - Show configuration information about the partitions for each cluster

crc-sinfo - Show the status of the partitions of each cluster and their nodes
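
Before submitting to an unfamiliar partition, these two pair well with crc-idle: crc-show-config describes what each partition offers, while crc-sinfo shows its current state. Their output is long and changes constantly, so it is not reproduced here; check each command's --help (assuming the same convention as crc-interactive) for any cluster or partition arguments:

[username@login1 ~]$ crc-show-config
[username@login1 ~]$ crc-sinfo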

The best way to get help on a specific issue is to submit a ticket at https://crc.pitt.edu/submit-ticket. You should log in to the CRC website with your Pitt credentials first.