# User Tools
The VACC support team has developed several tools, available on all VACC clusters, that help track accounting information and Slurm job statistics.
## my_help
Running the command my_help prints a list of the user tools with a brief summary of each one's function.
```
[testuser@vacc-login1 ~]$ my_help
Command             Description
-------             -----------
my_help             displays short summary of available "my" commands

                    For detailed help with "my" commands, use the "-h" option
                    For example:  my_job_statistics -h
                    would display detailed help for the my_job_statistics command.

my_accounts         lists cluster resource accounts
my_compute_usage    print compute hours and calculated compute units for a given
                    account with the ability to specify the desired timeframe
my_gpfs_quota       print the group quota (storage and files) for your primary group,
                    displays both group usage and your individual usage
my_job_header       include in job scripts to capture environment variables
my_job_statistics   shows detailed job information and resource efficiency
```
## my_accounts
my_accounts shows which Slurm account(s) your VACC account can submit jobs under. You can also append a username to query another user's Slurm accounts. If you are in multiple groups, the last account listed is your "primary" account, the one used when no account is specified at job submission.
```
[testuser@vacc-login1 ~]$ my_accounts
Account
--------------------
test-secondary-slurm-account-0
test-secondary-slurm-account-1
testaccount

[testuser@vacc-login1 ~]$ my_accounts otheruser
Account
--------------------
test-secondary-slurm-account-1
otheruseraccount
```
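To charge a job to one of your non-primary accounts, pass the account name to Slurm explicitly. A sketch using the hypothetical account and script names above:

```
[testuser@vacc-login1 ~]$ sbatch --account=test-secondary-slurm-account-0 myjob.sh
```

The same can be done inside a job script with an `#SBATCH --account=...` directive.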
## my_compute_usage
my_compute_usage reports the CPU hour, GPU hour, and Compute Unit usage of a VACC account. The report defaults to the past year, but any timeframe can be specified. It prints the total usage of the account, as well as the statistics for each sponsored user.
| Command Line Argument | Description |
|---|---|
| -s, --starttime | Specifies the start time of the report. Valid date formats are mm/dd/yy, mm/dd/yyyy, or yyyy-mm-dd (e.g., 01/01/24, 01/01/2024, or 2024-01-01). By default, the start of the report is 365 days ago. |
| -e, --endtime | Specifies the end time of the report. Valid date formats are mm/dd/yy, mm/dd/yyyy, or yyyy-mm-dd (e.g., 01/01/24, 01/01/2024, or 2024-01-01). By default, the end of the report is the current date. |
| -y, --year | The calendar or fiscal year for the report. This option is mutually exclusive with manually specifying a start or end time. Acceptable formats for a fiscal year are FYyy or FYyyyy (e.g., FY23 or FY2023); for a calendar year, yy or yyyy (e.g., 23 or 2023). If the current calendar or fiscal year is chosen, results are shown for the year to date. |
| -a, --account | The Slurm account queried for the report. By default, this is the PI group of the user running the command. |
| -c, --csv | Outputs CSV rather than a readable table. The program prints to STDOUT, so to save the results, redirect the output to a file, e.g., my_compute_usage --csv > results.csv |
| --cpu | The number of Compute Units per CPU hour. As described in the VACC's Cost/Payment article, the default is 1 Compute Unit per CPU hour; --cpu 10 would raise the calculated rate to 10 Compute Units per CPU hour. |
| --gpu | The number of Compute Units per GPU hour. As described in the same article, the default is 60 Compute Units per GPU hour; --gpu 10 would lower the calculated rate to 10 Compute Units per GPU hour. |
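The options can be combined. For instance, a fiscal-year CSV report for a specific account might look like the following sketch (the account and output file names are illustrative):

```
[testuser@vacc-login1 ~]$ my_compute_usage -y FY24 -a testaccount --csv > fy24_usage.csv
```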
```
[testuser@vacc-login1 ~]$ my_compute_usage
Getting results from the Slurm job database.
This may take a moment...

CPU, GPU, and CU usage for the Slurm account testaccount
for the period: 2023-05-09 to 2024-05-08

---------------------------Usage per User----------------------------------
Account      | Username  | CPU Hours used | GPU Hours used | CUs used
testaccount  | testuser  | 10.00          | 5.50           | 340.00
testaccount  | testuser2 | 15.00          | 0.00           | 15.00
----------------------Usage total for Account------------------------------
Account      |           | CPU Hours used | GPU Hours used | CUs used
testaccount  |           | 25.00          | 5.50           | 355.00
```
## my_gpfs_quota
my_gpfs_quota displays storage usage and quota information for your PI group, along with your individual usage within the group.
```
[testuser@vacc-login1 ~]$ my_gpfs_quota
Group quota for your primary group: pi-testuser

Space limits
------------------------------------------------------------------------------
Filesystem  type    blocks    quota     limit     in_doubt  grace
gpfs1       GRP     20.51G    2T        4T        0         none
gpfs2       GRP     12.96G    4T        8T        0         none
gpfs3       GRP     0         0         0         0         none
------------------------------------------------------------------------------

File Limits
------------------------------------------------------------------------------
Filesystem  type    files     quota     limit     in_doubt  grace   Remarks
gpfs1       GRP     86978     1048576   1536000   0         none
gpfs2       GRP     44383     1536000   3145728   0         none
gpfs3       GRP     1         0         0         0         none
------------------------------------------------------------------------------

SPACE occupied by testuser within the pi-testuser group
------------------------------------------------------------------------------
Filesystem  blocks
gpfs1       20.22G
gpfs2       12.96G
gpfs3       0
------------------------------------------------------------------------------

FILES created by testuser within the pi-testuser group
------------------------------------------------------------------------------
Filesystem  files
gpfs1       80526
gpfs2       44381
gpfs3       1
------------------------------------------------------------------------------
```

NOTE: Quotas are based on your group, so the figures in the first block are for your group. Your personal usage is shown in the second block.
## my_job_header
my_job_header, when added to a job script, writes information about the job's environment to the job's output file, which can be useful for debugging and optimizing your jobs.
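A minimal job script using it might look like the following sketch (the partition, resource requests, and command names are illustrative; my_job_header is assumed to be invoked near the top of the script, before the real work starts):

```
#!/bin/bash
#SBATCH --partition=short
#SBATCH --time=00:10:00
#SBATCH --ntasks=6
#SBATCH --mem=10G

# Record the job environment in the output file for later debugging
my_job_header

# ... your actual commands follow ...
```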
```
Job information
#-------------------------------------------------------------------
SLURM_SUBMIT_HOST        vacc-login4.cluster
SLURM_JOB_ACCOUNT        pi-testgroup
SLURM_JOB_PARTITION      short
SLURM_JOB_NAME           eCOG_multi_cpu
SLURM_JOBID              3628483
SLURM_NODELIST           node413
SLURM_JOB_NUM_NODES      1
SLURM_NTASKS             6
SLURM_TASKS_PER_NODE     6
SLURM_CPUS_PER_TASK      1
SLURM_NPROCS             6
SLURM_MEM_PER_NODE       10240 M
SLURM_SUBMIT_DIR         testdir

scheduling priority                 (-e) 0
pending signals                     (-i) 6188475
max memory size             (kbytes, -m) 10485760
open files                          (-n) 131072
pipe size                (512 bytes, -p) 8
POSIX message queues         (bytes, -q) 819200
real-time priority                  (-r) 0
max user processes                  (-u) 6188475

Running on node413.cluster at Fri Apr 3 04:20:53 PM EDT 2026
Your job output begins below the line
#-------------------------------------------------------------------
```
## my_job_statistics
my_job_statistics JobID shows detailed job information and resource efficiency, which is helpful for tuning your resource requests on similar jobs in the future. The command seff JobID is also available and provides similar information on completed jobs.
```
[testuser@vacc-login1 ~]$ my_job_statistics 3628483
Job summary for JobID 3628483 for testuser using the pi-testgroup account
Job name: my_job_name
--------------------------------------------------------------------------
Job submit time:      04/03/2026 16:20:52
Job start time:       04/03/2026 16:20:53
Job end time:         04/03/2026 16:25:12
Job running time:     00:04:19
Job walltime effic.:  43.17% (00:04:19 elapsed out of 00:10:00 requested)
State:                COMPLETED
Exit code:            0
On nodes:             node413
                      (1 node: 6 cores per node)
CPU Utilized:         00:00:22
CPU Efficiency:       1.42% of 00:25:54 total CPU time (cores * walltime)
Memory Utilized:      1.21 GB
Memory Efficiency:    12.10% of 10.00 GB
--------------------------------------------------------------------------

[testuser@vacc-login1 ~]$ seff 3628483
Job ID: 3628483
Cluster: vacc2
User/Group: testuser/pi-testgroup
State: COMPLETED (exit code 0)
Nodes: 1
Cores per node: 6
CPU Utilized: 00:00:22
CPU Efficiency: 1.42% of 00:25:54 core-walltime
Job Wall-clock time: 00:04:19
Memory Utilized: 1.21 GB
Memory Efficiency: 12.10% of 10.00 GB (10.00 GB/node)
```
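Statistics like these can guide the next submission: the example job above used about 1.21 GB of the 10 GB requested and finished in under half the requested walltime, so a similar job could safely request less. A sketch of adjusted directives (the exact values are illustrative, and whether fewer cores would help depends on the workload):

```
#SBATCH --time=00:06:00   # job ran in ~4.5 minutes; leave some headroom
#SBATCH --mem=2G          # job used ~1.2 GB of the 10 GB requested
```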