Important

New docs are available at https://docs.aws.amazon.com/parallelcluster

All new features, starting with version 2.4.0, are documented there.

awsbsub

Submits jobs to the cluster’s Job Queue.

usage: awsbsub [-h] [-jn JOB_NAME] [-c CLUSTER] [-cf] [-w WORKING_DIR]
               [-pw PARENT_WORKING_DIR] [-if INPUT_FILE] [-p VCPUS]
               [-m MEMORY] [-e ENV] [-eb ENV_BLACKLIST] [-r RETRY_ATTEMPTS]
               [-t TIMEOUT] [-n NODES] [-a ARRAY_SIZE] [-d DEPENDS_ON]
               [command] [arguments [arguments ...]]
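
For example, a minimal submission might look like the following sketch (the job name, cluster name, and command are illustrative):

    awsbsub -jn hello-job -c mycluster "sleep 30"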

Positional Arguments

command

The command to submit (it must be available on the compute instances) or the file name to be transferred (see the --command-file option).

Default: standard input (stdin)

arguments

Arguments for the command or the command-file (optional).
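
Because the command defaults to standard input, you can also pipe a script body into awsbsub; a sketch (job name and file name illustrative):

    cat myscript.sh | awsbsub -jn stdin-job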

Named Arguments

-jn, --job-name

The name of the job. The first character must be alphanumeric; up to 128 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed.

-c, --cluster

The cluster to use.

-cf, --command-file

Indicates that the command is a file to be transferred to the compute instances.

Default: False
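
For example, to transfer a local script and run it on the compute instances (file name illustrative):

    awsbsub -jn script-job -cf ./run_simulation.sh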

-w, --working-dir

The folder to use as the job working directory. If not specified, the job is executed in the job-<AWS_BATCH_JOB_ID> subfolder of the user’s home directory.

-pw, --parent-working-dir

The parent folder for the job working directory. If not specified, it is the user’s home directory. A subfolder named job-<AWS_BATCH_JOB_ID> is created in it. This is an alternative to the --working-dir parameter.

-if, --input-file

A file to be transferred to the compute instances, into the job working directory. It can be specified multiple times.
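
For example, to run a command file and stage input files into a working directory under a shared parent folder (paths and file names illustrative):

    awsbsub -jn staged-job -cf -pw /shared -if input1.dat -if input2.dat ./process.sh
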
-p, --vcpus

The number of vCPUs to reserve for the container. When used in conjunction with --nodes, it specifies the number of vCPUs per node.

Default: 1

-m, --memory

The hard limit (in MiB) of memory to present to the job. If your job attempts to exceed the memory specified here, it is killed.

Default: 128
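
For example, to reserve 4 vCPUs and 2048 MiB of memory for a job (values and command illustrative):

    awsbsub -jn big-job -p 4 -m 2048 "./solver --threads 4"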

-e, --env

Comma-separated list of environment variable names to export to the job environment. Use ‘all’ to export all environment variables, except those listed in the --env-blacklist parameter and variables starting with the PCLUSTER_* or AWS_* prefix.

-eb, --env-blacklist

Comma-separated list of environment variable names NOT to export to the job environment. Default: HOME, PWD, USER, PATH, LD_LIBRARY_PATH, TERM, TERMCAP.
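
For example, to export all environment variables except a few (the variable names are illustrative, and `env` simply prints the resulting job environment):

    awsbsub -jn env-job -e all -eb TMPDIR,DISPLAY env
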
-r, --retry-attempts

The number of times to move a job to the RUNNABLE status. You may specify between 1 and 10 attempts. If the number of attempts is greater than one, the job is retried on failure until it has moved to RUNNABLE that many times.

Default: 1

-t, --timeout

The time duration in seconds (measured from the job attempt’s startedAt timestamp) after which AWS Batch terminates your jobs if they have not finished. It must be at least 60 seconds.
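
For example, to retry a job up to 3 times and terminate it after one hour (values and script name illustrative):

    awsbsub -jn flaky-job -r 3 -t 3600 -cf ./unstable_task.sh
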
-n, --nodes

The number of nodes to reserve for the job. Specifying this option enables Multi-Node Parallel submission.
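
For example, a Multi-Node Parallel job reserving 4 nodes with 8 vCPUs each (script name illustrative):

    awsbsub -jn mnp-job -n 4 -p 8 -cf ./run_mpi.sh
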
-a, --array-size

The size of the array. It can be between 2 and 10,000. If you specify array properties for a job, it becomes an array job.
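
For example, to submit an array job with 10 children, each of which can read its own index from the AWS_BATCH_JOB_ARRAY_INDEX environment variable that AWS Batch sets:

    awsbsub -jn array-job -a 10 'echo Processing chunk $AWS_BATCH_JOB_ARRAY_INDEX'
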
-d, --depends-on

A semicolon-separated list of dependencies for the job. A job can depend on a maximum of 20 jobs. You can specify a SEQUENTIAL type dependency without specifying a job ID for array jobs, so that each child array job completes sequentially, starting at index 0. You can also specify an N_TO_N type dependency with a job ID for array jobs, so that each index child of this job must wait for the corresponding index child of each dependency to complete before it can begin. Syntax: jobId=<string>,type=<string>;…
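
For example, to start a job only after two earlier jobs complete, and to run an array job’s children sequentially (the job IDs are illustrative; the list is quoted so the shell does not interpret the semicolon):

    awsbsub -jn dependent-job -d "jobId=1111-aaaa;jobId=2222-bbbb" "echo parents finished"
    awsbsub -jn seq-array -a 5 -d type=SEQUENTIAL 'echo step $AWS_BATCH_JOB_ARRAY_INDEX'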