salloc(1) - Obtain a SLURM job allocation (a set of nodes), execute a command, and then release the allocation when the command is finished
-A, --account=<account>
       Charge  resources  used by this job to specified account.  The account is an arbitrary string. The
       account name may be changed after job submission using the scontrol command.
--acctg-freq=<seconds>
       Define  the  job  accounting  sampling   interval.    This   can   be   used   to   override   the
       JobAcctGatherFrequency  parameter  in  SLURM's  configuration  file,  slurm.conf.  A value of zero
        disables the periodic job sampling and provides accounting information only on job
       termination (reducing SLURM interference with the job).
-B --extra-node-info=<sockets[:cores[:threads]]>
       Request a specific allocation of resources with details as to the number and type of computational
       resources within a cluster: number of sockets (or physical processors) per node, cores per socket,
       and  threads per core.  The total amount of resources being requested is the product of all of the
       terms.  Each value specified is  considered  a  minimum.   An  asterisk  (*)  can  be  used  as  a
       placeholder  indicating  that  all  available  resources of that type are to be utilized.  As with
       nodes, the individual levels can also be specified in separate options if desired:
           --sockets-per-node=<sockets>
           --cores-per-socket=<cores>
           --threads-per-core=<threads>
        If the task/affinity plugin is enabled, then specifying an allocation in this manner also sets a
        default --cpu_bind option of threads if the -B option specifies a thread count, otherwise an
        option of cores if a core count is specified, otherwise an option of sockets.  If SelectType is
        configured to select/cons_res, it must have a parameter of CR_Core, CR_Core_Memory, CR_Socket, or
        CR_Socket_Memory for this option to be honored.  This option is not supported on BlueGene systems
        (where the select/bluegene plugin is configured).  If not specified, scontrol show job will
        display 'ReqS:C:T=*:*:*'.
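
        For example (counts and the program name are illustrative), the following requests nodes with at
        least two sockets, four cores per socket, and one thread per core, for a minimum of
        2 x 4 x 1 = 8 CPUs per node:
           salloc -N 1 -B 2:4:1 ./my_program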
--begin=<time>
        Submit the job request to the SLURM controller immediately, like normal, but tell the controller
        to defer the allocation of the job until the specified time.

       Time  may  be  of the form HH:MM:SS to run a job at a specific time of day (seconds are optional).
       (If that time is already past, the next day is assumed.)  You may also specify midnight, noon,  or
       teatime  (4pm) and you can have a time-of-day suffixed with AM or PM for running in the morning or
       the evening.  You can also say what day the job will be run, by specifying  a  date  of  the  form
        MMDDYY or MM/DD/YY or YYYY-MM-DD.  Combine date and time using the following format
       YYYY-MM-DD[THH:MM[:SS]]. You can also give times like now + count time-units, where the time-units
       can  be  seconds  (default),  minutes, hours, days, or weeks and you can tell SLURM to run the job
       today with the keyword today and to run the job tomorrow with the keyword tomorrow.  The value may
       be changed after job submission using the scontrol command.  For example:
          --begin=16:00
          --begin=now+1hour
          --begin=now+60           (seconds by default)
          --begin=2010-01-20T12:34:00

       Notes on date/time specifications:
        -  Although  the  'seconds' field of the HH:MM:SS time specification is allowed by the code, note
       that the poll time of the SLURM scheduler is not precise enough to guarantee dispatch of  the  job
       on  the  exact second.  The job will be eligible to start on the next poll following the specified
       time. The exact poll interval depends on the SLURM scheduler (e.g., 60 seconds  with  the  default
       sched/builtin).
        - If no time (HH:MM:SS) is specified, the default is (00:00:00).
        -  If  a  date is specified without a year (e.g., MM/DD) then the current year is assumed, unless
       the combination of MM/DD and HH:MM:SS has already passed for that year, in  which  case  the  next
       year is used.
--bell
        Force salloc to ring the terminal bell when the job allocation is granted (and only if stdout is
        a tty).  By default, salloc only rings the bell if the allocation is pending for more than ten
        seconds (and only if stdout is a tty). Also see the option --no-bell.
--comment=<string>
       An arbitrary comment.
-C, --constraint=<list>
       Specify  a list of constraints.  The constraints are features that have been assigned to the nodes
       by the slurm administrator.  The list of constraints may include multiple  features  separated  by
       ampersand  (AND) and/or vertical bar (OR) operators.  For example: --constraint="opteron&video" or
       --constraint="fast|faster".  In the first example, only nodes having both  the  feature  "opteron"
       AND  the  feature  "video"  will be used.  There is no mechanism to specify that you want one node
       with feature "opteron" and another node with feature "video" in case no node  has  both  features.
       If  only  one of a set of possible options should be used for all allocated nodes, then use the OR
       operator    and    enclose    the    options    within    square    brackets.     For     example:
       "--constraint=[rack1|rack2|rack3|rack4]" might be used to specify that all nodes must be allocated
       on a single rack of the cluster, but any of those four racks can be  used.   A  request  can  also
       specify  the number of nodes needed with some feature by appending an asterisk and count after the
       feature name.  For example "salloc --nodes=16 --constraint=graphics*4 ..."  indicates that the job
        requires 16 nodes and that at least four of those nodes must have the feature "graphics".
       Constraints with node counts may only be combined with  AND  operators.   If  no  nodes  have  the
       requested features, then the job will be rejected by the slurm job manager.
--contiguous
       If  set,  then the allocated nodes must form a contiguous set.  Not honored with the topology/tree
       or topology/3d_torus plugins, both of which can modify the node ordering.
--cores-per-socket=<cores>
       Restrict node selection to nodes with at least the specified number  of  cores  per  socket.   See
       additional information under -B option above when task/affinity plugin is enabled.
-c, --cpus-per-task=<ncpus>
       Advise the SLURM controller that ensuing job steps will require ncpus  number  of  processors  per
       task.  Without this option, the controller will just try to allocate one processor per task.

       For  instance,  consider  an  application  that  has 4 tasks, each requiring 3 processors.  If our
        cluster is comprised of quad-processor nodes and we simply ask for 12 processors, the controller
        might give us only 3 nodes.  However, by using the --cpus-per-task=3 option, the controller knows
       that each task requires 3 processors on the same node, and the controller will grant an allocation
       of 4 nodes, one for each of the 4 tasks.
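
        For example, the allocation described above might be requested as follows (the application name
        is a placeholder):
           salloc --ntasks=4 --cpus-per-task=3 srun ./my_app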
-d, --dependency=<dependency_list>
        Defer the start of this job until the specified dependencies have been satisfied.
       <dependency_list> is of the form  <type:job_id[:job_id][,type:job_id[:job_id]]>.   Many  jobs  can
       share  the  same  dependency and these jobs may even belong to different  users. The  value may be
       changed after job submission using the scontrol command.
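
        For example (the job id and command are illustrative), the afterok dependency type could be used
        to defer the allocation until job 1234 has completed successfully:
           salloc --dependency=afterok:1234 ./my_command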
-D, --chdir=<path>
        Change directory to path before beginning execution.
--exclusive
        The job allocation can not share nodes with other running jobs.  This is the opposite of --share;
        whichever option is seen last on the command line will be used.  The default shared/exclusive
       behavior  depends  on system configuration and the partition's Shared option takes precedence over
       the job's option.
-F, --nodefile=<node file>
        Much like --nodelist, but the list is contained in a file named node file.  The node names in
        the list may also span multiple lines in the file.  Duplicate node names in the file will be
       ignored.  The order of the node names in the list is not important; the node names will be  sorted
       by SLURM.
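
        For example, a node file might contain node names such as the following (names are illustrative),
        with one or more names per line:
           node01 node02 node03
           node04
           node05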
--gid=<group>
       If  salloc  is run as root, and the --gid option is used, submit the job with group's group access
       permissions.  group may be the group name or the numerical group ID.
--gres=<list>
       Specifies a comma delimited list of generic consumable resources.  The format of each entry on the
       list  is  "name[:count[*cpu]]".   The  name  is that of the consumable resource.  The count is the
       number of those resources with a default value of 1.  The specified resources will be allocated to
       the  job  on  each  node  allocated unless "*cpu" is appended, in which case the resources will be
        allocated on a per cpu basis.  The available generic consumable resources are configurable by the
        system administrator.  A list of available generic consumable resources will be printed and the
       command  will  exit   if   the   option   argument   is   "help".    Examples   of   use   include
       "--gres=gpus:2*cpu,disk=40G" and "--gres=help".
-H, --hold
        Specify that the job is to be submitted in a held state (priority of zero).  A held job can later be
       released using scontrol to reset its priority (e.g. "scontrol release <job_id>").
-h, --help
       Display help information and exit.
--hint=<type>
       Bind tasks according to application hints

       compute_bound
              Select settings for compute bound applications: use all cores in each  socket,  one  thread
              per core

       memory_bound
              Select settings for memory bound applications: use only one core in each socket, one thread
              per core

       [no]multithread
              [don't] use extra threads with in-core  multi-threading  which  can  benefit  communication
              intensive applications

       help   show this help message
-I, --immediate[=<seconds>]
        Exit if resources are not available within the time period specified.  If no argument is given,
       resources must be available immediately for the request to succeed.  By  default,  --immediate  is
       off, and the command will block until resources become available.
-J, --job-name=<jobname>
       Specify a name for the job allocation. The specified name will appear along with the job id number
       when querying running jobs on the system.  The default job name  is  the  name  of  the  "command"
       specified on the command line.
--jobid=<jobid>
       Allocate resources as the specified job id.  NOTE: Only valid for user root.
-K, --kill-command[=signal]
       salloc  always  runs  a  user-specified  command once the allocation is granted.  salloc will wait
       indefinitely for that command to exit.  If you specify the --kill-command option salloc will  send
       a  signal  to your command any time that the SLURM controller tells salloc that its job allocation
       has been revoked. The job allocation can be revoked for a couple of reasons: someone used  scancel
       to  revoke  the  allocation,  or  the  allocation reached its time limit.  If you do not specify a
       signal name or number and SLURM is configured to signal the spawned command  at  job  termination,
       the default signal is SIGHUP for interactive and SIGTERM for non-interactive sessions.
-k, --no-kill
        Do not automatically terminate a job if one of the nodes it has been allocated fails.  The user
        assumes responsibility for fault tolerance should a node fail.  When there is a node
       failure, any active job steps (usually MPI jobs) on that node will almost certainly suffer a fatal
       error, but with --no-kill, the job allocation will not be revoked so the user may launch  new  job
       steps on the remaining nodes in their allocation.

       By  default SLURM terminates the entire job allocation if any node fails in its range of allocated
       nodes.
-L, --licenses=<license>
       Specification of licenses (or other resources available on all nodes of the cluster) which must be
       allocated  to this job.  License names can be followed by an asterisk and count (the default count
       is one).  Multiple license names should be comma separated (e.g.  "--licenses=foo*4,bar").
-m, --distribution=
        <block|cyclic|arbitrary|plane=<options>[:block|cyclic]>
        Specify alternate distribution methods for remote processes.  See the srun man page for a
        description of the available distribution methods.
--mail-type=<type>
       Notify  user  by  email  when  certain event types occur.  Valid type values are BEGIN, END, FAIL,
       REQUEUE, and ALL (any state change). The user to be notified is indicated with --mail-user.
--mail-user=<user>
       User to receive email notification of state changes as defined by --mail-type.  The default  value
       is the submitting user.
--mem=<MB>
       Specify  the  real  memory required per node in MegaBytes.  Default value is DefMemPerNode and the
        maximum value is MaxMemPerNode. If configured, both parameters can be seen using the scontrol
       show  config command.  This parameter would generally be used if whole nodes are allocated to jobs
       (SelectType=select/linear).   Also  see  --mem-per-cpu.   --mem  and  --mem-per-cpu  are  mutually
       exclusive.
--mem-per-cpu=<MB>
        Minimum memory required per allocated CPU in MegaBytes.  Default value is DefMemPerCPU and the
        maximum value is MaxMemPerCPU (see exception below). If configured, both parameters can be seen
        using the scontrol show config command.  Note that if the job's --mem-per-cpu value exceeds the
        configured MaxMemPerCPU, then the user's limit will be treated as a memory limit per task;
        --mem-per-cpu will be reduced to a value no larger than MaxMemPerCPU, and --cpus-per-task will be
        set so that the value of --cpus-per-task multiplied by the new --mem-per-cpu value equals the
        original --mem-per-cpu value specified by the user.  This parameter would generally be used if individual
       processors are allocated  to  jobs  (SelectType=select/cons_res).   Also  see  --mem.   --mem  and
       --mem-per-cpu are mutually exclusive.
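
        For example (values illustrative): if MaxMemPerCPU is configured as 2048 and a job requests
        --mem-per-cpu=4096 with one CPU per task, then --mem-per-cpu would be reduced to 2048 and
        --cpus-per-task set to 2, so that 2 x 2048 MB still covers the 4096 MB per task originally
        requested.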
--mincpus=<n>
       Specify a minimum number of logical cpus/processors per node.
-N, --nodes=<minnodes[-maxnodes]>
       Request  that a minimum of minnodes nodes be allocated to this job.  A maximum node count may also
       be specified with maxnodes.  If only one number is specified, this is used as both the minimum and
       maximum  node  count.   The  partition's  node limits supersede those of the job.  If a job's node
       limits are outside of the range permitted for its associated partition, the job will be left in  a
       PENDING  state.   This  permits  possible  execution  at a later time, when the partition limit is
       changed.  If a job node limit exceeds the number of nodes configured in  the  partition,  the  job
       will  be  rejected.   Note  that the environment variable SLURM_NNODES will be set to the count of
       nodes actually allocated to the job. See the ENVIRONMENT VARIABLES  section for more  information.
       If  -N  is  not  specified,  the  default  behavior  is  to  allocate  enough nodes to satisfy the
       requirements of the -n and -c options.  The job will be allocated as many nodes as possible within
       the  range specified and without delaying the initiation of the job.  The node count specification
       may include a numeric value followed by a suffix of "k" (multiplies numeric value by 1,024) or "m"
       (multiplies numeric value by 1,048,576).
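
        For example (the command name is a placeholder), the first request below asks for between two and
        four nodes, while the second uses the "k" suffix to request 1,024 nodes:
           salloc --nodes=2-4 ./my_command
           salloc --nodes=1k ./my_command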
-n, --ntasks=<number>
        salloc does not launch tasks; it requests an allocation of resources and executes some command.
       This option advises the SLURM controller that job steps run within this allocation will  launch  a
       maximum of number tasks and sufficient resources are allocated to accomplish this.  The default is
       one task per node, but note that the --cpus-per-task option will change this default.
--network=<type>
       Specify the communication protocol to be used.  This option is supported on  AIX  systems.   Since
       POE  is  used  to  launch  tasks,  this  option  is  not  normally  used or is specified using the
       SLURM_NETWORK environment variable.  The interpretation of type is system dependent.  For  systems
       with  an  IBM  Federation  switch,  the  following  comma-separated and case insensitive types are
       recognized: IP (the default is user-space), SN_ALL, SN_SINGLE, BULK_XFER and adapter names   (e.g.
       SNI0  and  SNI1).   For  more information, on IBM systems see poe documentation on the environment
        variables MP_EUIDEVICE and MP_USE_BULK_XFER.  Note that only four job steps may be active at once
       on a node with the BULK_XFER option due to limitations in the Federation switch driver.
--nice[=adjustment]
       Run  the  job  with  an  adjusted  scheduling priority within SLURM.  With no adjustment value the
       scheduling priority is decreased by 100. The adjustment range is from -10000 (highest priority) to
       10000  (lowest  priority).  Only  privileged  users  can specify a negative adjustment. NOTE: This
       option is presently ignored if SchedulerType=sched/wiki or SchedulerType=sched/wiki2.
--ntasks-per-core=<ntasks>
       Request the maximum ntasks be invoked on each core.  Meant to be used with  the  --ntasks  option.
       Related  to  --ntasks-per-node  except  at  the  core level instead of the node level.  Masks will
        automatically be generated to bind the tasks to specific cores unless --cpu_bind=none is specified.
       NOTE:     This    option    is    not    supported    unless    SelectTypeParameters=CR_Core    or
       SelectTypeParameters=CR_Core_Memory is configured.
--ntasks-per-socket=<ntasks>
       Request the maximum ntasks be invoked on each socket.  Meant to be used with the --ntasks  option.
       Related  to  --ntasks-per-node  except  at the socket level instead of the node level.  Masks will
       automatically be generated to bind  the  tasks  to  specific  sockets  unless  --cpu_bind=none  is
       specified.    NOTE:   This  option  is  not  supported  unless  SelectTypeParameters=CR_Socket  or
       SelectTypeParameters=CR_Socket_Memory is configured.
--ntasks-per-node=<ntasks>
       Request the maximum ntasks be invoked on each node.  Meant to be used  with  the  --nodes  option.
       This  is  related to --cpus-per-task=ncpus, but does not require knowledge of the actual number of
       cpus on each node.  In some cases, it is more convenient to be able to request that no more than a
       specific  number  of  tasks be invoked on each node.  Examples of this include submitting a hybrid
       MPI/OpenMP app where only one MPI "task/rank" should be assigned to each node while  allowing  the
       OpenMP  portion  to  utilize  all  of  the parallelism present in the node, or submitting a single
       setup/cleanup/monitoring job to each node of a pre-existing allocation as one step in a larger job
       script.
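
        For example (node count and application name are illustrative), a hybrid MPI/OpenMP job could
        place one MPI task on each of four nodes as follows:
           salloc --nodes=4 --ntasks-per-node=1 srun ./hybrid_app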
--no-bell
       Silence salloc's use of the terminal bell. Also see the option --bell.
--no-shell
        Immediately exit after allocating resources, without running a command. However, the SLURM job
       will still be created and will remain active and will own the allocated resources as long as it is
       active.   You  will have a SLURM job id with no associated processes or tasks. You can submit srun
       commands against this resource allocation, if you specify the --jobid= option with the job  id  of
       this SLURM job.  Or, this can be used to temporarily reserve a set of resources so that other jobs
       cannot use them for some period of time.  (Note that the  SLURM  job  is  subject  to  the  normal
       constraints  on  jobs,  including  time  limits, so that eventually the job will terminate and the
       resources will be freed, or you can terminate the job manually using the scancel command.)
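
        For example (node count and command are illustrative), resources could be reserved and then used
        by later srun commands:
           salloc --no-shell -N 2
           srun --jobid=<job_id> -N 2 ./my_command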
-O, --overcommit
       Overcommit resources.  Normally, salloc will allocate  one  task  per  processor.   By  specifying
       --overcommit  you  are explicitly allowing more than one task per processor.  However no more than
       MAX_TASKS_PER_NODE tasks are permitted to execute per node.
-p, --partition=<partition_names>
       Request a specific partition for the resource allocation.  If not specified, the default  behavior
       is  to  allow  the  slurm  controller  to select the default partition as designated by the system
        administrator. If the job can use more than one partition, specify their names in a comma separated
       list and the one offering earliest initiation will be used.
-Q, --quiet
       Suppress informational messages from salloc. Errors will still be displayed.
--qos=<qos>
       Request a quality of service for the job.  QOS values can be defined for each user/cluster/account
       association in the SLURM database.  Users will be limited to their association's  defined  set  of
        qos's when the SLURM configuration parameter, AccountingStorageEnforce, includes "qos" in its
       definition.
--reservation=<name>
       Allocate resources for the job from the named reservation.
-s, --share
        The job allocation can share nodes with other running jobs.  This is the opposite of --exclusive;
        whichever option is seen last on the command line will be used. The default shared/exclusive
        behavior depends on system configuration and the partition's Shared option takes precedence over
        the job's option.  This option may result in the allocation being granted sooner than if the --share
       option was not set and allow higher system utilization, but application  performance  will  likely
       suffer due to competition for resources within a node.
--sockets-per-node=<sockets>
       Restrict  node  selection  to nodes with at least the specified number of sockets.  See additional
       information under -B option above when task/affinity plugin is enabled.
-t, --time=<time>
       Set a limit on the total run time of the job allocation.  If the requested time limit exceeds  the
       partition's  time  limit,  the  job  will be left in a PENDING state (possibly indefinitely).  The
        default time limit is the partition's time limit.  When the time limit is reached, each task
       in each job step is sent SIGTERM followed by SIGKILL. The interval between signals is specified by
       the SLURM configuration parameter KillWait.  A time limit of zero requests that no time  limit  be
       imposed.   Acceptable  time formats include "minutes", "minutes:seconds", "hours:minutes:seconds",
       "days-hours", "days-hours:minutes" and "days-hours:minutes:seconds".
--threads-per-core=<threads>
       Restrict node selection to nodes with at least the specified number  of  threads  per  core.   See
       additional information under -B option above when task/affinity plugin is enabled.
--time-min=<time>
        Set a minimum time limit on the job allocation.  If specified, the job may have its --time limit
       lowered to a value no lower than --time-min if doing so permits the job to begin execution earlier
       than  otherwise  possible.   The  job's  time limit will not be changed after the job is allocated
       resources.  This is performed by a backfill scheduling algorithm to allocate  resources  otherwise
       reserved  for higher priority jobs.  Acceptable time formats include "minutes", "minutes:seconds",
       "hours:minutes:seconds", "days-hours", "days-hours:minutes" and "days-hours:minutes:seconds".
--tmp=<MB>
       Specify a minimum amount of temporary disk space.
-u, --usage
       Display brief help message and exit.
--uid=<user>
       Attempt to submit and/or run a job as user instead of the invoking user id.  The  invoking  user's
       credentials  will  be used to check access permissions for the target partition. User root may use
       this option to run jobs as a normal user in a RootOnly partition for  example.  If  run  as  root,
       salloc  will  drop  its permissions to the uid specified after node allocation is successful. user
       may be the user name or numerical user ID.
-V, --version
       Display version information and exit.
-v, --verbose
       Increase the verbosity of salloc's informational messages.  Multiple -v's  will  further  increase
       salloc's verbosity.  By default only errors will be displayed.
-W, --wait=<seconds>
       This option has been replaced by --immediate=<seconds>.
-w, --nodelist=<node name list>
       Request  a  specific  list  of node names.  The list may be specified as a comma-separated list of
       node names, or a range of node names (e.g. mynode[1-5,7,...]).  Duplicate node names in  the  list
       will be ignored.  The order of the node names in the list is not important; the node names will be
       sorted by SLURM.
--wait-all-nodes=<value>
       Controls when the execution of the command begins.  By default the job  will  begin  execution  as
       soon as the allocation is made.

       0    Begin execution as soon as allocation can be made.  Do not wait for all nodes to be ready for
            use (i.e. booted).

       1    Do not begin execution until all nodes are ready for use.
--wckey=<wckey>
       Specify wckey to be used with job.  If TrackWCKey=no (default) in the  slurm.conf  this  value  is
       ignored.
-x, --exclude=<node name list>
       Explicitly exclude certain nodes from the resources granted to the job.
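
        For example (node names are illustrative):
           salloc -N 4 --exclude=mynode[10-12] ./my_command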

The following options support Blue Gene systems, but may be applicable to other systems as well.
--blrts-image=<path>
        Path to blrts image for bluegene block.  BGL only.  Default from bluegene.conf if not set.
--cnload-image=<path>
        Path to compute node image for bluegene block.  BGP only.  Default from bluegene.conf if not set.
--conn-type=<type>
        Require the partition connection type to be of a certain type.  On Blue Gene the acceptable values
        of type are MESH, TORUS and NAV.  If NAV, or if not set, then SLURM will try to fit a TORUS, else
        MESH.  You should not normally set this option.  SLURM will normally allocate a TORUS if possible
        for a given geometry.  If running on a BGP system and wanting to run in HTC mode (only for 1
        midplane and below), you can use HTC_S for SMP, HTC_D for Dual, HTC_V for virtual node mode, and
        HTC_L for Linux mode.  A comma separated list of connection types may be specified, one for each
        dimension.
-g, --geometry=<XxYxZ>
       Specify  the  geometry requirements for the job. The three numbers represent the required geometry
       giving dimensions in the X, Y and Z directions. For example "--geometry=2x3x4", specifies a  block
       of nodes having 2 x 3 x 4 = 24 nodes (actually base partitions on Blue Gene).
--ioload-image=<path>
        Path to io image for bluegene block.  BGP only.  Default from bluegene.conf if not set.
--linux-image=<path>
        Path to linux image for bluegene block.  BGL only.  Default from bluegene.conf if not set.
--mloader-image=<path>
        Path to mloader image for bluegene block.  Default from bluegene.conf if not set.
-R, --no-rotate
       Disables  rotation  of  the  job's  requested  geometry  in order to fit an appropriate block.  By
       default the specified geometry can rotate in three dimensions.
--ramdisk-image=<path>
        Path to ramdisk image for bluegene block.  BGL only.  Default from bluegene.conf if not set.
--reboot
       Force the allocated nodes to reboot before starting the job.