sbatch(1) - Submit a batch script to SLURM
-A, --account=<account>
       Charge resources used by this job to specified account.  The account is an arbitrary  string.  The
       account name may be changed after job submission using the scontrol command.
--acctg-freq=<seconds>
       Define   the   job   accounting   sampling   interval.    This   can   be  used  to  override  the
       JobAcctGatherFrequency parameter in SLURM's configuration  file,  slurm.conf.   A  value  of  zero
       disables  the  periodic  job  sampling and provides accounting information only on job termination
       (reducing SLURM interference with the job).
-B --extra-node-info=<sockets[:cores[:threads]]>
       Request a specific allocation of resources with details as to the number and type of computational
       resources within a cluster: number of sockets (or physical processors) per node, cores per socket,
       and threads per core.  The total amount of resources being requested is the product of all of  the
       terms.   Each  value  specified  is  considered  a  minimum.   An  asterisk  (*)  can be used as a
       placeholder indicating that all available resources of that type are  to  be  utilized.   As  with
       nodes, the individual levels can also be specified in separate options if desired:
           --sockets-per-node=<sockets>
           --cores-per-socket=<cores>
           --threads-per-core=<threads>
       If  task/affinity  plugin  is  enabled,  then  specifying an allocation in this manner also sets a
       default --cpu_bind option of threads if the -B option  specifies  a  thread  count,  otherwise  an
       option  of  cores  if a core count is specified, otherwise an option of sockets.  If SelectType is
       configured to select/cons_res, it must have a parameter of CR_Core, CR_Core_Memory, CR_Socket,  or
       CR_Socket_Memory  for this option to be honored.  This option is not supported on BlueGene systems
       (select/bluegene plugin is configured).  If not specified, the  scontrol  show  job  will  display
       'ReqS:C:T=*:*:*'.
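
        For example, either of the following (the script name is illustrative) requests at least two
        sockets per node, four cores per socket, and one thread per core:
           sbatch -B 2:4:1 my_job.sh
           sbatch --sockets-per-node=2 --cores-per-socket=4 --threads-per-core=1 my_job.sh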
--begin=<time>
       Submit  the batch script to the SLURM controller immediately, like normal, but tell the controller
       to defer the allocation of the job until the specified time.

       Time may be of the form HH:MM:SS to run a job at a specific time of day  (seconds  are  optional).
       (If  that time is already past, the next day is assumed.)  You may also specify midnight, noon, or
       teatime (4pm) and you can have a time-of-day suffixed with AM or PM for running in the morning  or
       the  evening.   You  can  also  say what day the job will be run, by specifying a date of the form
        MMDDYY, MM/DD/YY, or YYYY-MM-DD.  Combine date and time using the following format
       YYYY-MM-DD[THH:MM[:SS]]. You can also give times like now + count time-units, where the time-units
       can be seconds (default), minutes, hours, days, or weeks and you can tell SLURM  to  run  the  job
       today with the keyword today and to run the job tomorrow with the keyword tomorrow.  The value may
       be changed after job submission using the scontrol command.  For example:
          --begin=16:00
          --begin=now+1hour
          --begin=now+60           (seconds by default)
          --begin=2010-01-20T12:34:00

       Notes on date/time specifications:
        - Although the 'seconds' field of the HH:MM:SS time specification is allowed by  the  code,  note
       that  the  poll time of the SLURM scheduler is not precise enough to guarantee dispatch of the job
       on the exact second.  The job will be eligible to start on the next poll following  the  specified
       time.  The  exact  poll interval depends on the SLURM scheduler (e.g., 60 seconds with the default
       sched/builtin).
        - If no time (HH:MM:SS) is specified, the default is (00:00:00).
        - If a date is specified without a year (e.g., MM/DD) then the current year  is  assumed,  unless
       the  combination  of  MM/DD  and HH:MM:SS has already passed for that year, in which case the next
       year is used.
--checkpoint=<time>
       Specifies the interval between creating checkpoints of the job step.  By  default,  the  job  step
       will  have  no checkpoints created.  Acceptable time formats include "minutes", "minutes:seconds",
       "hours:minutes:seconds", "days-hours", "days-hours:minutes" and "days-hours:minutes:seconds".
--checkpoint-dir=<directory>
       Specifies the directory into which the job or job step's checkpoint should be written (used by the
        checkpoint/blcr  and  checkpoint/xlch  plugins  only).   The default value is the current working
       directory.   Checkpoint   files   will   be   of   the   form   "<job_id>.ckpt"   for   jobs   and
       "<job_id>.<step_id>.ckpt" for job steps.
--comment=<string>
        An arbitrary comment.  Enclose the string in double quotes if it contains spaces or special characters.
-C, --constraint=<list>
       Specify  a list of constraints.  The constraints are features that have been assigned to the nodes
       by the slurm administrator.  The list of constraints may include multiple  features  separated  by
       ampersand  (AND) and/or vertical bar (OR) operators.  For example: --constraint="opteron&video" or
       --constraint="fast|faster".  In the first example, only nodes having both  the  feature  "opteron"
       AND  the  feature  "video"  will be used.  There is no mechanism to specify that you want one node
       with feature "opteron" and another node with feature "video" in case no node  has  both  features.
       If  only  one of a set of possible options should be used for all allocated nodes, then use the OR
       operator    and    enclose    the    options    within    square    brackets.     For     example:
       "--constraint=[rack1|rack2|rack3|rack4]" might be used to specify that all nodes must be allocated
       on a single rack of the cluster, but any of those four racks can be  used.   A  request  can  also
       specify  the number of nodes needed with some feature by appending an asterisk and count after the
       feature name.  For example "sbatch --nodes=16 --constraint=graphics*4 ..."  indicates that the job
       requires  16  nodes  and  that  at  least  four  of  those nodes must have the feature "graphics."
       Constraints with node counts may only be combined with  AND  operators.   If  no  nodes  have  the
       requested features, then the job will be rejected by the slurm job manager.
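
        The same constraints can also be given as directives inside the batch script (script contents
        are illustrative):
           #!/bin/sh
           #SBATCH --nodes=16
           #SBATCH --constraint=graphics*4
           srun ./my_app        # hypothetical application; at least 4 of the 16 nodes have "graphics"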
--contiguous
       If  set,  then the allocated nodes must form a contiguous set.  Not honored with the topology/tree
       or topology/3d_torus plugins, both of which can modify the node ordering.
--cores-per-socket=<cores>
       Restrict node selection to nodes with at least the specified number  of  cores  per  socket.   See
       additional information under -B option above when task/affinity plugin is enabled.
-c, --cpus-per-task=<ncpus>
       Advise the SLURM controller that ensuing job steps will require ncpus  number  of  processors  per
       task.  Without this option, the controller will just try to allocate one processor per task.

       For  instance,  consider  an  application  that  has 4 tasks, each requiring 3 processors.  If our
        cluster is comprised of quad-processor nodes and we simply ask for 12 processors, the controller
        might give us only 3 nodes.  However, by using the --cpus-per-task=3 option, the controller knows
       that each task requires 3 processors on the same node, and the controller will grant an allocation
       of 4 nodes, one for each of the 4 tasks.
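
        A batch script for the example above might look like this (the application name is illustrative):
           #!/bin/sh
           #SBATCH --ntasks=4
           #SBATCH --cpus-per-task=3
           srun ./my_app        # each of the 4 tasks is given 3 processors on the same node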
-d, --dependency=<dependency_list>
        Defer the start of this job until the specified dependencies have been satisfied.
       <dependency_list> is of the form  <type:job_id[:job_id][,type:job_id[:job_id]]>.   Many  jobs  can
       share  the  same  dependency and these jobs may even belong to different  users. The  value may be
       changed after job submission using the scontrol command.
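
        For example, assuming a hypothetical two-step workflow in which the first submission was
        assigned job id 1234 (illustrative), the second job will not start until the first completes
        successfully (the afterok dependency type):
           sbatch first_step.sh
           sbatch --dependency=afterok:1234 second_step.sh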
-D, --workdir=<directory>
       Set the working directory of the batch script to directory before it is executed.
-e, --error=<filename pattern>
       Instruct SLURM to connect the batch script's standard error directly to the file name specified in
       the "filename pattern".  By default both standard output and standard error are directed to a file
       of  the  name  "slurm-%j.out", where the "%j" is replaced with the job allocation number.  See the
       --input option for filename specification options.
--exclusive
        The job allocation cannot share nodes with other running jobs.  This is the opposite of --share;
        whichever option is seen last on the command line will be used.  The default shared/exclusive
       behavior depends on system configuration and the partition's Shared option takes  precedence  over
       the job's option.
--export=<environment variables | ALL | NONE>
       Identify  which  environment  variables  are  propagated  to  the batch job.  Multiple environment
       variable names should be  comma  separated.   Environment  variable  names  may  be  specified  to
       propagate the current value of those variables (e.g. "--export=EDITOR") or specific values for the
        variables may be exported (e.g. "--export=EDITOR=/bin/vi").  This option is particularly important
       for jobs that are submitted on one cluster and execute on a different cluster (e.g. with different
        paths). By default all environment variables are propagated. If the argument is NONE  or  specific
       environment  variable  names,  then the --get-user-env option will implicitly be set to load other
       environment variables based upon the user's configuration on the cluster which executes the job.
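
        For example (the script name is illustrative):
           sbatch --export=EDITOR my_job.sh            # propagate the current value of EDITOR
           sbatch --export=EDITOR=/bin/vi my_job.sh    # propagate EDITOR with an explicit value
           sbatch --export=NONE my_job.sh              # propagate no environment variables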
-F, --nodefile=<node file>
       Much like --nodelist, but the list is contained in a file of name node file.  The  node  names  of
       the  list  may  also  span multiple lines in the file.    Duplicate node names in the file will be
       ignored.  The order of the node names in the list is not important; the node names will be  sorted
       by SLURM.
--gid=<group>
       If  sbatch  is run as root, and the --gid option is used, submit the job with group's group access
       permissions.  group may be the group name or the numerical group ID.
--gres=<list>
       Specifies a comma delimited list of generic consumable resources.  The format of each entry on the
       list  is  "name[:count[*cpu]]".   The  name  is that of the consumable resource.  The count is the
       number of those resources with a default value of 1.  The specified resources will be allocated to
       the  job  on  each  node  allocated unless "*cpu" is appended, in which case the resources will be
        allocated on a per cpu basis.  The set of available generic consumable resources is configurable by the
       system  administrator.   A  list of available generic consumable resources will be printed and the
       command  will  exit   if   the   option   argument   is   "help".    Examples   of   use   include
       "--gres=gpus:2*cpu,disk=40G" and "--gres=help".
-H, --hold
        Specify that the job is to be submitted in a held state (priority of zero).  A held job can later be
       released using scontrol to reset its priority (e.g. "scontrol release <job_id>").
-h, --help
       Display help information and exit.
--hint=<type>
        Bind tasks according to application hints.

       compute_bound
              Select settings for compute bound applications: use all cores in each  socket,  one  thread
              per core

       memory_bound
              Select settings for memory bound applications: use only one core in each socket, one thread
              per core

       [no]multithread
              [don't] use extra threads with in-core  multi-threading  which  can  benefit  communication
              intensive applications

       help   show this help message
-I, --immediate
       The  batch script will only be submitted to the controller if the resources necessary to grant its
       job allocation are immediately available.  If the job allocation will have to wait in a  queue  of
       pending jobs, the batch script will not be submitted.
-i, --input=<filename pattern>
       Instruct SLURM to connect the batch script's standard input directly to the file name specified in
       the "filename pattern".

       By default, "/dev/null" is open on the batch script's standard input and both standard output  and
       standard  error are directed to a file of the name "slurm-%j.out", where the "%j" is replaced with
       the job allocation number, as described below.

       The filename pattern may contain one or more replacement symbols, which are  a  percent  sign  "%"
       followed by a letter (e.g. %j).

       Supported replacement symbols are:
          %j     Job allocation number.
          %N     Node  name.   Only  one file is created, so %N will be replaced by the name of the first
                 node in the job, which is the one that runs the script.
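
        For example (the file names are illustrative), to name the output and error files after the job
        allocation number:
           sbatch --output=result-%j.out --error=result-%j.err my_job.sh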
-J, --job-name=<jobname>
       Specify a name for the job allocation. The specified name will appear along with the job id number
       when  querying  running  jobs  on the system. The default is the name of the batch script, or just
       "sbatch" if the script is read on sbatch's standard input.
--jobid=<jobid>
       Allocate resources as the specified job id.  NOTE: Only valid for user root.
-k, --no-kill
        Do not automatically terminate a job if one of the nodes it has been allocated fails.  The user
       will  assume  the  responsibilities  for fault-tolerance should a node fail.  When there is a node
       failure, any active job steps (usually MPI jobs) on that node will almost certainly suffer a fatal
       error,  but  with --no-kill, the job allocation will not be revoked so the user may launch new job
       steps on the remaining nodes in their allocation.

       By default SLURM terminates the entire job allocation if any node fails in its range of  allocated
       nodes.
-L, --licenses=<license>
       Specification of licenses (or other resources available on all nodes of the cluster) which must be
       allocated to this job.  License names can be followed by an asterisk and count (the default  count
       is one).  Multiple license names should be comma separated (e.g.  "--licenses=foo*4,bar").
-M, --clusters=<string>
       Clusters  to  issue  commands to.  Multiple cluster names may be comma separated.  The job will be
       submitted to the one cluster providing the earliest expected  job  initiation  time.  The  default
        value is the current cluster.  A value of 'all' will query all clusters.  Note the
       --export option to control environment variables exported between clusters.
-m, --distribution=
       <block|cyclic|arbitrary|plane=<options>[:block|cyclic]>

       Specify  alternate  distribution  methods  for  remote  processes.   In  sbatch,  this  only  sets
       environment  variables  that  will  be used by subsequent srun requests.  This option controls the
       assignment of tasks to the nodes on which resources have been allocated, and the  distribution  of
       those  resources  to  tasks for binding (task affinity). The first distribution method (before the
       ":") controls the distribution of resources across nodes. The optional second distribution  method
       (after  the  ":")  controls the distribution of resources across sockets within a node.  Note that
       with select/cons_res, the number of cpus allocated on each socket and node may be different. Refer
       to  http://www.schedmd.com/slurmdocs/mc_support.html  for more information on resource allocation,
       assignment of tasks to nodes, and binding of tasks to CPUs.

       First distribution method:
       block  The block distribution method will distribute tasks to a node such that  consecutive  tasks
              share  a  node.  For  example,  consider an allocation of three nodes each with two cpus. A
              four-task block distribution request will distribute those tasks to the  nodes  with  tasks
              one  and  two  on the first node, task three on the second node, and task four on the third
              node.  Block distribution is the default behavior if the number of tasks exceeds the number
              of allocated nodes.
       cyclic The  cyclic distribution method will distribute tasks to a node such that consecutive tasks
              are distributed over consecutive nodes (in a round-robin fashion). For example, consider an
              allocation  of three nodes each with two cpus. A four-task cyclic distribution request will
              distribute those tasks to the nodes with tasks one and four on the first node, task two  on
              the  second  node,  and  task  three  on  the  third  node.   Note  that when SelectType is
              select/cons_res, the same  number  of  CPUs  may  not  be  allocated  on  each  node.  Task
              distribution will be round-robin among all the nodes with CPUs yet to be assigned to tasks.
              Cyclic distribution is the default behavior if the number of tasks is no  larger  than  the
              number of allocated nodes.
       plane  The  tasks  are  distributed  in  blocks of a specified size.  The options include a number
              representing the size of the task block.  This is followed by an optional specification  of
              the  task distribution scheme within a block of tasks and between the blocks of tasks.  For
              more details (including examples and diagrams), please see
              http://www.schedmd.com/slurmdocs/mc_support.html
              and
              http://www.schedmd.com/slurmdocs/dist_plane.html
       arbitrary
               The arbitrary method of distribution will allocate processes in order as listed in the file
               designated by the environment variable SLURM_HOSTFILE.  If this variable is set, it will
               override any other method specified.  If it is not set, the method will default to block.
               The hostfile must contain at a minimum the number of hosts requested, with host names either
               one per line or comma separated.  If specifying a task count (-n, --ntasks=<number>), your tasks will be
              laid out on the nodes in the order of the file.

       Second distribution method:
       block  The  block distribution method will distribute tasks to sockets such that consecutive tasks
              share a socket.
       cyclic The cyclic distribution method will distribute tasks to sockets such that consecutive tasks
              are distributed over consecutive sockets (in a round-robin fashion).
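
        For example, a batch script matching the cyclic example above (the application name is
        illustrative):
           #!/bin/sh
           #SBATCH --nodes=3
           #SBATCH --ntasks=4
           #SBATCH --distribution=cyclic
           srun ./my_app    # tasks 1 and 4 on the first node, task 2 on the second, task 3 on the third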
--mail-type=<type>
       Notify  user  by  email  when  certain event types occur.  Valid type values are BEGIN, END, FAIL,
       REQUEUE, and ALL (any state change). The user to be notified is indicated with --mail-user.
--mail-user=<user>
       User to receive email notification of state changes as defined by --mail-type.  The default  value
       is the submitting user.
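
        For example (the address and script name are illustrative):
           sbatch --mail-type=END --mail-user=user@example.com my_job.sh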
--mem=<MB>
       Specify  the  real  memory required per node in MegaBytes.  Default value is DefMemPerNode and the
        maximum value is MaxMemPerNode. If configured, both of these parameters can be seen using the scontrol
       show  config command.  This parameter would generally be used if whole nodes are allocated to jobs
       (SelectType=select/linear).   Also  see  --mem-per-cpu.   --mem  and  --mem-per-cpu  are  mutually
       exclusive.
--mem-per-cpu=<MB>
        Minimum memory required per allocated CPU in MegaBytes.  Default value is DefMemPerCPU and the
        maximum value is MaxMemPerCPU (see exception below). If configured, both of these parameters can be seen
       using  the  scontrol  show config command.  Note that if the job's --mem-per-cpu value exceeds the
       configured MaxMemPerCPU, then the user's limit will  be  treated  as  a  memory  limit  per  task;
       --mem-per-cpu  will be reduced to a value no larger than MaxMemPerCPU; --cpus-per-task will be set
        and the value of --cpus-per-task multiplied by the new --mem-per-cpu value will equal the original
       --mem-per-cpu  value  specified by the user.  This parameter would generally be used if individual
       processors are allocated  to  jobs  (SelectType=select/cons_res).   Also  see  --mem.   --mem  and
       --mem-per-cpu are mutually exclusive.
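
        For example (the values and script name are illustrative):
           sbatch --mem=4096 my_job.sh                       # 4096 MB on each allocated node
           sbatch --ntasks=8 --mem-per-cpu=1024 my_job.sh    # 1024 MB for each allocated CPU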
--mincpus=<n>
       Specify a minimum number of logical cpus/processors per node.
-N, --nodes=<minnodes[-maxnodes]>
       Request  that a minimum of minnodes nodes be allocated to this job.  A maximum node count may also
       be specified with maxnodes.  If only one number is specified, this is used as both the minimum and
       maximum  node  count.   The  partition's  node limits supersede those of the job.  If a job's node
       limits are outside of the range permitted for its associated partition, the job will be left in  a
       PENDING  state.   This  permits  possible  execution  at a later time, when the partition limit is
       changed.  If a job node limit exceeds the number of nodes configured in  the  partition,  the  job
       will  be  rejected.   Note  that the environment variable SLURM_NNODES will be set to the count of
       nodes actually allocated to the job. See the ENVIRONMENT VARIABLES  section for more  information.
       If  -N  is  not  specified,  the  default  behavior  is  to  allocate  enough nodes to satisfy the
       requirements of the -n and -c options.  The job will be allocated as many nodes as possible within
       the  range specified and without delaying the initiation of the job.  The node count specification
       may include a numeric value followed by a suffix of "k" (multiplies numeric value by 1,024) or "m"
       (multiplies numeric value by 1,048,576).
-n, --ntasks=<number>
       sbatch  does  not launch tasks, it requests an allocation of resources and submits a batch script.
       This option advises the SLURM controller that job steps run within the allocation  will  launch  a
       maximum  of  number  tasks  and  to provide for sufficient resources.  The default is one task per
       node, but note that the --cpus-per-task option will change this default.
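
        For example (the values and script contents are illustrative), the following requests between
        two and four nodes and resources for up to 16 tasks:
           #!/bin/sh
           #SBATCH --nodes=2-4
           #SBATCH --ntasks=16
           srun ./my_app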
--network=<type>
       Specify the communication protocol to be used.  This option is supported on  AIX  systems.   Since
       POE  is  used  to  launch  tasks,  this  option  is  not  normally  used or is specified using the
       SLURM_NETWORK environment variable.  The interpretation of type is system dependent.  For  systems
       with  an  IBM  Federation  switch,  the  following  comma-separated and case insensitive types are
       recognized: IP (the default is user-space), SN_ALL, SN_SINGLE, BULK_XFER and adapter names   (e.g.
       SNI0  and  SNI1).   For  more information, on IBM systems see poe documentation on the environment
        variables MP_EUIDEVICE and MP_USE_BULK_XFER.  Note that only four job steps may be active at once
       on a node with the BULK_XFER option due to limitations in the Federation switch driver.
--nice[=adjustment]
       Run  the  job  with  an  adjusted  scheduling priority within SLURM.  With no adjustment value the
       scheduling priority is decreased by 100. The adjustment range is from -10000 (highest priority) to
       10000  (lowest  priority).  Only  privileged  users  can specify a negative adjustment. NOTE: This
       option is presently ignored if SchedulerType=sched/wiki or SchedulerType=sched/wiki2.
--no-requeue
       Specifies that the batch job should not be requeued after node failure.  Setting this option  will
       prevent  system  administrators from being able to restart the job (for example, after a scheduled
       downtime).  When a job is requeued, the batch script is initiated from its  beginning.   Also  see
       the --requeue option.  The JobRequeue configuration parameter controls the default behavior on the
       cluster.
--ntasks-per-core=<ntasks>
       Request the maximum ntasks be invoked on each core.  Meant to be used with  the  --ntasks  option.
       Related  to  --ntasks-per-node  except  at  the  core level instead of the node level.  Masks will
        automatically be generated to bind the tasks to specific cores unless --cpu_bind=none is specified.
       NOTE:     This    option    is    not    supported    unless    SelectTypeParameters=CR_Core    or
       SelectTypeParameters=CR_Core_Memory is configured.
--ntasks-per-socket=<ntasks>
       Request the maximum ntasks be invoked on each socket.  Meant to be used with the --ntasks  option.
       Related  to  --ntasks-per-node  except  at the socket level instead of the node level.  Masks will
       automatically be generated to bind  the  tasks  to  specific  sockets  unless  --cpu_bind=none  is
       specified.    NOTE:   This  option  is  not  supported  unless  SelectTypeParameters=CR_Socket  or
       SelectTypeParameters=CR_Socket_Memory is configured.
--ntasks-per-node=<ntasks>
       Request the maximum ntasks be invoked on each node.  Meant to be used  with  the  --nodes  option.
       This  is  related to --cpus-per-task=ncpus, but does not require knowledge of the actual number of
       cpus on each node.  In some cases, it is more convenient to be able to request that no more than a
       specific  number  of  tasks be invoked on each node.  Examples of this include submitting a hybrid
       MPI/OpenMP app where only one MPI "task/rank" should be assigned to each node while  allowing  the
       OpenMP  portion  to  utilize  all  of  the parallelism present in the node, or submitting a single
       setup/cleanup/monitoring job to each node of a pre-existing allocation as one step in a larger job
       script.
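
        For example, a hybrid MPI/OpenMP job as described above might be submitted with a script like
        the following (the node count, thread count, and application name are illustrative and assume
        nodes with at least 8 cores):
           #!/bin/sh
           #SBATCH --nodes=4
           #SBATCH --ntasks-per-node=1        # one MPI task (rank) per node
           #SBATCH --cpus-per-task=8          # CPUs for the OpenMP threads of each task
           export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
           srun ./hybrid_app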
-O, --overcommit
       Overcommit  resources.   Normally,  sbatch  will  allocate  one task per processor.  By specifying
       --overcommit you are explicitly allowing more than one task per processor.  However no  more  than
       MAX_TASKS_PER_NODE tasks are permitted to execute per node.
-o, --output=<filename pattern>
       Instruct  SLURM  to connect the batch script's standard output directly to the file name specified
       in the "filename pattern".  By default both standard output and standard error are directed  to  a
       file  of  the name "slurm-%j.out", where the "%j" is replaced with the job allocation number.  See
       the --input option for filename specification options.
--open-mode=append|truncate
       Open the output and error files using append or truncate mode as specified.  The default value  is
       specified by the system configuration parameter JobFileAppend.
-p, --partition=<partition_names>
       Request  a specific partition for the resource allocation.  If not specified, the default behavior
       is to allow the slurm controller to select the default  partition  as  designated  by  the  system
        administrator. If the job can use more than one partition, specify their names in a comma separated
       list and the one offering earliest initiation will be used.
--propagate[=rlimits]
       Allows users to specify which of the modifiable (soft) resource limits to propagate to the compute
       nodes  and  apply  to  their  jobs.  If rlimits is not specified, then all resource limits will be
       propagated.  The following rlimit names are supported by Slurm (although some options may  not  be
       supported on some systems):
       ALL       All limits listed below
       AS        The maximum address space for a process
       CORE      The maximum size of core file
       CPU       The maximum amount of CPU time
       DATA      The maximum size of a process's data segment
       FSIZE     The  maximum  size  of  files created. Note that if the user sets FSIZE to less than the
                 current size of the slurmd.log, job launches will fail with a 'File size limit exceeded'
                 error.
       MEMLOCK   The maximum size that may be locked into memory
       NOFILE    The maximum number of open files
       NPROC     The maximum number of processes available
       RSS       The maximum resident set size
       STACK     The maximum stack size
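
        For example (the script name is illustrative):
           sbatch --propagate=MEMLOCK,NOFILE my_job.sh    # propagate only the memory-lock and open-file limits
           sbatch --propagate=ALL my_job.sh               # propagate all of the limits listed above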
-Q, --quiet
       Suppress informational messages from sbatch. Errors will still be displayed.
--qos=<qos>
       Request a quality of service for the job.  QOS values can be defined for each user/cluster/account
       association in the SLURM database.  Users will be limited to their association's  defined  set  of
        qos's when the SLURM configuration parameter, AccountingStorageEnforce, includes "qos" in its
       definition.
--requeue
       Specifies that the batch job should be requeued after node failure.  When a job is  requeued,  the
       batch  script  is initiated from its beginning.  Also see the --no-requeue option.  The JobRequeue
       configuration parameter controls the default behavior on the cluster.
--reservation=<name>
       Allocate resources for the job from the named reservation.
-s, --share
        The job allocation can share nodes with other running jobs.  This is the opposite of --exclusive;
        whichever option is seen last on the command line will be used.  The default shared/exclusive
       behavior depends on system configuration and the partition's Shared option takes  precedence  over
        the job's option.  This option may result in the allocation being granted sooner than if the --share
       option was not set and allow higher system utilization, but application  performance  will  likely
       suffer due to competition for resources within a node.
--sockets-per-node=<sockets>
       Restrict  node  selection  to nodes with at least the specified number of sockets.  See additional
       information under -B option above when task/affinity plugin is enabled.
-t, --time=<time>
       Set a limit on the total run time of the job allocation.  If the requested time limit exceeds  the
       partition's  time  limit,  the  job  will be left in a PENDING state (possibly indefinitely).  The
       default time limit is the partition's time limit.  When the time limit is reached,  each  task  in
       each  job  step is sent SIGTERM followed by SIGKILL.  The interval between signals is specified by
       the SLURM configuration parameter KillWait.  A time limit of zero requests that no time  limit  be
       imposed.   Acceptable  time formats include "minutes", "minutes:seconds", "hours:minutes:seconds",
       "days-hours", "days-hours:minutes" and "days-hours:minutes:seconds".
--tasks-per-node=<n>
       Specify the number of tasks to be launched per node.  Equivalent to --ntasks-per-node.
--threads-per-core=<threads>
       Restrict node selection to nodes with at least the specified number  of  threads  per  core.   See
       additional information under -B option above when task/affinity plugin is enabled.
--time-min=<time>
        Set a minimum time limit on the job allocation.  If specified, the job may have its --time limit
       lowered to a value no lower than --time-min if doing so permits the job to begin execution earlier
       than  otherwise  possible.   The  job's  time limit will not be changed after the job is allocated
       resources.  This is performed by a backfill scheduling algorithm to allocate  resources  otherwise
       reserved  for higher priority jobs.  Acceptable time formats include "minutes", "minutes:seconds",
       "hours:minutes:seconds", "days-hours", "days-hours:minutes" and "days-hours:minutes:seconds".
--tmp=<MB>
       Specify a minimum amount of temporary disk space.
-u, --usage
       Display brief help message and exit.
--uid=<user>
       Attempt to submit and/or run a job as user instead of the invoking user id.  The  invoking  user's
       credentials  will  be used to check access permissions for the target partition. User root may use
       this option to run jobs as a normal user in a RootOnly partition for  example.  If  run  as  root,
       sbatch  will  drop  its permissions to the uid specified after node allocation is successful. user
       may be the user name or numerical user ID.
-V, --version
       Display version information and exit.
-v, --verbose
       Increase the verbosity of sbatch's informational messages.  Multiple -v's  will  further  increase
       sbatch's verbosity.  By default only errors will be displayed.
-w, --nodelist=<node name list>
       Request  a  specific  list  of node names.  The list may be specified as a comma-separated list of
       node names, or a range of node names (e.g. mynode[1-5,7,...]).  Duplicate node names in  the  list
       will be ignored.  The order of the node names in the list is not important; the node names will be
       sorted by SLURM.
--wait-all-nodes=<value>
       Controls when the execution of the command begins.  By default the job  will  begin  execution  as
       soon as the allocation is made.
       0    Begin execution as soon as allocation can be made.  Do not wait for all nodes to be ready for
            use (i.e. booted).
       1    Do not begin execution until all nodes are ready for use.
--wckey=<wckey>
       Specify wckey to be used with job.  If TrackWCKey=no (default) in the  slurm.conf  this  value  is
       ignored.
--wrap=<command string>
       Sbatch  will  wrap  the  specified  command  string in a simple "sh" shell script, and submit that
       script to the slurm controller.  When --wrap is used, a script  name  and  arguments  may  not  be
       specified on the command line; instead the sbatch-generated wrapper script is used.
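
        For example (the command string is illustrative):
           sbatch -N2 --wrap="srun hostname"    # wraps "srun hostname" in a generated shell script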
-x, --exclude=<node name list>
       Explicitly exclude certain nodes from the resources granted to the job.

The following options support Blue Gene systems, but may be applicable to other systems as well.
--blrts-image=<path>
        Path to Blue Gene/L Run Time Supervisor, or blrts, image for bluegene block.  BGL only.  Default
        from bluegene.conf if not set.
--cnload-image=<path>
        Path to compute node image for bluegene block.  BGP only.  Default from bluegene.conf if not set.
--conn-type=<type>
        Require the partition connection type to be of a certain type.  On Blue Gene the acceptable values
        of type are MESH, TORUS and NAV.  If NAV, or if not set, then SLURM will try to fit a TORUS, else
        MESH.  You should not normally set this option.  SLURM will normally allocate a TORUS if possible
        for a given geometry.  If running on a BGP system and wanting to run in HTC mode (only for 1
        midplane and below), you can use HTC_S for SMP, HTC_D for Dual, HTC_V for virtual node mode, and
        HTC_L for Linux mode.  A comma separated list of connection types may be specified, one for each
       dimension.
-g, --geometry=<XxYxZ>
       Specify the geometry requirements for the job. The three numbers represent the  required  geometry
       giving  dimensions in the X, Y and Z directions. For example "--geometry=2x3x4", specifies a block
       of nodes having 2 x 3 x 4 = 24 nodes (actually base partitions on Blue Gene).
--ioload-image=<path>
        Path to io image for bluegene block.  BGP only.  Default from bluegene.conf if not set.
--linux-image=<path>
        Path to linux image for bluegene block.  BGL only.  Default from bluegene.conf if not set.
--mloader-image=<path>
        Path to mloader image for bluegene block.  Default from bluegene.conf if not set.
-R, --no-rotate
       Disables rotation of the job's requested geometry in  order  to  fit  an  appropriate  block.   By
       default the specified geometry can rotate in three dimensions.
--ramdisk-image=<path>
        Path to ramdisk image for bluegene block.  BGL only.  Default from bluegene.conf if not set.
--reboot
       Force the allocated nodes to reboot before starting the job.