Obtain a SLURM job allocation (a set of nodes), execute a command, and then release the allocation when the command is finished
|
-t, --time=<time>
Set a limit on the total run time of the job allocation. If the requested time limit exceeds the
partition's time limit, the job will be left in a PENDING state (possibly indefinitely). The
default time limit is the partition's time limit. When the time limit is reached, each task
in each job step is sent SIGTERM followed by SIGKILL. The interval between signals is specified by
the SLURM configuration parameter KillWait. A time limit of zero requests that no time limit be
imposed. Acceptable time formats include "minutes", "minutes:seconds", "hours:minutes:seconds",
"days-hours", "days-hours:minutes" and "days-hours:minutes:seconds".
|
--mem=<MB>
Specify the real memory required per node in MegaBytes. Default value is DefMemPerNode and the
maximum value is MaxMemPerNode. If configured, both parameters can be seen using the scontrol
show config command. This parameter would generally be used if whole nodes are allocated to jobs
(SelectType=select/linear). Also see --mem-per-cpu. --mem and --mem-per-cpu are mutually
exclusive.
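For example, to request 4096 MB of real memory on each allocated node (the amount and the
command are illustrative):

    salloc --mem=4096 mycommand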
|
-c, --cpus-per-task=<ncpus>
Advise the SLURM controller that ensuing job steps will require ncpus processors per
task. Without this option, the controller will just try to allocate one processor per task.
For instance, consider an application that has 4 tasks, each requiring 3 processors. If our
cluster comprises quad-processor nodes and we simply ask for 12 processors, the controller
might give us only 3 nodes. However, by using the --cpus-per-task=3 option, the controller knows
that each task requires 3 processors on the same node, and the controller will grant an allocation
of 4 nodes, one for each of the 4 tasks.
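Continuing the example above, the following requests 4 tasks with 3 processors each, rather
than 12 unconstrained processors (this sketch assumes the standard --ntasks option, documented
elsewhere in this page; the command is a placeholder):

    salloc --ntasks=4 --cpus-per-task=3 mycommand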
|
-A, --account=<account>
Charge resources used by this job to the specified account. The account is an arbitrary string. The
account name may be changed after job submission using the scontrol command.
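For example (the account name and command are hypothetical placeholders):

    salloc --account=myproject mycommand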
|