-t, --time=<time>
Set a limit on the total run time of the job allocation. If the requested time limit exceeds the
partition's time limit, the job will be left in a PENDING state (possibly indefinitely). The
default time limit is the partition's time limit. When the time limit is reached, each task
in each job step is sent SIGTERM followed by SIGKILL. The interval between signals is specified by
the SLURM configuration parameter KillWait. A time limit of zero requests that no time limit be
imposed. Acceptable time formats include "minutes", "minutes:seconds", "hours:minutes:seconds",
"days-hours", "days-hours:minutes" and "days-hours:minutes:seconds".
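The accepted formats above all reduce to a single duration. The following sketch (a hypothetical helper, not part of Slurm) illustrates how each format maps to a total number of seconds:

```python
def parse_slurm_time(s):
    """Convert a Slurm-style time string to total seconds.
    Accepts: "minutes", "minutes:seconds", "hours:minutes:seconds",
    "days-hours", "days-hours:minutes", "days-hours:minutes:seconds".
    Hypothetical helper for illustration only."""
    days = 0
    if "-" in s:
        # Formats with a leading days component use '-' as the separator.
        day_part, rest = s.split("-", 1)
        days = int(day_part)
        parts = [int(p) for p in rest.split(":")]
        parts += [0] * (3 - len(parts))   # pad to hours:minutes:seconds
        hours, minutes, seconds = parts
    else:
        parts = [int(p) for p in s.split(":")]
        if len(parts) == 1:               # "minutes"
            hours, minutes, seconds = 0, parts[0], 0
        elif len(parts) == 2:             # "minutes:seconds"
            hours = 0
            minutes, seconds = parts
        else:                             # "hours:minutes:seconds"
            hours, minutes, seconds = parts
    return ((days * 24 + hours) * 60 + minutes) * 60 + seconds
```

For example, "1-12" (one day, twelve hours) and "36:00:00" parse to the same duration.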
|
-n, --ntasks=<number>
salloc does not launch tasks; it requests an allocation of resources and executes some command.
This option advises the SLURM controller that job steps run within this allocation will launch a
maximum of number tasks and sufficient resources are allocated to accomplish this. The default is
one task per node, but note that the --cpus-per-task option will change this default.
|
--mem-per-cpu=<MB>
Minimum memory required per allocated CPU in megabytes. Default value is DefMemPerCPU and the
maximum value is MaxMemPerCPU (see exception below). If configured, both parameters can be seen
using the scontrol show config command. Note that if the job's --mem-per-cpu value exceeds the
configured MaxMemPerCPU, then the user's limit will be treated as a memory limit per task;
--mem-per-cpu will be reduced to a value no larger than MaxMemPerCPU; and --cpus-per-task will be
set so that the value of --cpus-per-task multiplied by the new --mem-per-cpu value equals the
original --mem-per-cpu value specified by the user. This parameter would generally be used if individual
processors are allocated to jobs (SelectType=select/cons_res). Also see --mem. --mem and
--mem-per-cpu are mutually exclusive.
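The adjustment described above can be sketched as follows (a minimal illustration, not Slurm source; the function name and the rounding via ceiling division are assumptions):

```python
import math

def adjust_mem_per_cpu(requested_mb, max_mem_per_cpu_mb):
    """Sketch of the --mem-per-cpu adjustment when the request exceeds
    MaxMemPerCPU: the request is treated as a per-task memory limit,
    --cpus-per-task is raised, and --mem-per-cpu is lowered so that
    cpus_per_task * mem_per_cpu still covers the original request.
    Returns (cpus_per_task, mem_per_cpu); hypothetical helper."""
    if requested_mb <= max_mem_per_cpu_mb:
        return 1, requested_mb                      # no adjustment needed
    # Smallest CPU count that brings the per-CPU share under the cap.
    cpus_per_task = math.ceil(requested_mb / max_mem_per_cpu_mb)
    mem_per_cpu = math.ceil(requested_mb / cpus_per_task)
    return cpus_per_task, mem_per_cpu
```

For example, with MaxMemPerCPU at 2048 MB, a request of --mem-per-cpu=6000 would be satisfied as 3 CPUs per task at 2000 MB each, preserving the 6000 MB per-task limit.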
|