-t, --time=<time>
Set a limit on the total run time of the job allocation. If the requested time limit exceeds the
partition's time limit, the job will be left in a PENDING state (possibly indefinitely). The
default time limit is the partition's time limit. When the time limit is reached, each task
in each job step is sent SIGTERM followed by SIGKILL. The interval between the two signals is specified by
the SLURM configuration parameter KillWait. A time limit of zero requests that no time limit be
imposed. Acceptable time formats include "minutes", "minutes:seconds", "hours:minutes:seconds",
"days-hours", "days-hours:minutes" and "days-hours:minutes:seconds".
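To make the accepted formats concrete, here is a small illustrative parser (not part of SLURM itself; an assumption-free sketch based only on the formats listed above) that converts each format to a total number of seconds:

```python
def parse_time_limit(spec):
    """Convert a SLURM time-limit string to total seconds.

    Accepts the formats listed above: "minutes", "minutes:seconds",
    "hours:minutes:seconds", "days-hours", "days-hours:minutes" and
    "days-hours:minutes:seconds".
    """
    days = 0
    if "-" in spec:
        # A day component shifts the colon-separated fields to
        # hours[:minutes[:seconds]].
        day_part, _, rest = spec.partition("-")
        days = int(day_part)
        fields = [int(f) for f in rest.split(":")]
        fields += [0] * (3 - len(fields))      # pad missing minutes/seconds
        hours, minutes, seconds = fields
    else:
        fields = [int(f) for f in spec.split(":")]
        if len(fields) == 1:        # "minutes"
            hours, minutes, seconds = 0, fields[0], 0
        elif len(fields) == 2:      # "minutes:seconds"
            hours, minutes, seconds = 0, fields[0], fields[1]
        else:                       # "hours:minutes:seconds"
            hours, minutes, seconds = fields
    return ((days * 24 + hours) * 60 + minutes) * 60 + seconds
```

For example, "10:30" is ten minutes thirty seconds (630 s), while "1-12:30" is one day, twelve hours and thirty minutes.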
-c, --cpus-per-task=<ncpus>
Advise the SLURM controller that ensuing job steps will require ncpus number of processors per
task. Without this option, the controller will just try to allocate one processor per task.
For instance, consider an application that has 4 tasks, each requiring 3 processors. If our
cluster is composed of quad-processor nodes and we simply ask for 12 processors, the controller
might give us only 3 nodes. However, by using the --cpus-per-task=3 option, the controller knows
that each task requires 3 processors on the same node, and the controller will grant an allocation
of 4 nodes, one for each of the 4 tasks.
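The arithmetic behind that example can be sketched as follows. This is an illustrative calculation only, not SLURM's actual scheduling logic; `nodes_needed` is a hypothetical helper that assumes homogeneous nodes and that a task's processors may not span nodes:

```python
import math

def nodes_needed(ntasks, cpus_per_task, cpus_per_node):
    """Estimate whole nodes required when each task's processors
    must all reside on a single node (the --cpus-per-task case)."""
    if cpus_per_task > cpus_per_node:
        raise ValueError("a single task cannot span multiple nodes")
    # How many whole tasks fit on one node?
    tasks_per_node = cpus_per_node // cpus_per_task
    return math.ceil(ntasks / tasks_per_node)

# 4 tasks x 3 processors on quad-processor nodes: asking for a raw
# total of 12 processors suggests ceil(12 / 4) = 3 nodes, but a
# 3-processor task cannot be split, so only one task fits per node
# and 4 nodes are required.
```

Here `nodes_needed(4, 3, 4)` evaluates to 4, matching the allocation described above, whereas a naive total-processor count would suggest 3 nodes.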