srun --ntasks=$SLURM_NNODES --ntasks-per-node=1 unzip archive.zip -d $SLURM_TMPDIR
(Note: the trailing -d here is unzip's extract-to-directory option, not srun's -d/--dependency, because options placed after the command name are passed to that command.)
Run parallel jobs
-n, --ntasks=<number>
       Specify the number of tasks to run. Request that srun allocate resources for ntasks tasks. The
       default is one task per node, but note that the --cpus-per-task option will change this default.
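As a minimal sketch (assuming an active Slurm allocation; hostname stands in for any real program), requesting a fixed task count launches that many copies of the command:

```shell
# Sketch, assuming a Slurm cluster is available: launch 8 tasks of
# hostname; Slurm decides how they are distributed across the
# allocated nodes, printing one line of output per task.
srun --ntasks=8 hostname
```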
--ntasks-per-node=<ntasks>
       Request that at most ntasks be invoked on each node. Meant to be used with the --nodes option.
       This is related to --cpus-per-task=ncpus, but does not require knowledge of the actual number of
       cpus on each node. In some cases, it is more convenient to be able to request that no more than
       a specific number of tasks be invoked on each node. Examples of this include submitting a hybrid
       MPI/OpenMP app where only one MPI "task/rank" should be assigned to each node while allowing the
       OpenMP portion to utilize all of the parallelism present in the node, or submitting a single
       setup/cleanup/monitoring job to each node of a pre-existing allocation as one step in a larger
       job script.
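The hybrid MPI/OpenMP case described above can be sketched as a batch script (hybrid_app is a placeholder binary; the node and cpu counts are illustrative assumptions, not prescriptions):

```shell
#!/bin/bash
#SBATCH --nodes=4             # assumed node count for this sketch
#SBATCH --ntasks-per-node=1   # one MPI rank per node
#SBATCH --cpus-per-task=8     # cores reserved for OpenMP threads of each rank

# Let each MPI rank's OpenMP runtime use all cpus Slurm allocated to it.
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

srun ./hybrid_app             # placeholder hybrid MPI/OpenMP binary
```

Here srun inherits the task layout from the #SBATCH directives, so no extra flags are needed at launch time.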
-d, --dependency=<dependency_list>
       Defer the start of this job until the specified dependencies have been satisfied.
       <dependency_list> is of the form <type:job_id[:job_id][,type:job_id[:job_id]]>. Many jobs can
       share the same dependency and these jobs may even belong to different users. The value may be
       changed after job submission using the scontrol command.
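A common use of the type:job_id form is chaining batch jobs. As a sketch (step1.sh and step2.sh are hypothetical job scripts), the ID of the first job is captured and used to build an afterok dependency for the second:

```shell
# Hypothetical sketch: run step2 only if step1 completes successfully.
jobid=$(sbatch --parsable step1.sh)   # --parsable prints just the job ID
dep="afterok:$jobid"                  # type:job_id, as described above
sbatch --dependency="$dep" step2.sh
```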
source manpages: srun