Submit a Python job with Slurm

#!/bin/bash -l

##############################
#       Job blueprint        #
##############################

# Give your job a name, so you can recognize it in the queue overview
#SBATCH --job-name=example

# Define how many nodes you need. Here, we ask for 1 node.
# Each node has 16 or 20 CPU cores.
#SBATCH --nodes=1
# You can further define the number of tasks with --ntasks-per-*
# See "man sbatch" for details, e.g. --ntasks=4 will ask for 4 CPUs.
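# For example (left commented out with the double hash, so not in effect here),
# a job that needs four tasks on this single node could ask for them like this:
##SBATCH --ntasks=4
##SBATCH --cpus-per-task=1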

# Define how long the job will run in real time. This is a hard cap, meaning
# that if the job runs longer than what is written here, it will be
# force-stopped by the server. If you make the expected time too long, it will
# take longer for the job to start. Here, we say the job will take 5 minutes.
#              d-hh:mm:ss
#SBATCH --time=0-00:05:00

# Define the partition on which the job shall run. May be omitted.
#SBATCH --partition normal

# How much memory you need:
# --mem will define memory per node and
# --mem-per-cpu will define memory per CPU/core. Choose one of the two.
#SBATCH --mem-per-cpu=1500MB
##SBATCH --mem=5GB    # this one is not in effect, due to the double hash

# Turn on mail notification. There are many possible self-explanatory values:
# NONE, BEGIN, END, FAIL, ALL (which includes all of the aforementioned)
# For more values, check "man sbatch"
#SBATCH --mail-type=END,FAIL
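# Notifications are sent to the address given with --mail-user; the address
# below is only a placeholder, and the double hash keeps the line inactive:
##SBATCH --mail-user=you@example.com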

# You may not place any commands before the last SBATCH directive

# Define and create a unique scratch directory for this job
SCRATCH_DIRECTORY=/global/work/${USER}/${SLURM_JOBID}.stallo-adm.uit.no
mkdir -p ${SCRATCH_DIRECTORY}
cd ${SCRATCH_DIRECTORY}

# You can copy everything you need to the scratch directory
# ${SLURM_SUBMIT_DIR} points to the path where this script was submitted from
cp ${SLURM_SUBMIT_DIR}/myfiles*.txt ${SCRATCH_DIRECTORY}

# This is where the actual work is done. In this case, the script only waits.
# The time command is optional, but it may give you a hint on how long the
# command took.
time sleep 10
#sleep 10
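
# Since the goal is to submit a Python job, the work step above would normally
# run your script instead of sleeping. A commented-out sketch (the module name,
# script name and output file are placeholders for your own setup):
# module load Python
# time python3 ${SLURM_SUBMIT_DIR}/my_script.py > my_output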

# After the job is done, we copy our output back to $SLURM_SUBMIT_DIR
cp ${SCRATCH_DIRECTORY}/my_output ${SLURM_SUBMIT_DIR}

# In addition to the copied files, you will also find a file called
# slurm-<job ID>.out (e.g. slurm-1234.out) in the submit directory. This file
# will contain all output that was produced during runtime, i.e. stdout and stderr.
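# (If you want a different name for that file, it can be set with an
# #SBATCH --output=... directive near the top of the script.)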

# After everything is saved to the home directory, delete the work directory to
# save space on /global/work
cd ${SLURM_SUBMIT_DIR}
rm -rf ${SCRATCH_DIRECTORY}

# Finish the script
exit 0
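To use this blueprint, save it to a file (submit_job.sh below is only an example name) and hand it to Slurm with sbatch; squeue then shows the job while it waits in the queue or runs:

sbatch submit_job.sh
squeue -u ${USER}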