Submit a cluster job from the GUI

Most SPHIRE jobs require heavy computing and are best suited to a cluster environment, although since the Beta Patch 1 release (sphire_beta_20170602) all SPHIRE pipeline commands also run on a workstation (or a single-node system).

Since every cluster environment is a little different and everyone has their own preference in job schedulers, we cannot provide support for every possible cluster configuration. Nevertheless, SPHIRE's GUI can create a submission script for you, provided that you supply a simple template script. Currently, these are the variables that can be parsed by the GUI:

Variable                  Description
XXX_SXMPI_NPROC_XXX       Defines the total number of cores to be used.
XXX_SXMPI_JOB_NAME_XXX    Defines the name of the job. It can be used to name the stdout and stderr files and defines the name of the submission script that is created.
XXX_SXCMD_LINE_XXX        Replaced by the actual command to run SPHIRE.
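To make the substitution concrete, here is a minimal sketch of what the GUI effectively does with one line of a template. The core count (96) and the sxisac.py command line are hypothetical example values, not output of the GUI itself:

```shell
# One line of a template, containing two of the placeholders above.
template='mpirun -np XXX_SXMPI_NPROC_XXX XXX_SXCMD_LINE_XXX'

# The GUI performs a plain text substitution; sed illustrates the idea.
# "96" and the sxisac.py command line are made-up example values.
echo "$template" \
  | sed -e 's/XXX_SXMPI_NPROC_XXX/96/' \
        -e 's|XXX_SXCMD_LINE_XXX|sxisac.py bdb:stack isac_out|'
# Result: mpirun -np 96 sxisac.py bdb:stack isac_out
```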

For instance, if you want to submit a 2D clustering job (ISAC) from the GUI, you should configure the MPI section of the SPHIRE GUI as shown below:

This will create a submission script, named after the value of XXX_SXMPI_JOB_NAME_XXX, with the correct setup and submit it to the cluster.

Example Script

Example scripts can be downloaded here: QSUB EXAMPLES.

Below you can find an example of the kind of submission template that we use on our local cluster, where the scheduler is Son of Grid Engine (SGE).

#!/bin/bash
set -x

#$ -N XXX_SXMPI_JOB_NAME_XXX
#$ -pe mpi_fillup XXX_SXMPI_NPROC_XXX
#$ -cwd
#$ -o XXX_SXMPI_JOB_NAME_XXX.o
#$ -e XXX_SXMPI_JOB_NAME_XXX.e

# Sets up the environment for SPHIRE.
source /work/software/Sphire/EMAN2/eman2.bashrc
MPIRUN=$(which mpirun)

# Creates a file with the nodes used for the calculation. It can be given
# to mpirun, but with the current setup it is not necessary.
hosts=$(cut -f 1 -d . "$PE_HOSTFILE" | sort | fmt -w 30 | sed 's/ /,/g')
awk '{print $1}' "$PE_HOSTFILE" > hostfile.$JOB_ID

# The GUI replaces this placeholder with the actual SPHIRE command line.
$MPIRUN -np XXX_SXMPI_NPROC_XXX XXX_SXCMD_LINE_XXX


If you use this script and set the variable XXX_SXMPI_JOB_NAME_XXX as above (sxisac_test), then stderr and stdout will be written to files called sxisac_test.e and sxisac_test.o, respectively.

In case your cluster uses a different job scheduler, please contact your system administrator to create an appropriate template.
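For schedulers other than SGE, the same three placeholders can be used in the site-specific syntax. As an illustration only, a comparable template for SLURM might look like the sketch below; the sbatch options and the eman2.bashrc path (copied from the SGE example above) will likely need adjusting for your site:

```shell
#!/bin/bash
#SBATCH --job-name=XXX_SXMPI_JOB_NAME_XXX
#SBATCH --ntasks=XXX_SXMPI_NPROC_XXX
#SBATCH --output=XXX_SXMPI_JOB_NAME_XXX.o
#SBATCH --error=XXX_SXMPI_JOB_NAME_XXX.e

# Sets up the environment for SPHIRE (adjust to your installation).
source /work/software/Sphire/EMAN2/eman2.bashrc

# The GUI replaces this placeholder with the actual SPHIRE command line.
mpirun -np XXX_SXMPI_NPROC_XXX XXX_SXCMD_LINE_XXX
```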