ISAC (Iterative Stable Alignment and Clustering) is a 2D classification algorithm that sorts cryo-EM particles into classes depicting the same view of a target protein. It is built around alternating rounds of equal-size k-means clustering and repeated 2D alignment.
Yang, Z., Fang, J., Chittuluru, J., Asturias, F. J. and Penczek, P. A. (2012) Iterative stable alignment and clustering of 2D transmission electron microscope images. Structure 20, 237–247.
ISAC2 is an improved version of ISAC, and the default tool to produce 2D class averages in the SPHIRE (git) software package and the TranSPHIRE automated pipeline for processing cryo-EM data. ISAC2 is a CPU-only implementation that is usually run on a computer cluster.
GPU ISAC is designed to run ISAC2 on a single workstation by outsourcing its computationally expensive calculations to any available GPUs.
Note: To confirm a working installation of SPHIRE 1.3 (or later), run which sphire in your terminal. It should print the path to your SPHIRE installation.
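If you prefer a scriptable check, a minimal sketch that relies only on the which sphire call above could look like this:
# Warn if sphire cannot be found on the PATH
if ! which sphire > /dev/null; then
    echo "sphire not found in PATH - check your SPHIRE installation"
fi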
tar -xf GPU_ISAC_CHIMERA.tar
echo $PATH
Confirm your PATH variable contains the path to your cuda/bin folder.
echo $LD_LIBRARY_PATH
Confirm your LD_LIBRARY_PATH variable contains the path to your cuda/lib64 folder.
export PATH=/path/to/cuda/bin:${PATH}
export LD_LIBRARY_PATH=/path/to/cuda/lib64:${LD_LIBRARY_PATH}
Here /path/to/cuda/bin and /path/to/cuda/lib64 need to be replaced with the real paths to the respective folders. If you do not know where to find them, by default they are located in /usr/local/cuda.
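To verify that the exported paths point to a working CUDA installation, you can query the CUDA compiler (this assumes a standard CUDA toolkit that ships nvcc in its bin folder):
which nvcc
nvcc --version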
cd vChimera/cuda
Note: This assumes you did not change directories after unpacking the .tar archive.
nvcc gpu_aln_common.cu gpu_aln_noref.cu -o gpu_aln_pack.so -shared -Xcompiler -fPIC -lcufft -std=c++11
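If compilation succeeds, this produces the shared library gpu_aln_pack.so in the current folder, which you can confirm with:
ls -lh gpu_aln_pack.so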
cd ../eman2/sparx/libpy
sed -i.bkp "s|/home/schoenf/work/code/cuISAC/cuda|$(realpath ../../../cuda)|g" applications.py
sed -i.bkp2 's|statistics.sum_oe( data, "a", CTF, EMData(), myid=myid|statistics.sum_oe( data, "a", CTF, EMData()|g' applications.py
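Both sed calls keep a backup of the file before modifying it (.bkp and .bkp2). To review what was changed, you can diff the original backup against the patched file:
diff applications.py.bkp applications.py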
cd ../bin
Note: We are now in the eman2/sparx/bin folder of your GPU ISAC installation.
ln -rs ../libpy/* .
Note: Don't forget the dot at the end!
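To confirm the links were created, you can inspect one of them; applications.py, for example, should now appear as a symlink pointing back into the libpy folder:
ls -l applications.py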
sed -i.bkp "s|/home/schoenf/applications/sphire/v1.1/envs/sphire_1.3/bin|$(dirname $(which sphire))|g" sxisac2_gpu.py
sed -i.bkp2 "s/^\(.*options, args.*\)$/\1\n os.environ['CUDA_VISIBLE_DEVICES'] = options.gpu_devices\n options.gpu_devices = ','.join(map(str, range(len(options.gpu_devices.split(',')))))/g" sxisac2_gpu.py
sed -i.bkp3 "s/output_text = \"\n/output_text = \"/g" sxisac2_gpu.py
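As before, each sed call leaves a backup (.bkp, .bkp2, .bkp3), so the combined changes to sxisac2_gpu.py can be reviewed with:
diff sxisac2_gpu.py.bkp sxisac2_gpu.py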
An example call to use GPU ISAC looks as follows:
mpirun -np 6 /path/to/sxisac2_gpu.py bdb:path/to/stack out_dir --CTF --radius=160 --target_radius=29 --target_nx=76 --img_per_grp=100 --minimum_grp_size=60 --thld_err=0.7 --center_method=0 --gpu_devices=0,1
More readable:
mpirun -np 6 /path/to/sxisac2_gpu.py \
bdb:path/to/stack \
out_dir \
--CTF \
--radius=160 \
--target_radius=29 \
--target_nx=76 \
--img_per_grp=100 \
--minimum_grp_size=60 \
--thld_err=0.7 \
--center_method=0 \
--gpu_devices=0,1
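If your particles are stored in an .hdf stack rather than a .bdb database, the same call is used without the bdb: prefix (see the parameter notes below); all paths and values here are placeholders:
mpirun -np 6 /path/to/sxisac2_gpu.py path/to/stack.hdf out_dir --CTF --radius=160 --target_radius=29 --target_nx=76 --img_per_grp=100 --minimum_grp_size=60 --thld_err=0.7 --center_method=0 --gpu_devices=0,1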
[ ! ] - Mandatory parameters in the GPU ISAC call:
- Replace /path/to/sxisac2_gpu.py with the path to your sxisac2_gpu.py file.
- Replace path/to/stack with the path to your input .bdb stack. If you are using an .hdf stack, you need to remove the bdb: prefix.
- Replace out_dir with the path to your preferred output directory.
- Set --radius=160 to the radius of your particle (in pixels).

[ ? ] - Optional parameters in the GPU ISAC call:
- In mpirun -np 6, the number can be set to the number of your CPU processors (e.g., if you have a quad-core CPU, you would use 4 here).
- With --gpu_devices you can set which GPUs to use. This example uses two GPUs with id values 0 and 1, respectively. You can check the id values of your available GPUs by executing nvidia-smi in your terminal (GPUs are sorted by capability, with 0 being your strongest GPU).
- Use --img_per_grp to limit the maximum size of individual classes. Empirically, a class size of 100-200 particles (30-50 for negative stain) has proven successful when dealing with around 100,000 particles.
- Use --minimum_grp_size to limit the minimum size of individual classes. In general, this value should be around 50-60% of your maximum class size; see the short worked example below.
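As a quick worked example of this guideline (the numbers are illustrative, not recommended defaults):
# Choose the minimum class size as ~60% of the maximum class size
img_per_grp=100
minimum_grp_size=$(( img_per_grp * 60 / 100 ))
echo $minimum_grp_size   # prints 60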