=== Download ===
  
  * GPU ISAC is currently developed as a manually installed add-on for SPHIRE and distributed as a .zip file that can be found here: {{:gpu_isac:gpu_isac_2.3.4.zip|GPU ISAC (v2.3.4) download link}}.
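A minimal sketch of unpacking the archive (the download location, the target directory, and the name of the extracted folder are assumptions here; adjust them to your system):

<code>
# Assumed paths: the archive was saved to ~/Downloads and ~/programs is
# where you keep manually installed tools.
mkdir -p ~/programs
unzip ~/Downloads/gpu_isac_2.3.4.zip -d ~/programs
# The extracted directory is then used as the installation folder in the steps below.
</code>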
  
----
  - **Open a terminal**  and navigate to your installation folder.
  - **Run the installation script**:

<code>
./install.sh
</code>

All done!
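As a quick check that the installation succeeded, you can print the GPU ISAC help text (a sketch; replace the path with the ''bin/'' directory of your own installation):

<code>
python /path/to/sp_isac2_gpu.py -h
</code>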

<note>During installation, GPU ISAC links itself to the SPHIRE GUI and can be called from there like any other SPHIRE program!</note>
  
===== Running GPU ISAC =====
  
When calling GPU ISAC from the terminal, an example call looks as follows:
  
<code>
mpirun python /path/to/sp_isac2_gpu.py bdb:path/to/stack path/to/output --CTF --radius=160 --img_per_grp=100 --minimum_grp_size=60 --gpu_devices=0,1
  
</code>
  
<code>
mpirun python /path/to/sp_isac2_gpu.py
bdb:path/to/stack
path/to/output
--CTF
--radius=160
--img_per_grp=100
--minimum_grp_size=60
--gpu_devices=0,1
</code>
  
  * ''mpirun''  is not a GPU ISAC parameter, but is required to launch GPU ISAC using MPI parallelization (GPU ISAC uses MPI to parallelize CPU computations and MPI/CUDA to distribute and parallelize GPU computations).
-  * ''/path/to/sxisac2_gpu.py''  is the path to your **sxisac2_gpu.py**  file. If you followed these instructions it should be ''your/installation/path/gpu_isac_2.2/bin/sxisac2_gpu.py''.+  * ''/path/to/sp_isac2_gpu.py''  is the path to your **sp_isac2_gpu.py**  file. If you followed these instructions it should be ''your/installation/path/gpu_isac_2.2/bin/sp_isac2_gpu.py''.
  * ''path/to/stack''  is the path to your **input .bdb stack**. If you prefer to use an **.hdf**  stack, simply remove the ''bdb:''  prefix.
  * ''path/to/output''  is the path to your preferred **output directory**.
  
  * Use the ''<nowiki>--</nowiki>CTF''  flag to **apply phase flipping**  to your particles.
  * Use the ''<nowiki>--</nowiki>VPP''  flag with phase plate data. This flag may also be useful for non-phase-plate data, such as membrane proteins embedded in membranes, or more generally whenever low-resolution signal may dominate the alignment. The ''<nowiki>--</nowiki>VPP''  option divides each image by its 1D rotational power spectrum, in other words it "whitens" the Fourier data (see the example call after this list).
  * Use ''<nowiki>--</nowiki>img_per_grp''  to limit the **maximum size of individual classes**. Empirically, a class size of 100-200 particles (30-50 for negative stain) has proven successful when dealing with around 100,000 particles. (This may differ for your data set and you can use GPU ISAC to find out; see below.)
  * Use ''<nowiki>--</nowiki>minimum_grp_size''  to limit the **minimum size of individual classes**. In general, this value should be around 50-60% of your maximum class size.
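As an illustration of these flags, a call for phase plate data stored in an .hdf stack might look like this (a sketch only; the stack name, output folder, and parameter values are placeholders to adapt to your own data):

<code>
mpirun python /path/to/sp_isac2_gpu.py path/to/stack.hdf path/to/output_vpp --VPP --radius=160 --img_per_grp=200 --minimum_grp_size=100 --gpu_devices=0,1
</code>

Here ''<nowiki>--</nowiki>VPP''  is used instead of ''<nowiki>--</nowiki>CTF'', and the minimum class size is set to 50% of the maximum class size, following the guidelines above.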
A full list of all available parameters can be printed as follows:

<code>
  
mpirun python /path/to/sp_isac2_gpu.py -h
  
</code>
The same help can also be printed without ''mpirun'':

<code>
python /path/to/sp_isac2_gpu.py -h
  
</code>
  
<code>
mpirun python bin/sp_isac2_gpu.py 'bdb:/your/path/to/Particles/#stack' 'isac_out_TcdA1' --CTF --radius=145 --img_per_grp=100 --minimum_grp_size=60 --gpu_devices=0
  
</code>
  
  * Replace ''/your/path/to/Particles/''  with the path to the ''Particles/''  directory you just downloaded.
  * Optional: Replace ''<nowiki>--</nowiki>gpu_devices=0''  with ''<nowiki>--</nowiki>gpu_devices=0,1''  if you have two GPUs available (and so on).
  
The final averages can then be found in ''isac_out_TcdA1/ordered_class_averages.hdf''. You can look at them using ''e2display.py''  (or any other displaying program of your choice) and should see averages like these:
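For example, to open them with ''e2display.py''  (assuming it is available on your ''PATH''  via your SPHIRE/EMAN2 installation):

<code>
e2display.py isac_out_TcdA1/ordered_class_averages.hdf
</code>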
  
----

**GPU ISAC v2.3.4**

  * Updated the installer to automatically link GPU ISAC to the SPHIRE GUI.

**GPU ISAC v2.3.3**

  * Internal changes only.
  
**GPU ISAC v2.3.1 & v2.3.2 (hotfix releases)**