{{ :gpu_isac:gpu_isac_logo.png?600 }}
  
===== Overview =====

**ISAC** (//Iterative Stable Alignment and Clustering//) is a **2D classification algorithm**. It sorts a given stack of cryo-EM particles into different classes that share the same view of a target protein. ISAC is built around iterations of alternating equal-size k-means clustering and repeated 2D alignment routines.
  
<note>
You can find the details of the ISAC algorithm in [[https://www.cell.com/fulltext/S0969-2126(11)00467-9|this paper]]. To cite ISAC, use the following:

''Yang, Z., Fang, J., Chittuluru, J., Asturias, F. J. and Penczek, P. A. (2012) Iterative stable alignment and clustering of 2D transmission electron microscope images. Structure 20, 237–247.''
</note>
===== ISAC versions =====
  
  * **ISAC** is the initial version as described in the original paper. This implementation is now obsolete and has been replaced by ISAC2 and GPU ISAC (see below).
  * **ISAC2** is an improved version of ISAC and is used by default to produce 2D class averages in the **[[http://sphire.mpg.de/wiki/doku.php?id=downloads:sphire_1_3|SPHIRE]]** ([[https://github.com/cryoem/eman2|git]]) software package and the **[[https://github.com/MPI-Dortmund/transphire|TranSPHIRE]]** automated pipeline for processing cryo-EM data. ISAC2 is a CPU-only implementation and was developed to run on a computer cluster.
  * **GPU ISAC** was developed to run ISAC2 on a single workstation by outsourcing its computationally expensive bottleneck calculations to any available GPUs, while otherwise keeping its MPI-based CPU parallelization intact. GPU ISAC is provided as an add-on to SPHIRE that can be installed manually (see below).
===== Download & Installation =====
  
<note important>
**Before you start**, please note the following **system requirements**:
  
----
  
  * **CUDA:** These installation instructions assume that CUDA is already installed on your system. You can confirm this by running ''nvcc <nowiki>--</nowiki>version'' in your terminal; the resulting output should list the version of your installed CUDA compilation tools.
  * **SPHIRE:** In order to use GPU ISAC, SPHIRE needs to be installed. You can find the SPHIRE download and installation instructions [[http://sphire.mpg.de/wiki/doku.php?id=downloads:sphire_1_3#download|here]]. You can confirm a working SPHIRE installation by running ''which sphire'' in your terminal; the resulting output should give you the path to your SPHIRE installation (the path should indicate a version number of 1.3 or higher).
</note>
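
To run both checks at once, a quick terminal sketch (the commands are the ones named above; the exact output depends on your system):

<code>
# Check the CUDA toolkit; the output should list your CUDA compilation tools.
nvcc --version

# Check SPHIRE; the printed path should indicate a version of 1.3 or higher.
which sphire
</code>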
  
=== Download ===
  
  * GPU ISAC is currently developed as a manually installed add-on for SPHIRE and distributed as a .zip file that can be found here: {{:gpu_isac:gpu_isac_2.3.4.zip|GPU ISAC (v2.3.4) download link}}.
  
----
  
=== Installation ===

Before you start, make sure your SPHIRE environment is activated.

<hidden How to activate your SPHIRE environment:>
  * During the SPHIRE installation, an Anaconda environment for SPHIRE was created. You can list your available Anaconda environments using: <code>conda env list</code>
  * Look for your SPHIRE environment and activate it using either: <code>conda activate NAME_OF_YOUR_ENVIRONMENT</code> or <code>source activate NAME_OF_YOUR_ENVIRONMENT</code> Which of these you need depends on your system and Anaconda installation.
</hidden>
  
GPU ISAC comes with a handy installation script that can be used as follows:
  
  - **Extract the archive** to your chosen GPU ISAC installation folder.
  - **Open a terminal** and navigate to your installation folder.
  - **Run the installation script** (a combined sketch of all three steps follows below): <code>./install.sh</code>
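
Taken together, the three steps might look like this in practice; this is only a sketch, and the download location and installation path (''~/apps/gpu_isac'') are example values:

<code>
# Extract the downloaded archive to the chosen installation folder (example path).
unzip gpu_isac_2.3.4.zip -d ~/apps/gpu_isac

# Navigate to the installation folder and run the installation script.
cd ~/apps/gpu_isac
./install.sh
</code>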
  
All done!

<note>During installation, GPU ISAC links itself to the SPHIRE GUI and can afterwards be called from there like any other SPHIRE program!</note>
  
===== Running GPU ISAC =====
  
When calling GPU ISAC from the terminal, an example call looks as follows:
  
<code>
mpirun python /path/to/sp_isac2_gpu.py bdb:path/to/stack path/to/output --CTF --radius=160 --img_per_grp=100 --minimum_grp_size=60 --gpu_devices=0,1
</code>
  
The same call is easier to read with one parameter per line:
  
<code>
mpirun python /path/to/sp_isac2_gpu.py
bdb:path/to/stack
path/to/output
--CTF
--radius=160
--img_per_grp=100
--minimum_grp_size=60
--gpu_devices=0,1
</code>
  
**[ ! ] - Mandatory** parameters in the GPU ISAC call:
  
  * ''mpirun'' is not a GPU ISAC parameter, but is required to launch GPU ISAC using MPI parallelization (GPU ISAC uses MPI to parallelize CPU computations and MPI/CUDA to distribute and parallelize GPU computations).
  * ''/path/to/sp_isac2_gpu.py'' is the path to your **sp_isac2_gpu.py** file. If you followed these instructions it should be ''your/installation/path/gpu_isac_2.3.4/bin/sp_isac2_gpu.py''.
  * ''path/to/stack'' is the path to your **input .bdb stack**. If you prefer to use an **.hdf** stack, simply remove the ''bdb:'' prefix.
  * ''path/to/output'' is the path to your preferred **output directory**.
  * ''<nowiki>--</nowiki>radius=160'' is the **radius of your target particle** (in pixels) and has to be set accordingly.
  * ''<nowiki>--</nowiki>gpu_devices'' tells GPU ISAC **what GPUs to use** by specifying their system id values.

<hidden What GPUs do I have and what are their system id values?>
You can use ''nvidia-smi'' in your terminal to see what GPUs are available on your machine. This also lists their id values and sorts all entries by [[https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#compute-capabilities|CUDA compute capability]], where your most powerful GPU has id value 0 and your least powerful GPU has the highest id value:

{{ :gpu_isac:gpu_isac_nvidia-smi.png }}

Above: Example output of ''nvidia-smi''. GPU system id values and GPU names are marked in red. Among other things, this also lists your current driver version, marked in turquoise.
</hidden>
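
If you prefer a compact listing over the full ''nvidia-smi'' table, its query mode can help; this is a sketch and assumes a driver version whose ''nvidia-smi'' supports these query flags:

<code>
# Print one line per GPU with its id value and name.
nvidia-smi --query-gpu=index,name --format=csv
</code>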
  
**[?] - Optional** parameters recommended to be used when running GPU ISAC:
  
  * Use the ''<nowiki>--</nowiki>CTF'' flag to **apply phase flipping** to your particles.
  * Use ''<nowiki>--</nowiki>img_per_grp'' to limit the **maximum size of individual classes**. Empirically, a class size of 100-200 particles (30-50 for negative stain) has proven successful when dealing with around 100,000 particles. (This may differ for your data set and you can use GPU ISAC to find out; see below.)
  * Use ''<nowiki>--</nowiki>minimum_grp_size'' to limit the **minimum size of individual classes**. In general, this value should be around 50-60% of your maximum class size; a concrete example follows below.
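
For example, picking a maximum class size of 150 and applying the 50-60% rule of thumb gives a minimum class size of roughly 75-90, so a matching call could look like this (all values are illustrative only):

<code>
# Maximum class size 150; minimum class size set to ~60% of that (90).
mpirun python /path/to/sp_isac2_gpu.py bdb:path/to/stack path/to/output --CTF --radius=160 --img_per_grp=150 --minimum_grp_size=90 --gpu_devices=0,1
</code>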
  
<note>
  * An up-to-date list of **all GPU ISAC parameters** can always be printed by using the ''-h'' parameter (in this case you do not need to specify any other parameters): <code>mpirun python /path/to/sp_isac2_gpu.py -h</code> or simply <code>python /path/to/sp_isac2_gpu.py -h</code>
  * The online documentation of **ISAC2 parameters** can be found [[http://sphire.mpg.de/wiki/doku.php?id=pipeline:isac:sxisac2|here]].
  * **Additional utilities** that are helpful when using any version of ISAC can be found [[http://sphire.mpg.de/wiki/doku.php?id=pipeline:isac:start|here]].
  * More information about **using ISAC for 2D classification** can also be found in the ISAC chapter of the official [[ftp://ftp.gwdg.de/pub/misc/sphire/sphire_1_3_tutorial/sphire_1_3.pdf|SPHIRE tutorial]] (link to .pdf file).
</note>
  
=== Example 1: Installation test run ===

This example is a test run that can be used to confirm GPU ISAC was installed successfully. It is a small stack that contains 64 artificial faces and is already included in the GPU ISAC installation package. You can process it using GPU ISAC as follows:
  
  - In your terminal, navigate to your GPU ISAC installation folder: <code>cd /gpu/isac/installation/folder</code>
  - Run GPU ISAC: <code>mpirun python bin/sp_isac2_gpu.py 'bdb:examples/isac_dummy_data_64#faces' 'isac_out_test/' --radius=32 --img_per_grp=8 --minimum_grp_size=4 --gpu_devices=0</code>
  
Note that we don't care about the quality of any produced averages here; this test is only used to make sure there are no runtime issues before a more time-consuming run is executed.
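
Once the test run finishes, a quick way to confirm it produced output is to list the output folder and open the final averages; ''ordered_class_averages.hdf'' is the final output file described below, and ''e2display.py'' is the display program used elsewhere in this guide:

<code>
# The output folder should contain the main iteration folders and final averages.
ls isac_out_test/

# Display the final class averages (any other display program works, too).
e2display.py isac_out_test/ordered_class_averages.hdf
</code>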
=== Example 2: SPHIRE tutorial data set ===

This example uses the [[https://ftp.gwdg.de/pub/misc/sphire/test_dataset/sphire_testdata_movies.tar|SPHIRE tutorial data set]] (link to .tar file) described in the [[ftp://ftp.gwdg.de/pub/misc/sphire/sphire_1_3_tutorial/sphire_1_3.pdf|SPHIRE tutorial]] (link to .pdf file). The data contains about 10,000 particles from 112 micrographs and was originally published in [[https://www.nature.com/articles/nature11987|(Gatsogiannis et al, 2013)]].
  
After downloading the data you'll notice that the extracted folder contains a multitude of subfolders. For the purposes of this example we are only interested in the ''Particles/'' folder, which stores the original data as a .bdb file.
  
You can process this stack using GPU ISAC as follows:
  
  - In your terminal, navigate to your GPU ISAC installation folder: <code>cd /gpu/isac/installation/folder</code>
  - Run GPU ISAC: <code>mpirun python bin/sp_isac2_gpu.py 'bdb:/your/path/to/Particles/#stack' 'isac_out_TcdA1' --CTF --radius=145 --img_per_grp=100 --minimum_grp_size=60 --gpu_devices=0</code>
      * Replace ''/your/path/to/Particles/'' with the path to the ''Particles/'' directory you just downloaded.
      * Optional: Replace ''<nowiki>--</nowiki>gpu_devices=0'' with ''<nowiki>--</nowiki>gpu_devices=0,1'' if you have two GPUs available (and so on).

The final averages can then be found in ''isac_out_TcdA1/ordered_class_averages.hdf''. You can look at them using ''e2display.py'' (or any other displaying program of your choice) and should see averages like these:

{{ :gpu_isac:gpu_isac_class_averages.png }}

Above: 95 class averages produced when processing the above data set using GPU ISAC. The particle stack contains 11,003 particles and the averages were computed within 6 minutes (Intel i9-7020X CPU and 2x GeForce GTX 1080 GPUs).
  
===== Usage =====

Use GPU ISAC to:
  
  * Quickly generate **2D class averages**.
  * Quickly identify **suitable parameters** for your data set.
  * Quickly gauge the **quality of your data set** before spending time on more costly processing steps.
  
<hidden Well, "suitable parameters" sounds great! How do I get those?>
Clustering cryo-EM data is a difficult problem that involves many different parameters, and it is often unclear how these impact the resulting 2D class averages. In GPU ISAC the most relevant parameters to fiddle with are:

  * **Class size:** The class (or cluster) size ''<nowiki>--</nowiki>img_per_grp'' in ISAC determines how many particles are taken together in order to construct a new 2D class average. High values will mean cleaner averages, but might also lump together particles that should be sorted into different classes. If you are using GPU ISAC to screen a set of 20,000 to 40,000 particles, then 100 particles per class is a good starting value. Further, the //minimum// size of each class ''<nowiki>--</nowiki>minimum_grp_size'' should be around 60% of the set class size.
  * **Threshold error:** The ''<nowiki>--</nowiki>thld_err'' parameter determines how similar subsequently produced averages have to be in order to be considered stable enough. A value of ''0.7'' is very stringent, while ''1.4'' is less so, and you should not need a higher value than ''2.4''.

Since GPU ISAC processes small stacks of about 10,000 to 20,000 particles fairly quickly, you can try several runs with different values for ''<nowiki>--</nowiki>img_per_grp'' and ''<nowiki>--</nowiki>thld_err'' to see which combination gives you the best results (a screening sketch follows below). Once you are happy with the results, you can use these parameters for a full-sized run of (GPU) ISAC. Good luck! :)
</hidden>
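
A minimal screening sketch under these assumptions (paths and parameter values are examples only; ''<nowiki>--</nowiki>minimum_grp_size'' is omitted here since recent versions default it to 60% of the class size):

<code>
# Screen a few class sizes and stability thresholds; each run writes to its
# own output directory. All values below are examples, not recommendations.
for IMG in 100 150 200; do
  for ERR in 0.7 1.4; do
    mpirun python /path/to/sp_isac2_gpu.py bdb:path/to/stack \
      "isac_screen_img${IMG}_err${ERR}/" \
      --CTF --radius=160 --img_per_grp=${IMG} --thld_err=${ERR} --gpu_devices=0,1
  done
done
</code>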
  
===== GPU ISAC output files =====
GPU ISAC produces a multitude of output files that can be used to analyze the success of a run, even while it is still ongoing. These include the following:
  
  * **Main iteration folders:** As GPU ISAC is running, it performs multiple "main iterations" and "generations" that are stored within the output folder structure. New class averages are produced during every iteration/generation and can be looked at during runtime without having to wait for the overall process to conclude. This can help to **quickly gauge the quality of a data set**. Check ''path/to/output/mainXXX/generationYYY'' for the ''.hdf'' files that contain any newly produced class averages.
  * In both the main iteration folders and the base output folder you will find ''processed_images.txt'' files. These contain the indices of all processed particles and can be used to determine how many particles GPU ISAC accounted for during classification (see the sketch below).
  * **The final averages** are stored in ''path/to/output/ordered_class_averages.hdf''.
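
For example, a rough particle count can be obtained by counting lines; this assumes ''processed_images.txt'' stores one particle index per line:

<code>
# Count how many particle indices were recorded after classification.
wc -l path/to/output/processed_images.txt
</code>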
- +
===== Release notes =====
  
  * The current development goal of GPU ISAC is to run as fast as possible on a single machine. Because of this priority, **GPU ISAC does not yet run on multiple nodes**. This is planned to change as soon as the currently known bottlenecks have all been converted to run on the available GPUs.
  
----
  
**Known issues**
  
  * In some cases **when using CUDA version 11, GPU ISAC receives a kill signal interrupt**. We're investigating the issue, but recommend using a lower version (confirmed working with CUDA 9 and 10) until it is resolved. You can use ''nvcc <nowiki>--</nowiki>version'' in your terminal to see the CUDA version you are using.

----

**GPU ISAC v2.3.4**

  * Updated the installer to automatically link GPU ISAC to the SPHIRE GUI.

**GPU ISAC v2.3.3**

  * Internal changes only.
  
**GPU ISAC v2.3.1 & v2.3.2 (hotfix releases)**
  
  * Changed data handling, which results in a massive reduction in overall memory usage and increased pre-alignment performance.
  * Fixed use of the ''-h'' parameter to display the help.
  * Fixed an error in the pre-alignment progress bar that made it seem as if it did not run to completion.
  * Minimum class size is now automatically set to 60% of the full class size if no minimum class size was specified by the user.