downloads:cryolo_1 [2019/05/17 08:35] (current), twagner
  
====crYOLO====
Version: 1.3.6
  
Uploaded: 17 May 2019
  
[[ftp://ftp.gwdg.de/pub/misc/sphire/crYOLO_V1_3_6/cryolo-1.3.6.tar.gz|DOWNLOAD GPU VERSION]]

[[ftp://ftp.gwdg.de/pub/misc/sphire/crYOLO_V1_3_6/cryolo-1.3.6.dev0.tar.gz|DOWNLOAD CPU VERSION]]
  
====crYOLO boxmanager====
  
Version: 1.2.2
  
Uploaded: 03 May 2019
  
[[ftp://ftp.gwdg.de/pub/misc/sphire/crYOLO_BM_V1_2_2/cryoloBM-1.2.2.tar.gz|DOWNLOAD]]
  
[{{ :downloads:cryolophosaurusdb.jpg?150|**crYOLO Phosaurus**Net's eponym}}]
  
=== For cryo images ===
Number of datasets: 38 real, 10 simulated, 10 particle-free datasets on various grids with contaminations
  
Uploaded: 17 May 2019
  
[[ftp://ftp.gwdg.de/pub/misc/sphire/crYOLO-GENERAL-MODELS/gmodel_phosnet_20190516.h5|DOWNLOAD]]
  
  * <del>Issue 13: After picking it can happen that some of the boxes are not fully immersed in the image. Will be fixed in 1.2.4.</del>
  * <del>Issue 14: Parallelization in filament mode is broken. Will be fixed in 1.2.4.</del>
  * <del>Issue 15: If %%--%%gpu_fraction is used, crYOLO always uses GPU 0. Will be fixed in 1.3.1.</del>
  * <del>Issue 16: %%--%%gpu_fraction only works for prediction, not for training. Will be fixed in 1.3.2.</del>
  * Issue 17: On-the-fly filtering (%%--%%otf) is slower than filtering beforehand, as the filtering is not parallelized in this case.
  * <del>Issue 18: Prediction is broken in 1.3.2. It removes all particles as it claims they are not fully immersed in the image.</del>
  * <del>Issue 19: Filtering does not work if the target image directory is an absolute path.</del>
  * <del>Issue 20: crYOLO 1.3.4 has a normalization bug. During training the images are normalized separately, but during prediction it is done batch-wise. Workaround: Use -pbs 1 during prediction. It will be fixed in 1.3.5.</del>
  * <del>Issue 21: The search range for filament tracing is too low for many datasets. To check if you are affected: Use your trained model and pick without the filament options, then check if your filaments are nicely picked (many consecutive boxes on a filament). In the next version, the search range will be increased and added as an optional parameter.</del>
  * <del>Issue 22: If absolute paths are used in the field "train_image" in your configuration file, filtering is skipped.</del>
  
  
**Install crYOLO!**
  
The following instructions assume that pip and [[https://conda.io/projects/conda/en/latest/user-guide/install/index.html|anaconda]] or [[https://docs.conda.io/en/latest/miniconda.html|miniconda]] are available.
In case you have an old cryolo environment installed, first remove the old one with:
<code>
conda env remove -n cryolo
</code>
Install crYOLO:
<code>
conda install numpy==1.14.5
pip install cryolo-X.Y.Z.tar.gz
pip install cryoloBM-X.Y.Z.tar.gz
</code>
  
**That's it!**

You might want to check if everything is running as expected. Here is a reference example:

[[http://sphire.mpg.de/wiki/doku.php?id=cryolo_reference_example#reference_setup|Reference example with TcdA1]]
  
===== Run it on the CPU =====
  
There is also a way to run crYOLO on the CPU. To use it, just install the CPU version provided in the download section. This is especially useful if you would like to apply the generalized model and don't have an NVIDIA GPU.
  
Picking with crYOLO is also quite fast on the CPU. On my local machine (Intel i9) it takes roughly 1 second per micrograph, and on our low-performance notebooks (Intel i3) 4 seconds.
  
Training crYOLO is much more computationally expensive. Training a model from scratch with 14 micrographs takes 34 minutes per epoch on the CPU of my local machine. Given that you often need 25 epochs until convergence, it is a task to do overnight (~12 hours). However, you might want to try [[pipeline:window:cryolo##picking_particles_-_using_the_general_model_refined_for_your_data|refining the general model]], which takes 12 minutes per epoch (~5 hours).
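As a rough back-of-envelope check, the quoted per-epoch timings can be turned into total-time estimates. This is only a sketch: the 25-epoch count and the per-epoch minutes are the figures from the paragraph above, and real runs may stop earlier or later.

```python
# Rough CPU training-time estimate from the per-epoch timings quoted above:
# ~34 min/epoch when training from scratch, ~12 min/epoch when refining
# the general model, and typically ~25 epochs until convergence.
EPOCHS = 25

def total_hours(minutes_per_epoch: float, epochs: int = EPOCHS) -> float:
    """Total wall-clock training time in hours."""
    return minutes_per_epoch * epochs / 60

print(round(total_hours(34), 1))  # from scratch: ~14 h, i.e. an overnight job
print(round(total_hours(12), 1))  # refining the general model: 5 h
```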
  
====== Start picking! ======
Use the **__''[[pipeline:window:cryolo|step-by-step tutorial]]''__** to get started!
  
====== Change log ======
  
====crYOLO====

**crYOLO 1.3.6:**
  * Changed the filament search radius factor from 0.8 to 1.41 (this fixes issue 21)
  * Added the search radius factor as an [[pipeline:window:cryolo#advanced_parameters|advanced parameter]] (-sr) during prediction in filament mode
  * Improved the error message in case of a corrupted config file
  * Fixed issue 22: If absolute paths are used in the field "train_image" in your configuration file, filtering is skipped.

**crYOLO 1.3.5:**
  * Fixed issue 20: During training the images are normalized separately, but during prediction it is done batch-wise. This led to confusing results: some micrographs were perfectly picked, some totally unreasonable, even with the same defocus. This bug only affects picking; already trained models can still be used.
  * Removed unnecessary dependencies
  * Added %%__%%version%%__%% to %%__%%init%%__%%.py for easy access to the package version.

**crYOLO 1.3.4:**
  * Support for SPHIRE 1.2
  * Changed the minimum threshold for cbox files from 0.01 to 0.1. Much faster in many cases but still low enough. If -t is lower than 0.1, the new threshold is used as the minimum.
  * Installation now checks if python 3 is used.
  * Fixed issue 19: Filtering does not work if the target image directory is an absolute path.
  * Fixed a crash when %%--%%otf was specified but filtering was not specified in the config file

**crYOLO 1.3.3:**
  * Fixed issue 18: Prediction is broken in 1.3.2. It removes all particles as it claims they are not fully immersed in the image.

**crYOLO 1.3.2:**
  * Sped up prediction: Vectorized some parts of the code and optimized the creation of the cbox files. 30% faster picking / 15% faster training compared to 1.3.1/1.3.0.
  * Fixed a bug in the merging of filaments that sometimes threw "IndexError: list index out of range". (Thanks to Alexander Belyy)
  * Fixed cryolo_evaluation: If the validation data is specified with -b instead of runfiles, all datasets with only one box file were ignored.
  * Changed the library requirement to PILLOW version 6.0.0
  * Fixed issue 16: %%--%%gpu_fraction only works for prediction, not for training.

**crYOLO 1.3.1:**
  * Fixed issue 15: -g was ignored when %%--%%gpu_fraction was used.
  
**crYOLO 1.3.0:**
  
====crYOLO Boxmanager====
**crYOLO Boxmanager Version 1.2.2:**
  * Makes sure that the correct version of MatplotLib is used.

**crYOLO Boxmanager Version 1.2.1:**
  * Press "h" to hide the boxes
  * Fix for loading different box sets with different colors when one of the box sets consists of cbox files.
 +
**crYOLO Boxmanager Version 1.2:**
  * Add interactive threshold selection using cbox files
  
==== General PhosaurusNet model ====
**Version 20190516:**
  * Added four more in-house datasets
  * Added SNRNP (Thanks to Clement Charenton)
**Version 20190315:**
  * Added KLH