  
<note>

You can find more technical details in our paper:
  
===== Tutorials =====
  
Depending on what you want to do, you can follow one of these self-contained tutorials:
  
  - I would like to train a model from scratch for picking my particles
  - I would like to pick my particles without training, using a general model
  - I would like to refine a general model for my particles.
  
The **first and second tutorials** cover the most common use cases and are well tested. The **third tutorial** is still experimental but might give you better results in less time and with less training data.
  
  
  
===== Picking particles - Using a model trained for your data =====
This tutorial explains how to train a model specific to your dataset.
  
If you followed the installation instructions, you now have to activate the cryolo virtual environment with
  
<code>
source activate cryolo
</code>
==== Data preparation ====
{{page>pipeline:window:cryolo:data_preparation}}
  
==== Start crYOLO ====
{{page>pipeline:window:cryolo:start_cryolo}}
  
==== Configuration ====
{{page>pipeline:window:cryolo:configuration}}
  
<html>
<div style="background-color: #cfc ; padding: 10px; border: 1px solid green;">
<b>You can now press the Start button to create your configuration file.</b>
</div>
</html>
  
  
{{page>pipeline:window:cryolo:configuration_cmdl_normal}}
  
==== Training ====
  
{{page>pipeline:window:cryolo:training}}
==== Picking ====
{{page>pipeline:window:cryolo:picking}}
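If you prefer the command line, training and picking follow the pattern below. This is only a sketch: it assumes the configuration file is called ''config.json'', that the trained model is written to ''model.h5'', and that your micrographs are in ''full_data'' -- adapt these names to your project.

<code>
# Train the network with 3 warm-up epochs on GPU 0
cryolo_train.py -c config.json -w 3 -g 0

# Pick all micrographs in full_data/ with the trained model and write box files to boxfiles/
cryolo_predict.py -c config.json -w model.h5 -i full_data/ -g 0 -o boxfiles/
</code>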
  
  
==== Visualize the results ====
{{page>pipeline:window:cryolo:visualize}}
  
==== Evaluate your results ====
{{page>pipeline:window:cryolo:evaluate_results}}
===== Picking particles - Without training using a general model =====
Here you can find how to apply the general models we trained for you. If you would like to train your own general model, please see our extra wiki page: [[:cryolo_train_general_model|How to train your own general model]].
  
Our general models can be found and downloaded here: [[howto:download_latest_cryolo|Download and Installation]].

If you followed the installation instructions, you now have to activate the cryolo virtual environment with

<code>
source activate cryolo
</code>
==== Start crYOLO ====
{{page>pipeline:window:cryolo:start_cryolo}}
  
==== Configuration ====
In the GUI choose the //config// action. Fill in your target box size and leave the //train_image_folder// and //train_annot_folder// fields empty.
  
{{ :pipeline:window:cryolo_filter_options.png?300|}}
  
[[:downloads:cryolo_1#general_phosaurusnet_models|There are three general models available]]. It is important that you choose the same filtering options in the //"Model/denoising options"// tab as we did when training the general models:
  
  * General model trained for low-pass filtered images: Select //filter// "LOWPASS" and a //low_pass_cutoff// of 0.1
  * General model trained for JANNI-denoised images: Select //filter// "JANNI" and the [[:janni_tutorial#download|JANNI general model]] for //janni_model//. Keep the defaults for //janni_overlap// and //janni_batches//
  * General model for negative stain images: Select //filter// "NONE"
  
<html>
<div style="background-color: #cfc ; padding: 10px; border: 1px solid green;">
<b>Press the Start button to write the configuration file to disk.</b>
</div>
</html>
  
  
<hidden **Create the configuration file using the command line**>
In the following I assume that your target box size is 220. Please adapt it if necessary.
  
For the general **[[:cryolo_nets#network_3_phosaurusnet|Phosaurus network]]** trained for **low-pass filtered cryo images** run:
<code>
cryoloo.py config config_cryolo.json 220 --filter LOWPASS --low_pass_cutoff 0.1
</code>
  
For the general model trained with **neural-network denoised cryo images** (with [[:janni_tutorial#download|JANNI's general model]]) run:
<code>
cryoloo.py config config_cryolo.json 220 --filter JANNI --janni_model /path/to/janni_general_model.h5
</code>
  
For the general model for **negative stain data** please run:
<code>
cryoloo.py config config_cryolo.json 220 --filter NONE
</code>
</hidden>
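Once the configuration file has been written, picking with a downloaded general model works exactly like picking with a self-trained model (see the Picking section below). As a rough sketch, assuming the downloaded model file is called ''gmodel_phosnet_X_Y.h5'' and your micrographs are in ''full_data'':

<code>
cryolo_predict.py -c config_cryolo.json -w gmodel_phosnet_X_Y.h5 -i full_data/ -g 0 -o boxfiles/
</code>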
  
==== Picking ====
{{page>pipeline:window:cryolo:picking}}
  
==== Visualize the results ====
{{page>pipeline:window:cryolo:visualize}}
===== Picking particles - Using the general model refined for your data =====
  
  
Since crYOLO 1.3 you can train a model for your data by //fine-tuning// the general model.
  
What does //fine-tuning// mean?
  
The general model was trained on many particles with a variety of shapes and therefore learned a very good set of generic features. The last layers, however, learn a rather abstract representation of the particles, and it might be that they do not perfectly fit the particle at hand. Fine-tuning only trains the last two convolutional layers and keeps the others fixed. This adjusts the more abstract representation to your specific problem.
  
Why should I //fine-tune// my model instead of training from scratch?
  - In theory, fine-tuning should reduce the risk of overfitting ((Overfitting means that the model works well on the training micrographs, but not on new, unseen micrographs. The model just memorized what it saw instead of learning generic features.)) and the amount of training data needed.
  - The training is much faster, as not all layers have to be trained.
  - The training needs less GPU memory ((We are testing crYOLO with its default configuration on graphics cards with >= 8 GB memory. Using the fine tune mode, it should also work with GPUs with 4 GB memory.)) and is therefore usable with NVIDIA cards with less memory.

However, the fine tune mode is still somewhat experimental and we will update this section if we see more advantages or disadvantages.
  
If you followed the installation instructions, you now have to activate the cryolo virtual environment with
  
<code>
source activate cryolo
</code>
  
==== Data preparation ====
{{page>pipeline:window:cryolo:data_preparation}}
  
==== Start crYOLO ====
  
-<code+{{page>pipeline:window:cryolo:start_cryolo}} 
-cryolo_train.py -c config.json -w 3 -g 0 +==== Configuration ==== 
-</code>+{{page>pipeline:window:cryolo:configuration}}
  
{{ :pipeline:window:cryolo_pretrained_weights.png?300|}}
Furthermore, you have to select the model you want to refine. Download the general model you want to refine and specify it in the //pretrained_weights// field in the //"Training options"// tab.
  
<html>
<div style="background-color: #cfc ; padding: 10px; border: 1px solid green;">
<b>You can now press the Start button to create the configuration file.</b>
</div>
</html>
  
-<code+<hidden **Create the configuration file using the command line:**
-cryolo_train.py -c config.json -w 3 -g 0 -e 15 + 
-</code>+I assume your box files for training are in the folder ''train_annotation'' and the corresponding images in ''train_image''I furthermore assume that your box size in your box files is 160 and the model you want to refine is ''gmodel_phosnet_20190516.h5''. To create the config config_cryolo.json simply run:
  
<code>
cryoloo.py config config_cryolo.json 160 --train_image_folder train_image --train_annot_folder train_annotation --pretrained_weights gmodel_phosnet_20190516.h5
</code>
  
To get a full description of all available options, type:
<code>
cryoloo.py config -h
</code>
  
If you want to specify separate validation folders you can use the %%--%%valid_image_folder and %%--%%valid_annot_folder options:
  
<code>
cryoloo.py config config_cryolo.json 160 --train_image_folder train_image --train_annot_folder train_annotation --pretrained_weights gmodel_phosnet_20190516.h5 --valid_image_folder valid_img --valid_annot_folder valid_annot
</code>
  
</hidden>
  
==== Training ====
  
Now you are ready to train the model. In case you have multiple GPUs, you should first select a free GPU. The following command will show the status of all GPUs:
  
<code>
nvidia-smi
</code>
  
For this tutorial, we assume that you have either a single GPU or want to use GPU 0.
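If several GPUs are free, you can also use more than one when running crYOLO from the command line by passing multiple IDs to the -g option. A sketch, using the fine-tuning command shown further below and assuming your configuration file is called ''config.json'':

<code>
# Fine-tune on GPUs 0 and 1 instead of a single GPU
cryolo_train.py -c config.json -w 0 -g 0 1 --fine_tune
</code>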
  
In the GUI choose the //train// action. In the //"Required arguments"// tab select the configuration file we created in the previous step and set the number of warm-up periods to zero.
{{ :pipeline:window:cryolo_refine.png?600 |}}
-For the general **[[:cryolo_nets#network_3_phosaurusnet|Phosaurus network]]** trained for **low-pass filtered cryo images** enter the following inside: +
-<hidden **config.json for low-pass filtered cryo-images**> +
-<code json config.json> +
-    { +
-    "model: { +
-        "architecture":         "PhosaurusNet", +
-        "input_size":           1024, +
-        "anchors":              [205,205], +
-        "max_box_per_image":    700, +
-        "num_patches":          1, +
-        "filter":               [0.1,"tmp_filtered"+
-      } +
-    } +
-</code> +
-</hidden> +
-<html><br></html> +
-For the general model trained with **neural-network denoised cryo images** (with JANNI's general model) enter the following inside: +
-<hidden **config.json for neural-network denoised cryo-images**> +
-<code json config.json> +
-    { +
-    "model"{ +
-        "architecture"        "PhosaurusNet", +
-        "input_size"          1024, +
-        "anchors"             [205,205], +
-        "max_box_per_image":    700, +
-        "num_patches":          1, +
-        "filter":               ["gmodel_janni_20190703.h5",24,3,"tmp_filtered_nn"+
-      } +
-    } +
-</code>+
  
In the //"Optional arguments"// tab please check the fine_tune box.
{{ :pipeline:window:cryolo_refine_02.png?300 |}}
<note important>
The number of layers to fine-tune (specified by layers_fine_tune in the //"Optional arguments"// tab) is still experimental. The default value of 2 worked for us, but you might need more layers.
</note>
  
  
<note tip>
  
**Training on CPU**

The fine tune mode is especially useful if you want to [[downloads:cryolo_1#run_it_on_the_cpu|train crYOLO on the CPU]]. On my local machine it reduced the time for training crYOLO on 14 micrographs from 12-15 hours to 4-5 hours.
</note>
  
<hidden **Run training with the command line**>
In comparison to training from scratch, you can skip the warm-up training (-w 0). Moreover, you have to add the //%%--%%fine_tune// flag to tell crYOLO that it should do fine-tuning. You can also tell crYOLO how many layers it should fine-tune (the default is two layers, -lft 2):
  
<code>
cryolo_train.py -c config.json -w 0 -g 0 --fine_tune -lft 2
</code>
</hidden>
  
==== Picking ====
{{page>pipeline:window:cryolo:picking}}
  
==== Visualize the results ====
{{page>pipeline:window:cryolo:visualize}}
  
==== Evaluate your results ====
{{page>pipeline:window:cryolo:evaluate_results}}
===== Picking filaments - Using a model trained for your data =====
Since version 1.1.0 crYOLO supports picking filaments.
  
{{:pipeline:window:filament_tracing_02.png?300|}}  {{:pipeline:window:filament_tracing_03.png?300|}}
If you followed the installation instructions, you now have to activate the cryolo virtual environment with

<code>
source activate cryolo
</code>
  
==== Data preparation ====
{{ :pipeline:window:settings_e2helixboxer.png?300|}}
  
The first step is to create the training data for your model. Right now, you have to use e2helixboxer.py for this:
<code>
e2helixboxer.py --gui my_images/*.mrc
</code>
After tracing your training data in e2helixboxer, export it using //File -> Save//. Make sure that you export particle coordinates, as this is the only format supported right now (see screenshot). In the following example, it is expected that you exported into a folder called "train_annotation".
  
For projects with roughly 20 filaments per image we successfully trained on 40 images (=> 800 filaments).
  
==== Start crYOLO ====
{{page>pipeline:window:cryolo:start_cryolo}}
  
==== Configuration ====
{{page>pipeline:window:cryolo:configuration}}
  
<html>
<div style="background-color: #cfc ; padding: 10px; border: 1px solid green;">
<b>You can now press the Start button to create your configuration file.</b>
</div>
</html>
  
  
{{page>pipeline:window:cryolo:configuration_cmdl_normal}}
==== Training ====
  
{{page>pipeline:window:cryolo:training}}
==== Picking ====
Select the //prediction// action and fill in all arguments in the "Required arguments" tab.
{{ :pipeline:window:cryolo:cryolo_prediction.png?600 |}}
  
Now select the "Filament options" tab and check "Activate filament mode", specify the filament width (e.g. 100) and define the box distance (e.g. 20 for 90% overlap when using a box size of 200):
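As a quick sanity check for the box distance: it is the box size times (1 - overlap), so for the example above with a 200 px box and 90% overlap the distance is 200 × (1 − 0.9) = 20 px.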
  
{{ :pipeline:window:cryolo_filament.png?700 |}}
  
Press the Start button to start the picking. The directory ''output_boxes'' will be created and all results are saved there. The format is the EMAN2 helix format with particle coordinates.
  
You can find a detailed description of [[:cryolo_filament_import_relion|how to import crYOLO filament coordinates into Relion here]].
  
<hidden **Run prediction in the command line**>
Let's assume you want to pick a filament with a width of 100 pixels (-fw 100). The box size is 200x200 and you want a 90% overlap (-bd 20). Moreover, you wish that each filament has at least 6 boxes (-mn 6). The micrographs are in the ''full_data'' directory. Then the picking command would be:
<code>
cryolo_predict.py -c cryolo_config.json -w cryolo_model.h5 -i full_data --filament -fw 100 -bd 20 -o boxes/ -g 0 -mn 6
</code>
</hidden>
  
  
==== Visualize the results ====
{{page>pipeline:window:cryolo:visualize}}
===== Evaluate your results =====
<note warning>
Unfortunately, this script **does not work for filament data**.
</note>
The evaluation tool allows you, based on your validation data, to get statistics about your training.
If you followed the tutorial, the validation data are selected randomly. Since crYOLO 1.1.0 a run file for each training is created and saved into the folder runfiles/ in your project directory. This run file records which files were selected for validation, and you can run your evaluation as follows:
<code>
cryolo_evaluation.py -c config.json -w model.h5 -r runfiles/run_YearMonthDay-HourMinuteSecond.json -g 0
</code>

The result looks like this:
{{:pipeline:window:eval_example.png?900 |}}

The table contains several statistics:
  * AUC: Area under the curve of the precision-recall curve. Overall summary statistic. Perfect classifier = 1, worst classifier = 0.
  * Topt: Optimal confidence threshold with respect to the F1 score. It might not be ideal for your picking, as the F1 score weighs recall and precision equally. However, in SPA, recall is often more important than precision.
  * R (Topt): Recall using the optimal confidence threshold.
  * R (0.3): Recall using a confidence threshold of 0.3.
  * R (0.2): Recall using a confidence threshold of 0.2.
  * P (Topt): Precision using the optimal confidence threshold.
  * P (0.3): Precision using a confidence threshold of 0.3.
  * P (0.2): Precision using a confidence threshold of 0.2.
  * F1 (Topt): Harmonic mean of precision and recall using the optimal confidence threshold.
  * F1 (0.3): Harmonic mean of precision and recall using a confidence threshold of 0.3.
  * F1 (0.2): Harmonic mean of precision and recall using a confidence threshold of 0.2.
  * IOU (Topt): Intersection over union of the auto-picked particles and the corresponding ground-truth boxes. The higher, the better -- evaluated with the optimal confidence threshold.
  * IOU (0.3): Intersection over union of the auto-picked particles and the corresponding ground-truth boxes. The higher, the better -- evaluated with a confidence threshold of 0.3.
  * IOU (0.2): Intersection over union of the auto-picked particles and the corresponding ground-truth boxes. The higher, the better -- evaluated with a confidence threshold of 0.2.

If the training data consist of multiple folders, the evaluation is done for each folder separately.
Furthermore, crYOLO estimates the optimal picking threshold with respect to the F1 score and the F2 score. Both combine recall and precision, whereas the F2 score puts more weight on the recall, which is often more important in cryo-EM.
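As a reminder, F1 and F2 are the standard F-scores computed from precision P and recall R (general terminology, not specific to crYOLO):

<code>
F1 = 2 * P * R / (P + R)
F2 = 5 * P * R / (4 * P + R)
</code>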
  
  
===== Advanced parameters =====
During **training** (//cryolo_train//), there are the following advanced parameters:
  * //%%--%%warm_restarts//: With this option the learning rate decreases after each epoch and is then reset after a couple of epochs.
  * //%%--%%num_cpu NUMBER_OF_CPUS//: Number of CPU cores used during training.
  * //%%--%%gpu_fraction FRACTION//: Number between 0 and 1 quantifying the fraction of GPU memory that is reserved by crYOLO.
  * //%%--%%skip_augmentation//: Set this flag if crYOLO should skip the data augmentation (not recommended).

During **picking** (//cryolo_predict//), there are the following advanced parameters (a combined example is shown after the list):
  * //-t CONFIDENCE_THRESHOLD//: With the -t parameter, you can let crYOLO pick more conservatively (e.g. by adding -t 0.4 to the picking command) or less conservatively (e.g. by adding -t 0.2 to the picking command). The valid parameter range is 0 to 1.
  * //-d DISTANCE_IN_PIXEL//: With the -d parameter you can filter your picked particles. Boxes with a distance (in pixels) less than this value will be removed.
  * //-pbs PREDICTION_BATCH_SIZE//: With the -pbs parameter you can set the number of images picked as a batch. Default is 3.
  * //%%--%%otf//: Instead of saving the filtered images into a separate directory, crYOLO filters them on the fly and does not write them to disk.
  * //%%--%%num_cpu NUMBER_OF_CPUS//: Number of CPU cores used during prediction.
  * //%%--%%gpu_fraction FRACTION//: Number between 0 and 1 quantifying the fraction of GPU memory that is reserved by crYOLO.
  * //-sr SEARCH_RANGE_FACTOR//: (FILAMENT MODE) The search range for connecting boxes is the box size times this factor. Default is 1.41.
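For example, a less conservative picking run that also removes boxes closer than 20 pixels to each other and filters the micrographs on the fly could be launched like this (a sketch; file names and values are placeholders taken from the tutorials above):

<code>
cryolo_predict.py -c config.json -w model.h5 -i full_data/ -g 0 -o boxfiles/ -t 0.2 -d 20 --otf
</code>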
  
===== Help =====