This page shows the differences between two revisions of pipeline:window:cryolo:

  * Previous revision: 2019/07/29 16:02 by twagner [Configuration]
  * Current revision: 2019/09/17 13:40 by twagner [Evaluate your results]
You can find more technical details in our paper:

You can find the download and installation instructions here: [[howto:download_latest_cryolo|Download and Installation]].
===== Tutorials =====
Depending on what you want to do, you can follow one of these self-contained tutorials:

  - I would like to train a model from scratch for picking my particles.
  - I would like to train a model from scratch for picking filaments.
  - I would like to refine a general model for my data.

The **first and second tutorials** are the most common use cases and well tested. The **third tutorial** is still experimental but might give you better results.
crYOLO will automatically check if an image in ''full_data'' already has a filtered version in the filtered folder, and will only filter images that are missing there.

<hidden **Alternative: JANNI denoising**>
Since crYOLO 1.4 you can also use neural network denoising with [[:janni|JANNI]]. To use JANNI's general model for denoising, point the ''filter'' entry in the model section of your config file to the downloaded JANNI model. We recommend using denoising with JANNI only together with a GPU, as it is rather slow (~1-2 seconds per micrograph on the GPU and ~10 seconds per micrograph on the CPU).
</hidden>

===== Picking particles - Using a model trained for your data =====
This tutorial explains how to train a model specific to your dataset.
If you followed the installation instructions, you now have to activate the crYOLO virtual environment:

<code>
source activate cryolo
</code>
==== Data preparation ====
{{page>...}}

==== Start crYOLO ====
{{page>...}}
The next step is to create training data: pick particles manually in several micrographs, ideally to completion. How many micrographs have to be picked? It depends! Typically 10 micrographs are a good start, but that number may increase or decrease depending on several factors:

  * A very heterogeneous background could make it necessary to pick more micrographs.
  * If your micrographs are only sparsely decorated, you may need to pick more micrographs.

We recommend that you start with 10 micrographs and add more if the picking results are not satisfying.

==== Configuration ====
{{page>...}}

<html>
<div style="background-color: #cfc ; padding: 10px; border: 1px solid green;">
...
</div>
</html>
To create your training data, crYOLO is shipped with a tool called the "boxmanager". Start the box manager with the following command:

<code>
cryolo_boxmanager.py
</code>
Now press //File -> Open image folder// and select the ''full_data'' folder. You can pick particles with the mouse:

  * LEFT MOUSE BUTTON: Place a box
  * HOLD LEFT MOUSE BUTTON: Move a box
  * CONTROL + LEFT MOUSE BUTTON: Remove a box

You can change the box size in the main window by changing the number in the text field labeled //Box size://. Press //Set// to apply it to all picked particles. For picking, you should use the minimum-sized square which encloses your particle.

When you have finished picking your micrographs, create a new directory called ''train_image'' and copy the picked micrographs into it. Then create a folder with the name ''train_annot'' and place the corresponding box files there.

==== Training ====
{{page>...}}

==== Picking ====
{{page>...}}

==== Visualize the results ====
{{page>...}}

==== Evaluate your results ====
{{page>...}}
===== Picking particles - Without training using a general model =====
Here you can find how to apply the general models we trained for you. If you would like to train your own general model, please see our extra wiki page on training a general model.

Our general models can be found and downloaded here: [[howto:download_latest_cryolo|Download and Installation]].

If you followed the installation instructions, you now have to activate the crYOLO virtual environment:
<code>
source activate cryolo
</code>
==== Start crYOLO ====
{{page>...}}
==== Configuration ====
In the GUI choose the //config// action. Fill in your target box size and leave the other fields at their default values. Then select the filter that matches the general model you want to use:
{{ :pipeline:window:cryolo_filter_options.png|}}

  * General model trained for low-pass filtered images: Select //filter// "LOWPASS"
  * General model trained for JANNI-denoised images: Select //filter// "JANNI"
  * General model for negative stain images: Select //filter// "NONE"

<html>
<div style="background-color: #cfc ; padding: 10px; border: 1px solid green;">
...
</div>
</html>
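For reference, a config file created this way contains a ''model'' section roughly like the following sketch. The key names follow crYOLO's config format, but the values shown (architecture, input size, box size, filter settings) are illustrative assumptions and depend on your choices:

```json
{
    "model": {
        "architecture": "PhosaurusNet",
        "input_size": 1024,
        "anchors": [220, 220],
        "max_box_per_image": 700,
        "filter": [0.1, "filtered_tmp/"]
    }
}
```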
<hidden **Create the configuration file using the command line**>
In the following I assume that your target box size is 220. Please adapt if necessary.

For the general model trained on **low-pass filtered cryo images**:
<code>
cryoloo.py config config_cryolo.json 220 --filter LOWPASS --low_pass_cutoff 0.1
</code>

For the general model trained on **neural-network denoised cryo images** (with [[:janni|JANNI]]):
<code>
cryoloo.py config config_cryolo.json 220 --filter JANNI --janni_model <path to JANNI general model>
</code>

For the general model trained on **negative stain images**:
<code>
cryoloo.py config config_cryolo.json 220 --filter NONE
</code>
</hidden>
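For intuition, low-pass filtering to an absolute spatial frequency cutoff (e.g. 0.1, as above) can be sketched in Python with numpy. This only illustrates the idea; it is not crYOLO's actual implementation:

```python
import numpy as np

def lowpass_filter(img, cutoff=0.1):
    """Remove all Fourier components above an absolute frequency
    of `cutoff` (cycles per pixel). Illustrative sketch only."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    ny, nx = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny))
    fx = np.fft.fftshift(np.fft.fftfreq(nx))
    # radial absolute frequency of every Fourier coefficient
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    spectrum[radius > cutoff] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))

noisy = np.random.rand(64, 64)
filtered = lowpass_filter(noisy, cutoff=0.1)
print(filtered.shape)  # (64, 64)
```

Because high-frequency noise carries most of the variance in a raw micrograph, the filtered image looks much smoother while the low-frequency particle signal is preserved.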
==== Picking ====
{{page>...}}

==== Visualize the results ====
{{page>...}}
===== Picking particles - Using the general model refined for your data =====

Since crYOLO 1.3 you can train a model for your data by //fine-tuning// the general model.

What does //fine-tuning// mean? The general model was trained on a large variety of datasets. During fine-tuning, most of its layers stay fixed and only the last layers are re-trained on your data.

Why should I //fine-tune// the general model instead of training from scratch?
  - In theory, fine-tuning should reduce the risk of overfitting ((Overfitting means that the model works well on the training micrographs, but not on new, unseen micrographs.)).
  - The training is much faster, as not all layers have to be trained.
  - The training will need less GPU memory ((We are testing crYOLO with its default configuration on graphics cards with >= 8 GB memory. Using the fine tune mode, it should also work with GPUs with 4 GB memory.)) and is therefore usable with NVIDIA cards with less memory.

However, the fine tune mode is still somewhat experimental and we will update this section if we see more advantages or disadvantages.

If you followed the installation instructions, you now have to activate the crYOLO virtual environment:
<code>
source activate cryolo
</code>
==== Data preparation ====
{{page>...}}
==== Start crYOLO ====
{{page>...}}

==== Configuration ====
{{page>...}}
{{ :pipeline:window:...|}}
Furthermore, select the downloaded general model as pretrained weights.

<html>
<div style="background-color: #cfc ; padding: 10px; border: 1px solid green;">
...
</div>
</html>
<hidden **Create the configuration file using the command line:**>
I assume your box files for training are in the folder ''train_annot'', your training images are in ''train_image'', and your box size is 160. Please adapt if necessary:

<code>
cryoloo.py config config_cryolo.json 160 --train_image_folder train_image --train_annot_folder train_annot --pretrained_weights gmodel_phosnet_20190516.h5
</code>

To get a full description of all available options type:

<code>
cryoloo.py config -h
</code>

If you want to specify separate validation folders you can use the %%--%%valid_image_folder and %%--%%valid_annot_folder options:

<code>
cryoloo.py config config_cryolo.json 160 --train_image_folder train_image --train_annot_folder train_annot --pretrained_weights gmodel_phosnet_20190516.h5 --valid_image_folder valid_image --valid_annot_folder valid_annot
</code>
</hidden>
==== Training ====
Now you are ready to train the model. In case you have multiple GPUs, you should first select a free GPU. The following command will show the status of all GPUs:

<code>
nvidia-smi
</code>
For this tutorial, we assume that you have either a single GPU or want to use GPU 0.

In the GUI choose the action //train//, fill in the required fields and select the downloaded general model as pretrained weights.
{{ :pipeline:window:...|}}

Then activate the //fine_tune// option.
{{ :pipeline:window:...|}}

<note important>
The number of layers to fine tune (specified by //layers_fine_tune//) is still experimental. The default of two layers should work in most cases.
</note>
<note tip>
**Training on CPU**

The fine tune mode is especially useful if you want to train crYOLO on the CPU, as far fewer layers have to be trained.
</note>

<hidden **Run training with the command line**>
In comparison to training from scratch, you can skip the warm-up training (-w 0). Moreover, you have to add the //%%--%%fine_tune// flag to tell crYOLO that it should do fine tuning. You can also tell crYOLO how many layers it should fine tune (the default is two layers):

<code>
cryolo_train.py -c config.json -w 0 -g 0 --fine_tune -lft 2
</code>
</hidden>
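Conceptually, fine tuning freezes all layers except the last ones. A framework-agnostic sketch of this idea (the layer names are made up for illustration; crYOLO's real network is much deeper):

```python
def fine_tune_flags(layer_names, n_fine_tune=2):
    """Return which layers stay trainable when fine tuning:
    only the last `n_fine_tune` layers are re-trained."""
    n_frozen = len(layer_names) - n_fine_tune
    return {name: i >= n_frozen for i, name in enumerate(layer_names)}

# Hypothetical 5-layer network
layers = ["conv1", "conv2", "conv3", "conv4", "conv_out"]
print(fine_tune_flags(layers, n_fine_tune=2))
# {'conv1': False, 'conv2': False, 'conv3': False, 'conv4': True, 'conv_out': True}
```

Fewer trainable layers means fewer gradients to compute and store, which is why fine tuning is faster and needs less GPU memory.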
==== Picking ====
{{page>...}}

==== Visualize the results ====
{{page>...}}

==== Evaluate your results ====
{{page>...}}
===== Picking filaments - Using a model trained for your data =====
Since version 1.1.0 crYOLO supports picking filaments.
{{:pipeline:...}}

If you followed the installation instructions, you now have to activate the crYOLO virtual environment:

<code>
source activate cryolo
</code>
==== Data preparation ====
{{ :pipeline:...|}}
The first step is to create the training data for your model. Right now, you have to use e2helixboxer.py for this:
<code>
e2helixboxer.py --gui my_images/
</code>
After tracing your training data in e2helixboxer, export the coordinates. For projects with roughly 20 filaments per image we successfully trained on 40 images (=> 800 filaments).
==== Start crYOLO ====
{{page>...}}

==== Configuration ====
{{page>...}}

<html>
<div style="background-color: #cfc ; padding: 10px; border: 1px solid green;">
...
</div>
</html>

{{page>...}}
==== Training ====
{{page>...}}

==== Picking ====
Select the //predict// action and fill in the required fields.
{{ :pipeline:...|}}

Now select the "filament" mode in the optional arguments. The most important filament options are:

  * //%%--%%filament//: Option that tells crYOLO that you want to predict filaments
  * //-fw//: Filament width (pixels)
  * //-bd//: Inter-box distance (pixels)

{{ :pipeline:...|}}

Press the start button to start the picking. The directory ''boxes/'' will contain the picked filaments.
You can find a detailed description of all options in the command line options section below.
<hidden **Run prediction in command line**>
Let's assume you want to pick a filament with a width of 100 pixels (-fw 100). The box size is 200x200 and you want a 90% overlap (-bd 20). Moreover, you wish that each filament has at least 6 boxes (-mn 6). The micrographs are in the ''full_data'' directory. The command is then:

<code>
cryolo_predict.py -c cryolo_config.json -w cryolo_model.h5 -i full_data --filament -fw 100 -bd 20 -o boxes/ -g 0 -mn 6
</code>
</hidden>
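The -bd value in the example above follows directly from the desired overlap: a 90% overlap of a 200 px box leaves consecutive boxes 20 px apart. A quick sanity check (a sketch only; crYOLO itself simply takes -bd in pixels):

```python
def inter_box_distance(box_size, overlap_fraction):
    """Box spacing along the filament for a desired box overlap."""
    return int(round(box_size * (1 - overlap_fraction)))

print(inter_box_distance(200, 0.90))  # 20
```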
==== Visualize the results ====
{{page>...}}
===== Evaluate your results =====
<note warning>
Unfortunately, the evaluation tool does not yet work for filaments.
</note>

The evaluation tool allows you, based on your validation data, to get statistics about your training.
If you followed the tutorial, the validation data are selected randomly. Since crYOLO 1.1.0 a run file for each training is created and saved into the folder ''runfiles/'' in your project directory. This run file records which files were selected for validation, and you can run your evaluation as follows:

<code>
cryolo_evaluation.py -c config.json -w model.h5 -r runfiles/<run file>
</code>

The result looks like this:
{{:pipeline:...}}

The table contains several statistics:
  * AUC: Area under curve of the precision-recall curve. Overall summary statistic. Perfect classifier = 1, worst classifier = 0.
  * Topt: Optimal confidence threshold with respect to the F1 score. It might not be ideal for your picking, as the F1 score weighs recall and precision equally. However, in SPA, recall is often more important than precision.
  * R (Topt): Recall using the optimal confidence threshold.
  * R (0.3): Recall using a confidence threshold of 0.3.
  * R (0.2): Recall using a confidence threshold of 0.2.
  * P (Topt): Precision using the optimal confidence threshold.
  * P (0.3): Precision using a confidence threshold of 0.3.
  * P (0.2): Precision using a confidence threshold of 0.2.
  * F1 (Topt): Harmonic mean of precision and recall using the optimal confidence threshold.
  * F1 (0.3): Harmonic mean of precision and recall using a confidence threshold of 0.3.
  * F1 (0.2): Harmonic mean of precision and recall using a confidence threshold of 0.2.
  * IOU (Topt): Intersection over union of the auto-picked particles and the corresponding ground-truth boxes. The higher, the better -- evaluated with the optimal confidence threshold.
  * IOU (0.3): Intersection over union of the auto-picked particles and the corresponding ground-truth boxes. The higher, the better -- evaluated with a confidence threshold of 0.3.
  * IOU (0.2): Intersection over union of the auto-picked particles and the corresponding ground-truth boxes. The higher, the better -- evaluated with a confidence threshold of 0.2.

If the training data consist of multiple folders, the evaluation is done for each folder separately.
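The precision, recall, F1 and IOU columns are standard detection metrics. A minimal illustration of how they are computed (the counts and box coordinates below are made up):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall and their harmonic mean (F1) from
    true-positive, false-positive and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def iou(box_a, box_b):
    """Intersection over union of two (x, y, width, height) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union

p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=30)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.9 0.75 0.82
print(round(iou((0, 0, 100, 100), (50, 0, 100, 100)), 3))  # 0.333
```

A high recall at a low threshold with acceptable precision is usually what you want in SPA, since downstream classification can still remove false positives.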
  * //%%--%%fine_tune//: Fine tune the pretrained model instead of training from scratch.
  * //%%-%%lft NUM_LAYER_FINETUNE//: Number of layers to fine tune (default: 2).
During **picking** (//cryolo_predict.py//), the following options exist:
  * //-t CONFIDENCE_THRESHOLD//: Confidence threshold for picking; particles with a lower confidence are discarded (default: 0.3).
  * //-d DISTANCE_IN_PIXEL//: Minimum distance in pixels between two picked particles.
  * //-fw FILAMENT_WIDTH//: Filament width in pixels (filament mode).
  * //-bd BOX_DISTANCE//: Inter-box distance in pixels (filament mode).
  * //-mn MINIMUM_NUMBER_BOXES//: Minimum number of boxes per filament (filament mode).
  * //-sr SEARCH_RANGE_FACTOR//: Search range factor used when tracing filaments.
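For example, applying the //-t// confidence threshold is simply a cut on the per-box confidence values stored in the output files (the box coordinates and confidences below are made up):

```python
boxes = [
    {"x": 10, "y": 20, "confidence": 0.45},
    {"x": 30, "y": 60, "confidence": 0.25},
    {"x": 75, "y": 80, "confidence": 0.31},
]

threshold = 0.3  # crYOLO's default selection threshold
kept = [b for b in boxes if b["confidence"] >= threshold]
print(len(kept))  # 2
```

Lowering the threshold to 0.2 would keep all three boxes; raising it to 0.4 would keep only one. This is why picking once and re-thresholding afterwards is cheap.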
===== Help =====