The documentation has moved to https://cryolo.readthedocs.io
This tutorial explains how to train a model specific to your dataset.
If you followed the installation instructions, you now have to activate the cryolo virtual environment with
source activate cryolo
In the following, I will assume that your image data are in the folder full_data.
The next step is to create training data. To do so, we have to pick single particles manually in several micrographs. Ideally, the micrographs are picked to completion. However, it is not necessary to pick all particles. crYOLO will still converge if you miss some (or even many).
How many micrographs do you need to pick? It depends! Typically, 10 micrographs are a good start. However, that number may increase or decrease depending on several factors:
We recommend that you start with 10 micrographs, then autopick your data, check the results and finally decide whether to add more micrographs to your training set. If you refine a general model, even 5 micrographs might be enough.
Start the box manager with the following command:
Now press File → Open image folder and then select the
full_data directory. The first image should pop up. You can navigate through the images in the directory tree. Here is how to pick particles:
You might want to apply a low-pass filter before you start picking particles. Just press the [Apply] button to get a low-pass filtered version of the currently selected micrograph. The default absolute frequency cut-off is 0.1; allowed values range from 0 to 0.5, and lower values mean stronger filtering.
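To make the cut-off parameter concrete, here is a minimal sketch of a Fourier-space low-pass filter with an absolute frequency cut-off in units of 1/pixel. This illustrates the kind of filtering applied, not the box manager's exact implementation:

```python
import numpy as np

def lowpass(img, cutoff=0.1):
    """Low-pass filter an image in Fourier space.

    `cutoff` is the absolute spatial frequency (0 - 0.5, in units of
    1/pixel); lower values mean stronger filtering. Illustrative sketch,
    not crYOLO's actual filter code.
    """
    f = np.fft.fftshift(np.fft.fft2(img))
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    f[radius > cutoff] = 0  # zero out all frequencies above the cut-off
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))
```

Zeroing the high frequencies removes fine-grained noise, which is why lower cut-off values produce a smoother, more strongly filtered image.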
You can change the box size in the main window by editing the number in the text field labeled Box size:. Press [Set] to apply it to all picked particles. For picking, you should use the minimum-sized square that encloses your particle.
When you have finished picking your micrographs, you can export your box files with File → Write box files.
Create a new directory called
train_annotation and save the box files there. Close the box manager.
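For reference, the exported .box files follow the EMAN box format: one particle per line, tab-separated, with the x/y coordinates of the lower-left corner of the box followed by its width and height. A minimal sketch (file and coordinate values are made up for illustration):

```python
def write_box_file(path, particles, box_size):
    """Write particles in EMAN .box format.

    `particles` is a list of (x, y) lower-left corner coordinates in
    pixels; each line is: x <TAB> y <TAB> width <TAB> height.
    """
    with open(path, "w") as fh:
        for x, y in particles:
            fh.write(f"{x}\t{y}\t{box_size}\t{box_size}\n")

def read_box_file(path):
    """Read an EMAN .box file back into a list of (x, y, w, h) tuples."""
    boxes = []
    with open(path) as fh:
        for line in fh:
            if line.strip():
                x, y, w, h = map(int, line.split())
                boxes.append((x, y, w, h))
    return boxes
```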
Now create a third folder with the name
train_image. For each box file, copy the corresponding image from full_data into
train_image. crYOLO detects image/box-file pairs by taking each box file and searching for an image filename that contains the box filename.
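The pairing rule described above can be sketched as follows; the substring matching is an approximation of crYOLO's actual rule, and the filenames are made up:

```python
import os

def pair_boxes_with_images(box_files, image_files):
    """Match each box file to an image whose filename contains the
    box file's stem, approximating how crYOLO pairs the two folders."""
    pairs = {}
    for box in box_files:
        stem = os.path.splitext(os.path.basename(box))[0]
        matches = [img for img in image_files
                   if stem in os.path.basename(img)]
        if matches:
            pairs[box] = matches[0]
    return pairs
```

This is why the image and box filenames must share a common stem: a box file whose name appears in no image filename would simply be skipped.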
You can use crYOLO either from the command line or through the GUI. The GUI should be easier for most users. You can start it with:
The crYOLO GUI is essentially a visualization of the command line interface. On the left side, you find all available “Actions”:
Each action has several parameters, which are organized in tabs. Once you have chosen your settings, you can press [Start] (just as an example, don't press it now); the command will be applied and crYOLO shows you the output:
It will tell you if something went wrong. Moreover, it will tell you all parameters used. Pressing [Back] brings you back to your settings, where you can either edit the settings (in case something went wrong) or go to the next action.
You now have to create a configuration file for your picking project. It contains all important constants and paths and helps you to reproduce your results later on.
You can either use the command line to create the configuration file or the GUI. For most users, the GUI should be easier. Select the config action and fill in the general fields:
At this point you could already press the [Start] button to generate the config file but you might want to take these options into account:
Since crYOLO 1.4 you can also use neural-network denoising with JANNI. The easiest way is to use JANNI's general model (download here), but you can also train JANNI on your own data. crYOLO uses a direct interface to JANNI to filter your data; you just have to change the filter argument in the Denoising tab from LOWPASS to JANNI and specify the path to your JANNI model. I recommend using JANNI denoising only together with a GPU, as it is rather slow (~1-2 seconds per micrograph on the GPU and ~10 seconds per micrograph on the CPU).
You can also modify all options and parameters directly in the config.json file. It can be opened by any text editor. Please note the wiki entry about the crYOLO configuration file if you want to know more details.
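For orientation, a config.json produced by the config action looks roughly like the sketch below. The exact keys and default values depend on your crYOLO version, so treat this as an illustrative fragment rather than a template to copy; the folder names follow this tutorial:

```json
{
  "model": {
    "architecture": "PhosaurusNet",
    "input_size": 1024,
    "anchors": [220, 220],
    "norm": "STANDARD",
    "filter": [0.1, "filtered_tmp/"]
  },
  "train": {
    "train_image_folder": "train_image/",
    "train_annot_folder": "train_annotation/",
    "saved_weights_name": "cryolo_model.h5",
    "batch_size": 4,
    "learning_rate": 0.0001,
    "nb_epoch": 200
  }
}
```

The "filter" entry corresponds to the low-pass cut-off discussed earlier, and "saved_weights_name" is where the trained model will be written.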
Now you are ready to train the model. If you have multiple GPUs, you should first select a free one. The following command shows the status of all GPUs:
nvidia-smi
For this tutorial, we assume that you have either a single GPU or want to use GPU 0.
In the “Optional arguments” tab you can change the GPU that should be used by crYOLO. If you have multiple GPUs (e.g. nvidia-smi lists GPU 0 and GPU 1) you can also use both by setting the GPU argument to '0 1'.
The default number of warm-up epochs is fine as long as you don't want to refine an existing model. During the warm-up epochs, crYOLO does not try to estimate the size of your particles, which helps it to converge.
When you start the training, it will stop once the “loss” metric on the validation data has not improved 10 times in a row. This is typically enough. If you want to give the training more time to find the best model, you can increase this “not changed in a row” patience by setting the early argument in the “Optional arguments” tab to a higher value, for example 15.
The final model will be written to disk as specified in saved_weights_name in your configuration file.
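The early-stopping rule described above ("stop when the validation loss has not improved N times in a row", with N set by the early argument) can be sketched as follows. This is a hypothetical helper for illustration, not crYOLO's actual training code:

```python
def stop_epoch(val_losses, patience=10):
    """Return the 0-based epoch at which training would stop, i.e. when
    the best validation loss has not improved `patience` epochs in a
    row, or None if training runs through all epochs."""
    best = float("inf")
    since_improvement = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            since_improvement = 0
        else:
            since_improvement += 1
            if since_improvement >= patience:
                return epoch
    return None
```

Raising the patience gives the optimizer more chances to escape a plateau at the cost of longer training time.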
In crYOLO, every particle has an assigned confidence value. By default, all particles with a confidence value below 0.3 are discarded. If you want to pick less or more conservatively, you can change this confidence threshold to a lower (e.g. 0.2) or higher (e.g. 0.4) value in the “Optional arguments” tab.
However, it is much easier to select the best threshold after picking, using the
CBOX files written by crYOLO as described in the next section.
When this option is activated, crYOLO monitors your input folder, which is especially useful for automation. Just add --monitor on the command line or tick the monitor box in the “Optional arguments” tab. You can stop the monitor mode by writing an empty file with the name “stop.cryolo” into the input directory.
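From an automation script, the stop mechanism amounts to waiting for (or creating) the "stop.cryolo" file. A minimal sketch of the waiting side; crYOLO itself does the actual folder watching, this only illustrates the convention:

```python
import os
import time

def wait_for_stop(input_dir, poll_seconds=1.0):
    """Block until a file named 'stop.cryolo' appears in `input_dir`,
    then return its path. Mirrors the stop convention of crYOLO's
    monitor mode; illustrative only."""
    stop_file = os.path.join(input_dir, "stop.cryolo")
    while not os.path.exists(stop_file):
        time.sleep(poll_seconds)
    return stop_file
```

Conversely, `open(os.path.join(input_dir, "stop.cryolo"), "w").close()` from any script or shell ends the monitoring.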
After picking is done, you can find four folders in your specified output folder:
To visualize your results you can use the boxmanager:
As image_dir, select the
full_data directory; as box_dir, select the
CBOX folder (or
EMAN_HELIX_SEGMENTED in the case of filaments).
Besides the particle coordinates, CBOX files contain additional information such as the confidence and the estimated size of each particle. Importing .cbox files into the box manager enables extra filtering options in the GUI: you can plot size and confidence distributions, and you can change the confidence threshold as well as the minimum and maximum size and see the result in a live preview. When you are done filtering, you can write the new selection out to new box files. The video below shows an example.
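The same filtering can be scripted. Here is a minimal sketch that applies a confidence threshold and a size window to picked particles; the (x, y, size, confidence) tuples are made up for illustration, and real CBOX parsing depends on the file version:

```python
def filter_particles(particles, conf_min=0.3, size_min=None, size_max=None):
    """Keep particles whose confidence and estimated size pass the
    thresholds. `particles` is a list of (x, y, size, confidence)."""
    kept = []
    for x, y, size, conf in particles:
        if conf < conf_min:
            continue  # below the confidence threshold
        if size_min is not None and size < size_min:
            continue  # smaller than the allowed size window
        if size_max is not None and size > size_max:
            continue  # larger than the allowed size window
        kept.append((x, y, size, conf))
    return kept
```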
The evaluation tool computes statistics about the success of your training, based on your validation micrographs.
To understand the outcome, you have to know what precision and recall are. Here is a good figure from Wikipedia:
If your validation micrographs are not labeled to completion, the precision value will be misleading: crYOLO will pick the remaining 'unlabeled' particles, but the statistics count them as false positives (the software takes your labeled data as ground truth).
If you followed the tutorial, the validation data were selected randomly. A run file for each training session is created and saved in the folder runfiles/ in your project directory. These runfiles are .json files recording which micrographs were selected for validation. To calculate the evaluation metrics, select the evaluation action.
The html file you specified as output looks like this:
The table contains several statistics:
If the training data consist of multiple folders, the evaluation is done for each folder separately. Furthermore, crYOLO estimates the optimal picking threshold with respect to the F1 score and the F2 score. Both combine recall and precision, but the F2 score puts more weight on the recall, which is often more important in cryo-EM.
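Concretely, both scores are instances of the F-beta score, the weighted harmonic mean of precision and recall: F_beta = (1 + beta^2) * precision * recall / (beta^2 * precision + recall). A short sketch:

```python
def f_beta(precision, recall, beta=1.0):
    """F-beta score: weighted harmonic mean of precision and recall.
    beta=1 weights both equally (F1); beta=2 weights recall higher (F2),
    which is often preferable in cryo-EM."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

With precision 0.4 and recall 0.8, for example, F2 exceeds F1 because the F2 score rewards the high recall more strongly.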