SPHIRE Known Issues

VERSION : Beta_20161216

RELEASE DATE : 2016/12/16

GENERAL

Wiki page for Prepare Input Stack is missing.

COMMAND : sxunblur.py

VERSION : Beta_20161216

DATE : 2016/12/14

The Tutorial refers to the Wiki page for Prepare Input Stack, but it does not exist. The page should provide detailed instructions on how to prepare an input stack from a single particle dataset that was created by a different program.
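
Until that Wiki page is available, a minimal sketch of converting an externally created particle stack into the BDB format used by SPHIRE is the following (the file name particles_from_other_software.hdf is only a placeholder; CTF and alignment parameters would still have to be imported separately):

e2proc2d.py particles_from_other_software.hdf bdb:Particles#stack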

MOVIE

Micrograph movie alignment is missing MPI support.

COMMAND : sxunblur.py

VERSION : Beta_20161216

DATE : 2016/12/14

At this point, it is not possible to run the Micrograph movie alignment using multiple MPI processes.

Advanced usage of the Drift Assessment tool should be described on our Wiki page.

COMMAND : sxgui_unblur.py

VERSION : Beta_20161216

DATE : 2016/12/14

The user manual for the Drift Assessment tool is missing from the SPHIRE Wiki. It should describe the advanced usage, and the Tutorial should link to this page.

CTER

CTF Estimation requires the number of MPI processors to be lower than the total number of micrographs.

COMMAND : sxcter.py

VERSION : Beta_20161216

DATE : 2016/11/30

The program will abort execution if the number of MPI processors exceeds the total number of micrographs.
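
For example, one can first count the micrographs and then choose the number of MPI processes accordingly (the path Micrographs/*.mrc is only a placeholder for your micrograph location):

ls Micrographs/*.mrc | wc -l

If this prints, say, 112, start sxcter.py with at most 111 MPI processes, e.g. mpirun -np 96 sxcter.py followed by the arguments described in the Tutorial.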

The Wiki page for advanced usage of the CTF Assessment tool is missing.

COMMAND : sxgui_cter.py

VERSION : Beta_20161216

DATE : 2016/12/14

The user manual for the CTF Assessment tool is missing from the SPHIRE Wiki. It should describe the advanced usage, and the Tutorial should link to this page.

ISAC

ISAC crashes when the dataset is very large.

COMMAND : sxisac.py

VERSION : Beta_20161216

DATE : 2016/11/30

Because we are still optimising the parallelization of ISAC, the program will crash due to a memory allocation error when the input dataset is rather large. On our cluster, with 128 GB RAM and 24 cores per node, ISAC jobs crash with this particle box size when the dataset contains more than 60,000 particles.

For now, a workaround is to split the data into subsets, run ISAC for each subset separately as described here, and combine the results at the end. For example, to split a dataset of 200,000 particles into 4 subsets, type at the terminal:

e2proc2d.py bdb:Particles#stack_preclean bdb:Particles#stack_preclean_1 --first=0 --last=50000

e2proc2d.py bdb:Particles#stack_preclean bdb:Particles#stack_preclean_2 --first=50001 --last=100000

e2proc2d.py bdb:Particles#stack_preclean bdb:Particles#stack_preclean_3 --first=100001 --last=150000

e2proc2d.py bdb:Particles#stack_preclean bdb:Particles#stack_preclean_4 --first=150001 --last=200000

To combine the resulting “clean” stacks at the end into a single virtual stack, type (one line):

e2bdb.py bdb:Particles#stack_clean1 bdb:Particles#stack_clean2 bdb:Particles#stack_clean3 bdb:Particles#stack_clean4 --makevstack=bdb:Particles#stack_clean1
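
Before choosing the split boundaries, the total number of particles in the input stack can be checked, for example, with e2iminfo.py (assuming the stack name used above):

e2iminfo.py bdb:Particles#stack_preclean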

VIPER

RVIPER requires the number of MPI processors to be lower than the number of class averages.

COMMAND : sxrviper.py

VERSION : Beta_20161216

DATE : 2016/11/30

The program will crash if the number of MPI processors exceeds the number of class averages in your input file.
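
For example, the number of class averages in the input file can be checked with e2iminfo.py (the file name class_averages.hdf is only a placeholder):

e2iminfo.py class_averages.hdf

Then start sxrviper.py with fewer MPI processes than the reported number of images.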

Resize/Clip VIPER Model should be one step instead of two separate steps.

COMMAND : N/A

VERSION : Beta_20161216

DATE : 2016/12/14

Resize/Clip VIPER Model should be one step instead of two separate steps. The step should also allow the user to remove disconnected density, apply a low-pass filter, and generate a 3D mask.

MERIDIEN

MERIDIEN supports only 15° or 7.5° for the initial angular sampling step.

COMMAND : sxmeridien.py

VERSION : Beta_20161216

DATE : 2016/12/15

For the initial angular sampling step, the default value of 15° is usually appropriate to create enough projections for the initial global parameter search for almost every asymmetric structure (i.e. c1). However, if the structure has higher symmetry (e.g. c5), it is recommended to lower this parameter to 7.5°. Currently, we support only these two starting values. Choosing another value is likely to cause unexpected behaviour of the program.

*** The settings of the starting resolution and the initial angular sampling step of MERIDIEN are related to each other.

An inappropriate combination of memory per node and MPI settings will likely cause a crash or performance deterioration of 3D Refinement.

COMMAND : sxmeridien.py

VERSION : Beta_20161216

DATE : 2016/11/30

If the combination of memory per node and MPI settings is inappropriate for your cluster and the dataset size, the program will most likely crash (if the specified memory per node is too high) or be forced to use the small-memory mode (if the specified memory per node is too low), which results in performance deterioration.

Please check your cluster specifications. The program has to know how much memory is available on each node, as it uses “per node” MPI parallelisation in many places. Nodes are the basic units of a cluster, and each node has a number of CPUs (with the few exceptions of heterogeneous clusters, whose use should be avoided, the number of CPUs is the same on each node). While clusters are often characterized by the amount of memory per CPU, here we ask for the total amount of memory per node, as the program may internally adjust the number of CPUs it is using. For example, a cluster that has 3 GB memory per CPU and 16 CPUs per node has 3 GB × 16 = 48 GB memory per node. The default value used by the program is 2 GB per node, and the program will internally determine the number of CPUs to arrive at the estimate of total memory.
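
If you are unsure about the amount of memory per node, a quick way to check it on a Linux cluster is to run the following command on a compute node:

free -g

The total value of the Mem: row is (approximately) the memory per node in GB.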

In particular, the final reconstruction stage is very memory intensive. At this stage, the program will crash if sufficient memory is not available. In this case, please try to reduce the number of MPI processes per node while using at least 4 nodes, and do a continue run from the last iteration.
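
For example, with Open MPI the number of processes per node can be controlled with the --npernode option; a sketch only, where the sxmeridien.py arguments are placeholders that should match your original run:

mpirun -np 24 --npernode 6 sxmeridien.py bdb:mystack Refine3D ref_vol.hdf

This starts 24 processes in total, distributed as 6 processes per node over 4 nodes.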

In case the program does not finish even if you use one process per node, the only alternative would be to downscale your particles (and your reference volume) to a lower pixel size and re-run the program in a different output folder. This can be done with the following command:

e2proc2d.py bdb:mystack bdb:mybinnedstack --scale=(scaling factor) --clip=(new box size)
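
The reference volume can be downscaled in the same way with e2proc3d.py; a sketch assuming a scaling factor of 0.5 and a new box size of 256 (file names are placeholders):

e2proc3d.py ref_vol.hdf ref_vol_binned.hdf --scale=0.5 --clip=256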

On our cluster, with 128 GB/node, for reconstructions of datasets with a box size of 512, we had to reduce the number of processes per node from 24 to 6, but binning was not necessary.

The convolution effects of masking affect the resolution estimated by Sharpening.

COMMAND : sxprocess.py --postprocess

VERSION : Beta_20161216

DATE : 2016/11/30

With the present version of Sharpening, the resolution estimation might be affected by the convolution effects of over-masking because phase randomization of the two half-reconstructions is not performed.

Thus, please be cautious and avoid tight masks. Check your FSC carefully and create a larger mask in case you obtain strange B-factor values and/or observe strange peaks or rises at the high frequencies of your FSC. Such issues are nicely described in (Penczek 2010). In case you want to measure the local resolution of a specific area of your volume, instead of using local masks to calculate the respective FSC, use our local resolution and filtering approach. You should always visually inspect the resulting map and FSC carefully and confirm that the features of the density agree with the nominal resolution (e.g. a high-resolution map should show clearly discernible side chains).

SORT3D

Current MPI implementation of 3D Clustering RSORT3D is still under optimisation.

COMMAND : sxrsort3d.py

VERSION : Beta_20161216

DATE : 2016/11/30

In particular, the scalability of 3D Clustering RSORT3D is not yet optimised. Using a very large number of CPUs slows down processing and causes huge spikes in the network communication. In this case, please try using fewer MPI processes.

LOCALRES

Currently, there is neither an output directory nor standard output for Local Resolution and 3D Local Filter.

COMMAND : sxlocres.py and sxfilterlocal.py

VERSION : Beta_20161216

DATE : 2016/11/27

Currently, there is neither an output directory nor standard output for Local Resolution and 3D Local Filter.

Instead, the user should be able to specify the output directory path, and the command should automatically create the specified directory if it does not exist. If the directory already exists, the execution should be aborted. In addition, the command should give at least some feedback when the process is done.

UTILITIES

In Angular Distribution, setting the pixel size to 1.0 [A] fails with an error.

COMMAND : sxprocess.py --angular_distribution

VERSION : Beta_20161216

DATE : 2016/11/07

Setting the pixel size to 1.0 [A] fails with the following error:

sxprocess.py --angular_distribution 'pa06a_sxmeridien02/main001/params_001.txt' --box_size=480

Traceback (most recent call last):
  File "/work/software/sparx-snr10/EMAN2/bin/sxprocess.py", line 1242, in <module>
    main()
  File "/work/software/sparx-snr10/EMAN2/bin/sxprocess.py", line 1238, in main
    angular_distribution(inputfile=strInput, options=options, output=strOutput)
  File "/work/software/sparx-snr10/EMAN2/lib/utilities.py", line 7230, in angular_distribution
    0.01 + vector[2] * options.cylinder_length
ZeroDivisionError: float division by zero

Currently, no standard output is available from Angular Distribution.

COMMAND : sxprocess.py --angular_distribution

VERSION : Beta_20161216

DATE : 2016/11/07

At least, the program should give some feedback when the process is done.