Step 4
Problem: Out of memory error when running cryodrgn downsample
Possible reason: The particle stack is too large to fit into memory
Solution: Add the --chunk flag to the cryodrgn downsample command
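A chunked downsampling command might look like the following (the file names, box size, and chunk size are illustrative, not from the protocol):

```shell
# Downsample to a 128-px box; --chunk writes the output stack in
# pieces of 50,000 images so the full stack never has to fit in memory
cryodrgn downsample particles.mrcs -D 128 -o particles.128.mrcs --chunk 50000
```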
Step 5
Problem: Back projection is noisy, discontinuous, or does not resemble the consensus refinement
Possible reason: Incorrect pose and CTF metadata were supplied, or pose and CTF parameters were incorrectly mapped to particles. Noisy maps may also result from using a small number of particles in the back projection (default: 10,000), or from not applying the correct --uninvert-data convention, which determines whether the data are light-on-dark or dark-on-light
Solution: Verify that the correct pose and CTF parameters were supplied during parsing and that the particle stack originated from, and shares the same particle index/order as, the pose and CTF metadata. If the volume is very noisy, re-run cryodrgn backproject_voxel with a larger number of particles using the --first flag. Check whether the correct --uninvert-data convention is being used by running cryodrgn backproject_voxel with and without --uninvert-data
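A back projection with more particles might look like the following (the file names and particle count are illustrative):

```shell
# Re-run back projection using the first 50,000 particles
# instead of the default 10,000
cryodrgn backproject_voxel particles.128.mrcs \
    --poses poses.pkl --ctf ctf.pkl \
    --first 50000 -o backproject.mrc
```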
Steps 6, 18
Problem: Out of memory error shortly after starting cryodrgn train_vae
Possible reason: The particle stack is too large to preload into memory
Solution: Append --lazy to the cryodrgn train_vae command to allow on-the-fly image loading, further downsample the particles, or train on a subset of the particle stack
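For example, a training command with lazy loading enabled might look like this (file names, latent dimension, epoch count, and output directory are illustrative):

```shell
# --lazy reads images from disk per minibatch rather than
# preloading the entire stack into memory
cryodrgn train_vae particles.128.mrcs \
    --poses poses.pkl --ctf ctf.pkl \
    --zdim 8 -n 50 -o 00_vae128/ --lazy
```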
Steps 6, 18
Problem: CUDA out of memory error during cryodrgn train_vae
Possible reason: The batch size may be too large for your GPU's memory capacity
Solution: Manually decrease the batch size with the -b flag in the cryodrgn train_vae command
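A training command with a reduced batch size might look like the following (file names and other settings are illustrative):

```shell
# Halve the batch size from the default of 8 to lower peak GPU memory use
cryodrgn train_vae particles.128.mrcs \
    --poses poses.pkl --ctf ctf.pkl \
    --zdim 8 -n 50 -b 4 -o 00_vae128/
```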
Steps 6, 18
Problem: Assertion error during cryodrgn train_vae similar to assert (coords[…,0:3].abs() - 0.5 < 1e-4).all()
Possible reason: Infrequent numerical instability with --amp (mixed-precision training) may cause this assertion to fail
Solution: Restart cryodrgn train_vae without --amp
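Restarting simply means re-issuing the same training command with --amp omitted, for example (file names and settings are illustrative):

```shell
# Identical to the failing command, but without --amp,
# so training runs in full precision
cryodrgn train_vae particles.128.mrcs \
    --poses poses.pkl --ctf ctf.pkl \
    --zdim 8 -n 50 -o 00_vae128/
```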
Steps 7, 9, 19, 22–23
Problem: Volumes generated after training appear non-continuous or hollow in the center of the box
Possible reason: The correct --uninvert-data flag was not applied
Solution: Run cryodrgn backproject_voxel with and without the --uninvert-data flag to determine which convention applies, then re-run cryodrgn train_vae as necessary
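Comparing the two data sign conventions might look like the following pair of commands (file names are illustrative); the convention that yields an interpretable back projection is the one to use for training:

```shell
# Back-project with the default sign convention
cryodrgn backproject_voxel particles.128.mrcs \
    --poses poses.pkl --ctf ctf.pkl -o backproject_default.mrc
# Back-project with the data sign inverted
cryodrgn backproject_voxel particles.128.mrcs \
    --poses poses.pkl --ctf ctf.pkl --uninvert-data -o backproject_uninvert.mrc
```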
Steps 7, 9, 19, 22–23
Problem: Volumes generated after training all resemble junk
Possible reason: Volumes may be displayed at too permissive an isosurface threshold. Alternatively, data may have been parsed incorrectly during preprocessing
Solution: Increase the isosurface threshold for display. Run cryodrgn backproject_voxel to determine whether poses and CTF parameters were parsed correctly
Steps 7, 9, 19, 22–23
Problem: Volumes generated after training all appear homogeneous
Possible reason: For datasets other than EMPIAR-10076, this may be caused by too much upstream filtering prior to cryoDRGN training
Solution: Restart cryoDRGN training with an unfiltered dataset
All steps
Problem: Jupyter notebooks aren't behaving as described in the protocol
Possible reason: Cells may have been run out of order or may reference outdated variables
Solution: Restart the kernel and run the notebook again in order from top to bottom