J. Struct. Biol. 2018 Dec;204(3):543–554. doi: 10.1016/j.jsb.2018.09.008

Fast multiscale reconstruction for Cryo-EM

Laurène Donati a, Masih Nilchian a, Carlos Oscar S. Sorzano b, Michael Unser a
PMCID: PMC7343242  PMID: 30261282

Abstract

We present a multiscale reconstruction framework for single-particle analysis (SPA). The representation of three-dimensional (3D) objects with scaled basis functions permits the reconstruction of volumes at any desired scale in real space. This multiscale approach generates interesting opportunities in SPA for the stabilization of the initial volume problem or the 3D iterative refinement procedure. In particular, we show that reconstructions performed at coarse scale are more robust to angular errors and permit gains in computational speed. A key component of the proposed iterative scheme is its fast implementation. The costly step of reconstruction, which was previously hindering the use of advanced iterative methods in SPA, is formulated as a discrete convolution with a cost that does not depend on the number of projection directions. The inclusion of the contrast transfer function inside the imaging matrix is also done at no extra computational cost. By permitting full 3D regularization, the framework is by itself a robust alternative to direct methods for performing reconstruction in adverse imaging conditions (e.g., heavy noise, large angular misassignments, low number of projections). We present reconstructions obtained at different scales from a dataset of the 2015/2016 EMDataBank Map Challenge. The algorithm has been implemented in the Scipion package.

Keywords: Single-particle analysis, Multiscale representation, Fast iterative reconstruction, Full 3D regularization

1. Introduction

Single-particle electron microscopy aims at characterizing the three-dimensional (3D) structure of proteins at the atomic level (Dubochet et al., 1988, Frank, 2006, Orlova and Saibil, 2011, Milne et al., 2013, Cheng et al., 2015, Fernandez-Leiro and Scheres, 2016). It takes advantage of cryo-electron microscopy (cryo-EM) to image, with nearly parallel electron rays and at cryogenic temperatures, the projection of numerous replicates of a macromolecule, each with its unknown orientation and position. After data acquisition, one produces a high-resolution 3D reconstruction by processing the large set of projection measurements with advanced algorithms, available from a variety of single-particle analysis (SPA) packages (Frank et al., 1981, Sorzano et al., 2004, Tang et al., 2007, Grigorieff, 2007, Hohn et al., 2007, Scheres, 2012, de la Rosa-Trevín et al., 2016, Punjani et al., 2017).

The reconstruction necessitates the estimation of the unknown pose of the particle replicates, which is challenging because the acquired measurements are typically extremely noisy and blurred by microscopy-related effects. Most packages perform the reconstruction task through a so-called 3D iterative-refinement procedure during which information is gradually added to a rough initial volume. In particular, the projection matching approaches refine an initial volume by alternating between reconstruction and estimation of the pose parameters (Penczek et al., 1994). The first rough estimate of the 3D structure is computed from high-SNR class averages—a complicated task in itself due to the potential conformational heterogeneity of the sample. Then, from this first reference volume, a collection of equally distributed projections is produced (reference projections) and used to estimate the projection direction of clusters of projection measurements by appropriate angular-assignment methods (Carazo et al., 2015). The process is then repeated with an increasing number of distinct projection classes until the optimization fulfills some convergence criterion.

Although multiple improvements have made 3D iterative-refinement procedures more reliable over the years (Scheres and Chen, 2012, Henderson, 2013), the overall algorithmic process remains nontrivial. The presence of heavy noise on nearly identical projections, their low contrast, the conformational heterogeneity of molecular complexes, the unknown projection directions, the finite number of measurements, and the incomplete knowledge of the imaging process all cause the determination of the 3D structure to be a highly ill-posed inverse problem that may also suffer from overfitting. Moreover, the convergence of the global process depends heavily on the quality of the initial reconstruction (Sorzano et al., 2006, Henderson et al., 2012).

1.1. Reconstruction algorithms

In most instances, the reconstruction at every iteration of the refinement pipeline is carried out independently of the angular assignment. The reconstruction is usually performed with direct Fourier-reconstruction (DFR) methods based on the central-slice theorem (Penczek et al., 2004, Abrishami et al., 2015). Particularly popular are Fourier regridding methods, which use interpolation kernels in the Fourier domain to bring irregularly distributed samples onto a regular grid (Carazo et al., 2015, Abrishami et al., 2015). DFR methods have provided satisfactory results in a number of applications and their speed is a key advantage. Yet, their performance can be limited in certain adverse imaging situations.

A more sophisticated and more robust solution to the reconstruction task is to formulate it as a regularized inverse problem that is solved iteratively (Sorzano et al., 2017, Gordon et al., 1970, Marabini et al., 1998, Sigworth et al., 2010, Li et al., 2011). Some approaches also take into account the blurring of each projection by the contrast transfer function (CTF) of the microscope (Zhu et al., 1997, Penczek et al., 1997). Those iterative methods permit high-quality reconstruction but require very large computational resources, which strongly limits their applicability to SPA.

Most SPA reconstruction algorithms, whether direct or iterative, offer the possibility to adjust the resolution of the reconstructed volumes. However, to the best of our knowledge, no existing method permits reconstruction of volumes at different scales. This hinders the use of multiscale approaches that very effectively solve similar ill-posed inverse problems (Adelson et al., 1984). For the sake of clarity, we detail the benefits of multiscale approaches in a dedicated section.

1.2. Contributions

In this work, we propose a multiscale reconstruction framework for SPA. We represent 3D objects with scaled basis functions to reconstruct volumes at any desired scale in real space. The controlled dilation of the basis functions gives us the possibility to adjust the scale of the representation to the quality of the measurements.

The reconstruction task itself is formulated as a regularized optimization that is solved iteratively. To make the use of such an iterative method finally feasible in SPA, we introduce a fast formulation for the costly step of the reconstruction. The cost of this operation does not depend on the number of projection directions, which results in a substantial speedup. Moreover, the inclusion of the CTF inside the reconstruction is done at no extra computational cost.

This multiscale reconstruction tool generates interesting opportunities for the stabilization of the initial volume problem or the 3D iterative refinement procedure. In particular, we show that reconstructions performed at coarse scale are more robust to angular errors and lead to gains in computational speed. We present reconstructions obtained at different scales from a dataset of the 2015/2016 EMDataBank Map Challenge.

The paper is structured as follows: The principles behind our multiscale framework and its relevance in the context of SPA are explained in Section 2. We detail the iterative reconstruction scheme implementing our fast multiscale framework in Section 3. The results are presented in Section 4 and discussed in Section 5.

1.3. Notations

Depending on the context, we write a continuous function $f$, $f(\cdot)$, or $f(\mathbf{x})$, where $\mathbf{x}=(x_1,\ldots,x_d)\in\mathbb{R}^d$. We shall either consider $d=3$ (objects in the spatial domain) or $d=2$ (objects in the projection domain). The placeholder $(\cdot)$ allows us to define mappings in a more compact way, e.g., $f(\cdot/s):\mathbf{x}\mapsto f(\mathbf{x}/s)$. Sequences are denoted by $c$ or $c[\mathbf{k}]$ with $\mathbf{k}=(k_1,\ldots,k_d)\in\mathbb{Z}^d$. Vectors are denoted by bold lowercase letters (e.g., $\mathbf{c}$) and matrices by bold uppercase letters (e.g., $\mathbf{H}$).

The $\ell_p$-norm of a vector $\mathbf{c}=(c_1,\ldots,c_N)\in\mathbb{R}^N$ is defined as $\|\mathbf{c}\|_p=\big(\sum_{n=1}^{N}|c_n|^p\big)^{1/p}$. In this work, we shall only consider $p=1$ and $p=2$. The spaces of finite-energy sequences and functions are denoted by $\ell_2(\mathbb{Z}^d)$ and $L_2(\mathbb{R}^d)$, respectively. The continuous convolution is written as $(f*g)(\mathbf{x})=\int_{\mathbb{R}^d}f(\boldsymbol{\tau})\,g(\mathbf{x}-\boldsymbol{\tau})\,\mathrm{d}\boldsymbol{\tau}$. We make the distinction with the discrete convolution, denoted by $(c*d)[\mathbf{k}]=\sum_{\mathbf{l}\in\mathbb{Z}^d}c[\mathbf{l}]\,d[\mathbf{k}-\mathbf{l}]$. The Fourier transform of $f$ is $\hat{f}$. We denote the reflection of a function as $f^{\vee}(\mathbf{x})=f(-\mathbf{x})$.

2. Multiscale framework

2.1. Multiscale for solving ill-posed problems

The idea behind multiscale processing is to process signals over a certain range of scales when executing multi-step procedures. An advantage is that operations performed at coarse scale are usually more robust and permit gains in computational speed (Unser and Aldroubi, 1996). They are useful when (1) only incomplete and degraded information is available as input, and (2) a low-resolution output is acceptable for further processing. A benefit is that the robustness of the coarse initial process can positively impact the convergence of all subsequent steps in the procedure.

This observation is the motivation behind the so-called pyramid approaches (Adelson et al., 1984) that solve ill-posed optimizations iteratively using multiscale representations of volumes. Several works have successfully used pyramid processing for handling strongly ill-posed optimization problems with abundant local minima in biomedical imaging (Unser and Aldroubi, 1996, Thévenaz et al., 1998, Dengler, 1989, Desco et al., 2001). This approach has also been favorably used in alternate minimization frameworks, for example in blind deconvolution works (Fergus et al., 2006, Ruiz et al., 2015).

Multiscale-based approaches have already been used to improve angular-assignment procedures (Saad et al., 2000, Sorzano et al., 2004). Sorzano et al. (2004) used a coarse-to-fine discrete wavelet transform to compute the correlation between the measurements and the reference projections. Indeed, a mismatch at coarse scale is sufficient to conclude that the projections come from different orientations. If, however, they do match at coarse scale, then the analysis is pursued at finer scale. This multiscale wavelet-space matching algorithm provided a gain both in speed and in robustness for the angular-assignment procedure.

Several conditions specific to SPA further advocate for the use of pyramid-like approaches for the reconstruction itself. For example, it has been shown that the alignment of cryo-EM data can be done accurately using only low-frequency data (Henderson et al., 2011) and that the determination of the pose parameters essentially depends on low-resolution information (Scheres and Chen, 2012). Thus, a coarse representation of volumes is more desirable at early stages of the global iterative reconstruction process. Indeed, its resolution proves sufficient for further processing while it remains robust to the incomplete information (i.e., few blurred class averages with unknown projection directions).

2.2. Proposed multiscale representation for SPA

We now describe how we represent a scaled 3D object within a generalized sampling framework (Unser, 2000). In this scheme, the discrete representation of a continuous object f(x),xR3, can be interpreted as the coefficients of some appropriately shifted basis functions that specify a particular reconstruction space. Simply put, the generalized sampling framework tells us how to properly characterize a continuous function with a sequence of discrete coefficients.

The important aspect in our case is to consider the scaled reconstruction space

$$V_s(\varphi)=\Big\{f_s(\mathbf{x})=\sum_{\mathbf{k}\in\mathbb{Z}^3}c_s[\mathbf{k}]\,\varphi_s(\mathbf{x}-s\mathbf{k})\ :\ c_s\in\ell_2(\mathbb{Z}^3)\Big\} \qquad (1)$$

specified by the scaled basis function

$$\varphi_s(\mathbf{x})=\varphi(\mathbf{x}/s)\in L_2(\mathbb{R}^3), \qquad (2)$$

where s>0 denotes the scaling parameter.

The coefficient sequence $c_s$ corresponds to the discrete $s$-scaled representation of the object $f_s$ in the space $V_s(\varphi)$. In practice, this sequence is restricted to a finite number of coefficients as the object $f_s$ is compactly supported. We write this vector of coefficients as $\mathbf{c}_s=(c_s[\mathbf{k}])_{\mathbf{k}\in\Omega_{3D}^{s}}$. Here, the set $\Omega_{3D}^{s}$ corresponds to the support of the coefficients required to represent the object $f_s$.

These coefficients $\mathbf{c}_s$ are the ones used in practice for the reconstruction procedure (see Section 3.1). Once the optimization is performed, the obtained coefficients are then re-expanded in the space $V_s(\varphi)$ through (1) to obtain the scaled representation of the reconstruction $f_s$. To the best of our knowledge, such a multiscale reconstruction scheme based on generalized sampling theory has not been proposed in SPA so far.
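To make the re-expansion step concrete, the snippet below is a minimal one-dimensional analogue of Eq. (1): a coefficient sequence is expanded with shifted, dilated basis functions on a fine grid. A triangular function stands in for the KBWF, and all names are illustrative rather than part of the authors' implementation.

```python
import numpy as np

def tri(x):
    """Triangular (hat) function, a simple stand-in for the KBWF basis."""
    return np.maximum(0.0, 1.0 - np.abs(x))

def expand_1d(c_s, s, x):
    """1D analogue of Eq. (1): f_s(x) = sum_k c_s[k] * phi((x - s*k) / s)."""
    k = np.arange(len(c_s))
    # Matrix of dilated, shifted basis functions evaluated on the fine grid x
    basis = tri((x[:, None] - s * k[None, :]) / s)
    return basis @ c_s

x = np.linspace(0.0, 32.0, 513)          # fine evaluation grid
rng = np.random.default_rng(0)
for s in (1, 4):
    n_coeff = int(32 / s) + 1            # coarser scale -> fewer coefficients
    c_s = rng.standard_normal(n_coeff)
    f_s = expand_1d(c_s, s, x)
    print(f"scale {s}: {n_coeff} coefficients re-expanded on {x.size} fine samples")
```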

A suitable choice for the basis function φ in (2) is the optimized Kaiser-Bessel window function (KBWF) (Lewitt, 1990, Nilchian et al., 2015) defined as

$$\varphi(\mathbf{x})=\begin{cases}\dfrac{\Big(1-\big(\tfrac{\|\mathbf{x}\|}{a}\big)^2\Big)^{m/2}\,I_m\!\Big(\alpha\sqrt{1-\big(\tfrac{\|\mathbf{x}\|}{a}\big)^2}\Big)}{I_m(\alpha)}, & 0\le\|\mathbf{x}\|\le a,\\[2ex] 0, & \text{otherwise}.\end{cases} \qquad (3)$$

The KBWF depends on three parameters: (1) the order $m$ of the modified Bessel function $I_m$, (2) the window taper $\alpha$, which determines the shape of the KBWF, and (3) the support radius $a$, which controls the smoothness of the KBWF. It was shown in Nilchian et al. (2015) that a KBWF represents functions very effectively when using specific parameter values (e.g., $m=2$, $\alpha=10.83$, $a=2$). Moreover, its isotropic property allows for a significant reduction in computational costs, as we shall later illustrate in Section 3.
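For illustration, the sketch below evaluates the radial profile of Eq. (3) with SciPy's modified Bessel function and applies the dilation of Eq. (2); it is a re-implementation for didactic purposes, not the authors' code, and the default parameter values simply follow the text.

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind, I_m

def kbwf(r, a=2.0, alpha=10.8, m=2):
    """Radial profile of the Kaiser-Bessel window function, Eq. (3)."""
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    inside = np.abs(r) <= a
    z = np.sqrt(1.0 - (r[inside] / a) ** 2)
    out[inside] = (z ** m) * iv(m, alpha * z) / iv(m, alpha)
    return out

def kbwf_scaled(r, s, **params):
    """Scaled basis function of Eq. (2): phi_s(r) = phi(r / s)."""
    return kbwf(np.asarray(r, dtype=float) / s, **params)

r = np.linspace(-8.0, 8.0, 9)
print(kbwf(r))            # support radius a = 2 at the finest scale
print(kbwf_scaled(r, 4))  # dilated profile with support radius 4 * a = 8
```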

The scaling parameter $s$ in (2) is the central element of our multiscale representation. It dilates the basis function $\varphi_s$ and thus controls the coarseness of the representation of $f$. A large value of $s$ enforces a coarse (“low-resolution”) volume, while a small value imposes a fine (“high-resolution”) representation. We illustrate this in the Fourier-shell correlation (FSC) curves of Fig. 1, where the reconstructions become coarser as the scaling parameter increases. For the simulation, numerous (2000) noiseless projections with equi-distributed directions were produced, so as to focus solely on the effect of the scaling parameter on the coarseness of the representations. Increasing the scale by factors of two gradually restricts the information to the low-frequency region, halving the frequency range at each step.

Fig. 1. Impact of the controlled dilation of the basis function by the scaling parameter $s>0$ on the reconstructed volume. The scale is increased from left ($s=1$) to right ($s=8$), with intermediate values $s=2$ and $s=4$. (a-d) Optimized isotropic KBWF $\varphi_s$ (with $a=2$, $\alpha=10.8$, $m=2$) are dilated by increasing $s$. (e-h) Central orthogonal slices of the (256×256×256) beta-galactosidase volume reconstructed with the proposed approach at different scales. Volumes are re-expanded on a fine grid through Eq. (1) to allow for a comparison. (i-l) Corresponding FSC curves.

Finally, the scale $s$ also influences the size of the set of coefficients $\mathbf{c}_s$. More precisely, the number of coefficients decreases as the scale increases. Therefore, the scaling also strongly impacts the speed of the reconstruction algorithm, as the procedure is then performed on significantly fewer samples.

3. Fast iterative reconstruction

We detail here the iterative reconstruction scheme implementing our fast multiscale framework.

3.1. Imaging model with multiscale representation

Let $\mathcal{P}_{\boldsymbol{\theta}_p}\{f\}(\mathbf{y})$ with $\mathbf{y}\in\mathbb{R}^2$ denote the X-ray transform of the atomic density $f$ for the $p$th 3D particle with orientation $\boldsymbol{\theta}_p$ (Euler angles) (Natterer, 2001). In cryo-EM, this entity is typically blurred by a point-spread function (PSF) $w_p$. We model the noiseless 2D cryo-EM measurements $\tilde{b}_p(\mathbf{y})$ of the $p$th particle as

$$\tilde{b}_p(\mathbf{y})=\big(\mathcal{P}_{\boldsymbol{\theta}_p}\{f\}*w_p\big)(\mathbf{y}). \qquad (4)$$

We use the generalized sampling framework (1) to discretize this forward model. We thus consider the $s$-scaled representation $f_s$ of the atomic density of interest $f$. Using the linearity and pseudo-shift-invariance properties of the X-ray transform (Natterer, 2001), we rewrite (4) as

$$\tilde{b}_p(\mathbf{y})=\big(\mathcal{P}_{\boldsymbol{\theta}_p}\{f_s\}*w_p\big)(\mathbf{y}) \qquad (5)$$
$$\phantom{\tilde{b}_p(\mathbf{y})}=\sum_{\mathbf{k}\in\Omega_{3D}^{s}}c_s[\mathbf{k}]\,\big(\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}*w_p\big)(\mathbf{y}-s\mathbf{M}_{\boldsymbol{\theta}_p}\mathbf{k}). \qquad (6)$$

There, the hyperplane projection matrix $\mathbf{M}_{\boldsymbol{\theta}_p}\in\mathbb{R}^{2\times 3}$ has rows that specify the normal basis of the hyperplane perpendicular to the direction $\boldsymbol{\theta}_p$ of integration.

The measurements $\tilde{b}_p(\mathbf{y})$ for the $p$th particle are assumed to be acquired at the sampled points $\mathbf{y}_{\mathbf{j}}=\mathbf{j}\Delta$ for $\mathbf{j}\in\Omega_{2D}$. Here, the set $\Omega_{2D}$ denotes the support of the projection of $f_s$. For the sake of clarity, we consider $\Delta=1$ and we denote by $\tilde{\mathbf{b}}_p$ the discrete noiseless measurement vector for the $p$th particle. This gives us the entries of the imaging matrix $\mathbf{H}_s^p$ as

$$[\mathbf{H}_s^p]_{\mathbf{j},\mathbf{k}}=\big(\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}*w_p\big)(\mathbf{j}-s\mathbf{M}_{\boldsymbol{\theta}_p}\mathbf{k}) \qquad (7)$$

with $\mathbf{j}\in\Omega_{2D}$ and $\mathbf{k}\in\Omega_{3D}^{s}$.

In practice, each measurement $\tilde{\mathbf{b}}_p$ is corrupted by substantial additive Gaussian noise $\mathbf{n}_p$ (Sorzano et al., 2004). With this degradation, we finally obtain the discrete formulation of the complete forward model with $P$ particles as

$$\mathbf{b}=\mathbf{H}_s\mathbf{c}_s+\mathbf{n}, \qquad (8)$$

where

$$\mathbf{b}=\begin{bmatrix}\mathbf{b}_1\\\mathbf{b}_2\\\vdots\\\mathbf{b}_P\end{bmatrix},\qquad \mathbf{H}_s=\begin{bmatrix}\mathbf{H}_s^1\\\mathbf{H}_s^2\\\vdots\\\mathbf{H}_s^P\end{bmatrix},\qquad\text{and}\qquad \mathbf{n}=\begin{bmatrix}\mathbf{n}_1\\\mathbf{n}_2\\\vdots\\\mathbf{n}_P\end{bmatrix}. \qquad (9)$$
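The block structure of (8)–(9) can be mimicked with toy dense matrices, as in the sketch below; the random blocks merely stand in for the true per-particle matrices $\mathbf{H}_s^p$ of Eq. (7).

```python
import numpy as np

rng = np.random.default_rng(1)
P, M2, K = 3, 16, 10          # particles, pixels per projection, coefficients

# Toy per-particle imaging blocks standing in for H_s^p of Eq. (7)
H_blocks = [rng.standard_normal((M2, K)) for _ in range(P)]
c_s = rng.standard_normal(K)                          # coefficient vector
noise = [0.1 * rng.standard_normal(M2) for _ in range(P)]

# Eq. (9): stack the per-particle blocks and noise vectors vertically
H_s = np.vstack(H_blocks)                             # shape (P * M2, K)
n = np.concatenate(noise)
b = H_s @ c_s + n                                     # Eq. (8)

# Same measurements, computed particle by particle
b_check = np.concatenate([Hp @ c_s + npp for Hp, npp in zip(H_blocks, noise)])
assert np.allclose(b, b_check)
```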

3.2. Reconstruction scheme

The task at hand is to reconstruct the scaled 3D volume $f_s$ that best explains the complete vector $\mathbf{b}$ of 2D measurements in (8). We assume here that we have existing estimates (however inaccurate they may be) of the pose $\boldsymbol{\theta}_p$ and the PSF $w_p$ for every particle $p\in\{1,\ldots,P\}$. In our discrete formulation, the reconstruction consists in finding the coefficients $\mathbf{c}_s$ through a regularized optimization scheme (Candès et al., 2006),

$$\mathbf{c}_s^{\ast}=\arg\min_{\mathbf{c}_s\in\mathcal{C}}\ \tfrac{1}{2}\|\mathbf{H}_s\mathbf{c}_s-\mathbf{b}\|_2^2+\lambda\Psi(\mathbf{c}_s), \qquad (10)$$

where $\Psi$ is the regularization term used to inject prior knowledge into the reconstruction process, while $\mathcal{C}$ is a given convex constraint (e.g., positivity constraint).

The regularized formulation given by (10) is a classical and successful way of solving ill-posed inverse problems. It promotes the solution $\mathbf{c}_s$ that minimizes a combination of (1) the data-fidelity term $\|\mathbf{H}_s\mathbf{c}_s-\mathbf{b}\|_2^2$, which forces the solution to be consistent with the imaging model $\mathbf{H}_s$ and the measurements $\mathbf{b}$, and (2) the regularization term $\Psi(\mathbf{c}_s)$, which requires the solution to fulfill certain priors. The parameter $\lambda>0$ controls the strength of this regularization.

We consider here that $\Psi(\mathbf{c}_s)=\|\mathbf{L}\mathbf{c}_s\|_1$, where $\mathbf{L}$ is some regularization matrix. We shall assume henceforth that $\mathbf{L}$ is the gradient operator. This regularization is known as total variation (TV). It is a popular edge-preserving prior that is well suited to many applications (Rudin et al., 1992).
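As a one-dimensional stand-in for the gradient operator $\mathbf{L}$, a finite-difference matrix makes the TV penalty $\|\mathbf{L}\mathbf{c}_s\|_1$ explicit; the sketch below is illustrative only (the actual framework uses the 3D gradient).

```python
import numpy as np

def finite_difference_matrix(n):
    """(n-1) x n forward-difference matrix, a 1D stand-in for the gradient L."""
    L = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    L[idx, idx] = -1.0
    L[idx, idx + 1] = 1.0
    return L

c_s = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0])   # piecewise-constant signal
L = finite_difference_matrix(c_s.size)
tv_penalty = np.abs(L @ c_s).sum()               # Psi(c_s) = ||L c_s||_1
print(tv_penalty)                                # 2.0: one jump up, one jump down
```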

To minimize (10), we deploy an optimization routine called the alternating-direction method of multipliers (ADMM) (Boyd et al., 2010, Ramani and Fessler, 2012). For the sake of clarity, the detailed mathematical formulation of the algorithm is given in Appendix A. A key advantage of ADMM is that it splits the original problem into simpler optimization subproblems, which accelerates the convergence of the algorithm.

In our case, the ADMM sub-solvers of (10) are given by

$$\mathbf{c}_s^{k+1}=\arg\min_{\mathbf{c}_s}\ \mathcal{L}(\mathbf{c}_s,\mathbf{u}^k,\boldsymbol{\alpha}^k) \qquad (11)$$
$$\mathbf{u}^{k+1}=\arg\min_{\mathbf{u}}\ \mathcal{L}(\mathbf{c}_s^{k+1},\mathbf{u},\boldsymbol{\alpha}^k) \qquad (12)$$
$$\boldsymbol{\alpha}^{k+1}=\boldsymbol{\alpha}^k+\mu\,(\mathbf{L}\mathbf{c}_s^{k+1}-\mathbf{u}^{k+1}), \qquad (13)$$

where $\mathbf{u}$ is an auxiliary variable, $\boldsymbol{\alpha}$ is the vector of Lagrange multipliers, and the function $\mathcal{L}$ corresponds to the augmented Lagrangian of (10), given by

$$\mathcal{L}(\mathbf{c}_s,\mathbf{u},\boldsymbol{\alpha})=\tfrac{1}{2}\|\mathbf{H}_s\mathbf{c}_s-\mathbf{b}\|_2^2+\lambda\|\mathbf{u}\|_1+\boldsymbol{\alpha}^T(\mathbf{L}\mathbf{c}_s-\mathbf{u})+\tfrac{\mu}{2}\|\mathbf{L}\mathbf{c}_s-\mathbf{u}\|_2^2. \qquad (14)$$

The solution of (10) is therefore obtained by alternating between the minimizations (11), (12), (13) for a given number of iterations.

Eq. (12) admits a fast explicit solution through a soft-thresholding operation (Combettes et al., 2011):

$$\mathbf{u}^{k+1}=\operatorname{sign}\!\big(\mathbf{L}\mathbf{c}_s^{k+1}-\boldsymbol{\alpha}^k\big)\cdot\max\!\big(0,\,\big|\mathbf{L}\mathbf{c}_s^{k+1}-\boldsymbol{\alpha}^k\big|-(\lambda/\mu)\big). \qquad (15)$$
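Eq. (15) is a componentwise soft-thresholding; a minimal sketch of the operation (illustrative only):

```python
import numpy as np

def soft_threshold(v, t):
    """Componentwise soft-thresholding: sign(v) * max(0, |v| - t), as in Eq. (15)."""
    return np.sign(v) * np.maximum(0.0, np.abs(v) - t)

# u-update of Eq. (15): threshold (L c_s^{k+1} - alpha^k) at level lambda / mu
v = np.array([-1.5, -0.2, 0.0, 0.4, 2.0])
print(soft_threshold(v, t=0.5))   # small entries vanish, large ones shrink by 0.5
```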

Likewise, (13) is a simple parameter update. On the other hand, to compute the coefficients $\mathbf{c}_s^{k+1}$ in (11), we have to minimize the augmented Lagrangian $\mathcal{L}(\mathbf{c}_s,\mathbf{u},\boldsymbol{\alpha})$, whose gradient with respect to $\mathbf{c}_s$ is

$$\nabla_{\mathbf{c}_s}\mathcal{L}(\mathbf{c}_s,\mathbf{u}^k,\boldsymbol{\alpha}^k)=\underbrace{\big(\mathbf{H}_s^T\mathbf{H}_s+\mu\mathbf{L}^T\mathbf{L}\big)}_{\mathbf{A}}\mathbf{c}_s-\underbrace{\Big(\mathbf{H}_s^T\mathbf{b}+\mu\mathbf{L}^T\big(\mathbf{u}^k-\tfrac{\boldsymbol{\alpha}^k}{\mu}\big)\Big)}_{\mathbf{d}}. \qquad (16)$$

We do this by solving

$$\nabla_{\mathbf{c}_s}\mathcal{L}\big(\mathbf{c}_s^{k+1},\mathbf{u}^k,\boldsymbol{\alpha}^k\big)=\mathbf{A}\mathbf{c}_s-\mathbf{d}=\mathbf{0}, \qquad (17)$$

iteratively using a conjugate-gradient (CG) method, since the matrix $\mathbf{A}$ is badly conditioned (Boyd and Vandenberghe, 2004).

From a computational point of view, this minimization is the real bottleneck of the global optimization scheme. Indeed, it imposes the matrix multiplication of $\mathbf{H}_s^T\mathbf{H}_s$ with the updated vector $\mathbf{c}_s$ at every ADMM iteration. If not carefully engineered, this operation comes at a heavy computational cost and puts the use of iterative algorithms out of practical reach for SPA.
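Note that the inner problem (17) never requires $\mathbf{A}$ explicitly: a conjugate-gradient solver only needs a routine that applies $\mathbf{A}=\mathbf{H}_s^T\mathbf{H}_s+\mu\mathbf{L}^T\mathbf{L}$ to a vector, which is precisely what the fast convolution of Section 3.3 provides. Below is a generic matrix-free CG sketch with toy dense stand-ins (not the authors' implementation).

```python
import numpy as np

def conjugate_gradient(apply_A, d, x0, n_iter=7, tol=1e-9):
    """Solve A x = d by CG, with A given only through the callable apply_A(x)."""
    x = x0.copy()
    r = d - apply_A(x)                  # residual
    p = r.copy()                        # search direction
    rs = float(r @ r)
    for _ in range(n_iter):
        Ap = apply_A(p)
        alpha = rs / float(p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = float(r @ r)
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy usage: A = H^T H + mu * L^T L, applied matrix-free
rng = np.random.default_rng(2)
H = rng.standard_normal((30, 10))
L = np.eye(10)                          # identity standing in for the gradient
mu = 1.0
apply_A = lambda v: H.T @ (H @ v) + mu * (L.T @ (L @ v))
d = rng.standard_normal(10)
x = conjugate_gradient(apply_A, d, np.zeros(10), n_iter=50)
print(np.linalg.norm(apply_A(x) - d))   # residual norm should be small
```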

3.3. Fast implementation of $\mathbf{H}_s^T\mathbf{H}_s\mathbf{c}_s$

In this section, we provide an alternate mathematical formulation of $\mathbf{H}_s^T\mathbf{H}_s\mathbf{c}_s$ that greatly reduces the computational cost of the operation. In our opinion, this contribution is by itself significant in the context of SPA, as it makes the use of advanced iterative algorithms conceivable in the field.

As a preliminary, let us define a condition on basis functions that is required for further calculations.

Definition 1 (Radial Nyquist criterion) —

The function $\varphi\in L_2(\mathbb{R}^3)$ satisfies the radial Nyquist criterion with respect to the grid $\mathbb{Z}^3$ if $\hat{\varphi}(\boldsymbol{\omega})=0$ for all $\|\boldsymbol{\omega}\|\ge\pi$.

A function therefore satisfies the radial Nyquist criterion (RNC) if its Fourier transform is zero outside of a ball of radius $\pi$. Note that, if $\varphi_{s_0}(\mathbf{x})=\varphi(\mathbf{x}/s_0)$ satisfies the RNC, then so does $\varphi_s(\mathbf{x})=\varphi(\mathbf{x}/s)$ for all $s\ge s_0$.

We then have the result of Theorem 1 on the fast computation of $\mathbf{H}_s^T\mathbf{H}_s\mathbf{c}_s$ for objects that lie in 3D space.

Theorem 1

Let $\varphi_s(\mathbf{x})=\varphi(\mathbf{x}/s)$, with $\mathbf{x}\in\mathbb{R}^3$ and $s>0$, be such that it satisfies the radial Nyquist criterion for all $s\ge s_0$. Moreover, let the imaging matrix $\mathbf{H}_s$ be as defined in (9) and let $P\in\mathbb{N}$ denote the number of particles. Then, for all $s\ge s_0$, the discrete product $\mathbf{H}_s^T\mathbf{H}_s\mathbf{c}_s$ can be computed as the discrete convolution

$$[\mathbf{H}_s^T\mathbf{H}_s\mathbf{c}_s]_{\mathbf{k}}=(c_s*r_s)[\mathbf{k}] \qquad (18)$$

for $\mathbf{k}\in\Omega_{3D}^{s}$, with kernel

$$r_s[\mathbf{k}]=|s|^6\sum_{p=1}^{P}\Big(\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi\}*\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi\}^{\vee}*q_{1/s}^{p}\Big)(\mathbf{M}_{\boldsymbol{\theta}_p}\mathbf{k}). \qquad (19)$$

Here, the function $q_{1/s}^{p}(\mathbf{y})=\big(w_p*(w_p)^{\vee}\big)(s\mathbf{y})$ with $\mathbf{y}\in\mathbb{R}^2$ corresponds to the scaled autocorrelation function of the PSF $w_p(\mathbf{y})$.

The proof of Theorem 1 is given in Appendix B.

The benefit is that the costly step $\mathbf{H}_s^T\mathbf{H}_s\mathbf{c}_s$ can now be quickly computed as a pointwise multiplication in the Fourier domain, with a cost that does not depend on the number of projection directions.¹ Moreover, if the basis function $\varphi_s$ is isotropic, then the costs are further reduced, as the autocorrelation of $\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi\}$ needs to be computed only once.

In practice, the discrete convolution $(c_s*r_s)$ in (18) only needs to be computed for $\mathbf{k}\in\Omega_{3D}^{s}$. We do this by convolving a padded $\mathbf{c}_s$ with a kernel of finite support. This finite kernel $r_s$ is obtained by first convolving the autocorrelation function of $\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi\}$ with the scaled autocorrelation function of the PSF $w_p$, and then interpolating its values at the sampling points $\mathbf{M}_{\boldsymbol{\theta}_p}\mathbf{k}$.
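A minimal 3D sketch of this step, with a generic symmetric, compactly supported array standing in for the kernel $r_s$ of Eq. (19) (illustrative only): the zero-padding and the FFTs are handled internally by the convolution routine.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(3)
N, W = 32, 9                           # coefficient grid size and kernel support

c_s = rng.standard_normal((N, N, N))   # coefficient array on Omega_3D^s
r_s = rng.standard_normal((W, W, W))   # finite-support stand-in for the kernel of Eq. (19)
r_s = 0.5 * (r_s + r_s[::-1, ::-1, ::-1])   # symmetrize, like an autocorrelation

# (c_s * r_s)[k] evaluated on the original support only (mode='same'),
# computed via FFTs on zero-padded arrays
HtHc = fftconvolve(c_s, r_s, mode='same')
print(HtHc.shape)                      # (32, 32, 32): same support as c_s
# One such convolution is needed per ADMM iteration, whatever the number of projections.
```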

Note that if no PSF is considered (i.e., $w_p$ is the Dirac distribution $\delta$ for all particles), the kernel reduces to

$$r_s[\mathbf{k}]=|s|^4\sum_{p=1}^{P}\Big(\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi\}*\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi\}^{\vee}\Big)(\mathbf{M}_{\boldsymbol{\theta}_p}\mathbf{k}). \qquad (20)$$

The case where $w_p=\delta$ and $s=1$ (i.e., unscaled reconstruction) was used in X-ray tomography with a fixed rotation axis by McCann et al. (2016).

3.4. Fast implementation of $\mathbf{H}_s^T\mathbf{b}$

Eq. (17) also requires the computation of the discrete product $\mathbf{H}_s^T\mathbf{b}$. Even though it only needs to be computed once during the whole reconstruction process, it can also be costly in its own right. We present here its fast formulation.

Theorem 2

Let $\varphi_s(\mathbf{x})=\varphi(\mathbf{x}/s)$ with $\mathbf{x}\in\mathbb{R}^3$ and $s>0$ be such that it satisfies the radial Nyquist criterion for all $s\ge s_0$. Moreover, let the imaging matrix $\mathbf{H}_s^p$ be as defined in (7), the measurement vector $\mathbf{b}$ be as defined in (8), and let $P\in\mathbb{N}$ denote the number of particles. Then, for all $s\ge s_0$, the matrix–vector product $\mathbf{H}_s^T\mathbf{b}$ can be computed as

$$[\mathbf{H}_s^T\mathbf{b}]_{\mathbf{k}}=\sum_{p=1}^{P}\Big(\mathbf{b}_p*\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}*(w_p)^{\vee}\Big)(s\mathbf{M}_{\boldsymbol{\theta}_p}\mathbf{k}) \qquad (21)$$

for $\mathbf{k}\in\Omega_{3D}^{s}$.

The proof of Theorem 2 is given in Appendix C.

The product $\mathbf{H}_s^T\mathbf{b}$ can thus be obtained by computing the convolution in (21) on a fine grid, which significantly reduces the cost of the operation. Another benefit is that the computation of $\mathbf{H}_s^T\mathbf{b}$ can be easily parallelized over the set of particles.
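One hedged way to organize the computation of (21): for each particle, convolve the measured image with the projected basis function and the flipped PSF via FFTs, sample the result by interpolation at the points $s\mathbf{M}_{\boldsymbol{\theta}_p}\mathbf{k}$, and accumulate over particles. All ingredients below (PSF, projected basis, sampling points) are generic stand-ins, not the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import map_coordinates

rng = np.random.default_rng(4)
M, K, P = 64, 500, 5                   # image size, number of coefficients, particles

sample_pts = rng.uniform(5, M - 5, size=(2, K))   # stand-in for s * M_theta_p * k
Htb = np.zeros(K)

for p in range(P):
    b_p = rng.standard_normal((M, M))             # measured projection image
    psf = np.ones((5, 5)) / 25.0                  # toy PSF w_p
    proj_phi = np.outer(np.hanning(9), np.hanning(9))  # stand-in for P_theta{phi_s}

    # b_p * P_theta{phi_s} * (w_p)^v, computed on the fine 2D grid
    img = fftconvolve(b_p, proj_phi, mode='same')
    img = fftconvolve(img, psf[::-1, ::-1], mode='same')   # flipped PSF = (w_p)^v

    # Sample the result at the (generally non-integer) points s * M_theta_p * k
    Htb += map_coordinates(img, sample_pts, order=1)       # accumulate over particles
```

Because each term of the sum involves a single particle, the loop parallelizes trivially over p.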

3.5. Computational cost

We assume that the task is to reconstruct an image of size $N^3$ from a set of $P$ measurements of size $M^2$. We also consider the support of our (unscaled) basis function to be $W^3$ and the scale of the basis function along each dimension to be $s$. For simplicity, we do not consider the cost of CTF inclusion here.

The computation of $\mathbf{H}_s^T\mathbf{b}$ through Theorem 2 requires a discrete convolution, which we perform via 2D fast Fourier transforms (FFT), with a cost in the order of $\mathcal{O}(PM^2\log M)$. We store the result in a lookup table; the evaluation at the object points then comes at a total cost of $\mathcal{O}(PN^3/s^3)$. The computation of the $\mathbf{H}_s^T\mathbf{H}_s$ kernel through Theorem 1 requires the autocorrelation of the projection of a non-scaled basis function, which is precomputed and stored in a lookup table. Since we are using basis functions with isotropic properties, this lookup table is short. We then interpolate the kernel in the object domain, which implies a cost of $\mathcal{O}(PW^2N)$. Finally, the convolution of the $\mathbf{H}_s^T\mathbf{H}_s$ kernel with the current coefficient sequence only requires two FFTs. This comes at a cost of $\mathcal{O}(N^3\log(N)/s^3)$ at every ADMM iteration.²

Whether the cost of our approach is comparable to that of DFR methods depends on the specific experimental conditions, especially on the scale desired for the reconstruction. In particular, the coarser the representation of the image, the quicker the reconstruction process. For example, when the reconstruction is performed at scale $s=4$, the cost of computing $\mathbf{H}_s^T\mathbf{H}_s\mathbf{c}_s$ is reduced by a factor of 64 compared to a fine-scale reconstruction at $s=1$. This is a massive computational gain.

3.6. Algorithm

A pseudo-code of the proposed reconstruction algorithm is provided in the Algorithm 1 box. The algorithm has been implemented in MATLAB® and is usable in Scipion. The algorithm can handle mirror projection images and correct for shifts. Symmetry of macromolecules is also taken into account in our implementation.

As inputs, the algorithm requires the projection measurements and the scale desired for the reconstruction. Optionally, the CTF information of each particle can be given as an input. This inclusion of the CTF is relevant only for reconstructions at the finest scale ($s=1$). Indeed, reconstructions at coarser scales ($s>1$) do not require high-frequency information correction (see Fig. 1, bottom row). Another (more practical) reason for this optional inclusion is that some SPA packages pre-correct the CTF effect on the measurements prior to reconstruction.

Our ADMM-based algorithm depends on three parameters: the regularization parameter $\lambda$, the number of ADMM iterations $n_A$ (i.e., the outer iterations), and the number of CG iterations $n_C$ (i.e., the inner iterations).

During our experiments, we have observed that the choice of these parameters tended to be very robust for coarse scales ($s>1$). A wide interval of $\lambda$ thus yielded similar and satisfactory results; from those, we have selected a default value of $\lambda=100$. Note that this value may vary to some degree from one dataset to another. The number of inner and outer iterations also required little tuning. Throughout our experiments, we have used $n_A=30$ and $n_C=7$ for all scales.

Algorithm 1: Fast iterative reconstruction at scale s

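As a compact stand-in for the pseudo-code, the following toy loop mirrors the structure described in Sections 3.2–3.4: precompute $\mathbf{H}_s^T\mathbf{b}$ and the system matrix once, then alternate the $\mathbf{c}_s$-update, the soft-thresholding $\mathbf{u}$-update, and the multiplier update. Dense matrices replace the fast convolutions of Theorems 1 and 2, the positivity constraint is applied as a crude projection, and all names are illustrative rather than the authors' implementation.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(0.0, np.abs(v) - t)

def reconstruct(H, b, L, lam=0.1, n_admm=30):
    """Toy ADMM loop mirroring the structure of the proposed reconstruction."""
    mu = 10.0 * lam                              # penalty set proportional to lambda
    c = np.zeros(H.shape[1])
    u = np.zeros(L.shape[0])
    alpha = np.zeros(L.shape[0])
    A = H.T @ H + mu * (L.T @ L)                 # applied via Theorem 1 in the paper
    Htb = H.T @ b                                # computed once via Theorem 2
    for _ in range(n_admm):
        d = Htb + mu * (L.T @ (u - alpha / mu))
        c = np.linalg.solve(A, d)                # c-update, Eq. (17); the paper uses a few CG iterations
        c = np.maximum(c, 0.0)                   # crude projection onto the constraint C (positivity)
        u = soft_threshold(L @ c - alpha, lam / mu)   # u-update, Eq. (15)
        alpha = alpha + mu * (L @ c - u)              # multiplier update, Eq. (13)
    return c

# Toy usage with dense stand-ins for H_s and the 1D gradient L
rng = np.random.default_rng(5)
H = rng.standard_normal((200, 50))
L = np.diff(np.eye(50), axis=0)                  # forward finite differences
c_true = np.maximum(rng.standard_normal(50), 0.0)
b = H @ c_true + 0.05 * rng.standard_normal(200)
print(np.linalg.norm(reconstruct(H, b, L) - c_true))
```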

At the finest scale ($s=1$), an appropriate value of $\lambda$ can be found by running the thirty ADMM iterations with $\lambda$ spanning a range of powers of ten (e.g., five values from $\lambda=10^{-2}$ to $\lambda=10^{2}$). To do so, one does not need to recompute the kernel $r_s$, nor the product $\mathbf{H}_s^T\mathbf{b}$, which makes the cost of this search acceptable. Finally, the penalty parameter $\mu$ in (14) is set proportional to $\lambda$ and thus need not be tuned.

4. Experiments

We have evaluated the performance of our multiscale algorithm on both simulated and real datasets. In particular, we have explored how the scaling impacts the quality of the reconstruction and influences its robustness to angular misassignments.

The experiments performed with simulated data are described in Sections 4.1 and 4.2. We present the results obtained with real data from the 2015/16 EMDataBank Map Challenge in Section 4.3.

4.1. Simulation conditions

For our simulations, we used as ground-truth a (128×128×128) β-galactosidase volume with 2.2 Å resolution (Bartesaghi et al., 2014). We started by computing 20,000 randomly equi-distributed projections of the ground-truth (Deserno, 2004). We used the imaging matrix given by (7) with a KBWF as basis function ($a=2$, $\alpha=10.8$, $m=2$) to compute these projections. Multiple experimental sets of data were generated by adding (1) Gaussian noise on the projection images such that their SNR is 1 and (2) angular error on their projection angles.

For every dataset, we clustered the projections in N distinct classes. The class representatives were N uniformly equi-distributed projections of the ground-truth (Deserno, 2004). Each projection was assigned to the class with the closest projection angles. Then, for every cluster, the projections were aligned to the reference image (by rigid transformation, using imregister in MATLAB®) and averaged. From these class averages, we reconstructed the 3D structures at different scales with our algorithm. Small cluster sizes (from 10 to 100) were considered, which mimics the processing conditions during the early refinement stages.

For the reconstruction, we used a different KBWF as basis function ($a=4$, $\alpha=19$, $m=2$) to reduce the risk of inverse crime.³ We applied TV regularization and performed our optimization with thirty outer iterations and seven inner iterations. The regularization parameter was selected from five powers of 10 by picking the best output in terms of FSC.

We did not consider CTF correction in those simulations to permit direct comparison among reconstructions obtained at different scales (see also Section 3.6). Note that the beneficial impact of CTF inclusion for producing high-resolution reconstructions has been established in previous works (Zhu et al., 1997, Penczek et al., 1997).

The accuracy of each reconstruction was evaluated by computing the FSC metric with respect to the ground-truth, as is the standard in SPA. When necessary, we considered a common threshold value of 0.5.
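For reference, the FSC between two volumes is the normalized correlation of their Fourier transforms averaged over spherical shells; the sketch below is an illustrative re-implementation (not the package routine used in the experiments).

```python
import numpy as np

def fsc(vol1, vol2, n_shells=32):
    """Fourier-shell correlation between two cubic volumes of identical size."""
    F1, F2 = np.fft.fftn(vol1), np.fft.fftn(vol2)
    freq = np.fft.fftfreq(vol1.shape[0])
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing='ij')
    radius = np.sqrt(kx**2 + ky**2 + kz**2)
    edges = np.linspace(0.0, 0.5, n_shells + 1)       # radial bins in cycles/voxel
    curve = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        shell = (radius >= lo) & (radius < hi)
        num = np.real(np.sum(F1[shell] * np.conj(F2[shell])))
        den = np.sqrt(np.sum(np.abs(F1[shell])**2) * np.sum(np.abs(F2[shell])**2))
        curve.append(num / den if den > 0 else 0.0)
    return 0.5 * (edges[:-1] + edges[1:]), np.array(curve)

# Example: two noisy realizations of the same volume, FSC-0.5 resolution estimate
rng = np.random.default_rng(6)
vol = rng.standard_normal((64, 64, 64))
freqs, curve = fsc(vol + 0.5 * rng.standard_normal(vol.shape),
                   vol + 0.5 * rng.standard_normal(vol.shape))
below = np.flatnonzero(curve < 0.5)
print(freqs[below[0]] if below.size else freqs[-1])   # first shell where FSC drops below 0.5
```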

4.2. Robustness to angular misassignments

We have analyzed with simulated data how angular misassignments influence the quality of the reconstruction at scales $s=1,2,3,4$. The obtained results are presented in Fig. 2 and Fig. 3.

Fig. 2. Effect of the scale on the robustness to angular misassignment of β-galactosidase reconstruction. The reported resolution is the FSC resolution estimate at the threshold value of 0.5.

Fig. 3. FSC curves of β-galactosidase reconstructed at scale 3 from 20,000 projections that were clustered in a varying number of classes (n = 11, 31, 59, 99) before averaging. The variance of the error on the projection angles is 15.47 degrees.

In Fig. 2, an increasing amount of error was added to the projection angles prior to clustering in 80 equi-distributed classes and averaging. Reconstruction was then performed at scale s=1,2,3,4 and the FSC at threshold value 0.5 was returned. The results indicate that robustness to angular misassignments becomes stronger as the scale is coarsened. Indeed, although reconstructing at fine scale (s=1) performs effectively when the error level is low, its performance quickly degrades when the angular error increases. This behavior is much less pronounced at coarser scales (s=2,3,4), which show better stability against angular errors.

We have also explored how the choice of the scale was influencing the quality of the reconstruction when only a very limited amount of data (i.e., 11 projection classes) was available. The results are presented in Fig. 3. There, a fixed amount of error (with var=15.47 degrees) was added to the projection angles prior to clustering. The results show that, when performing reconstruction with very few data, all output volumes (s=1,2,3,4) have a roughly similar information content. This is not surprising: At all scales, there is only a limited resolution that can be achieved with such blurred and incomplete data.

Overall, those results suggest that, when the error on the estimated angles is significant and the number of projection classes is low, one would actually benefit from reconstructing volumes at a coarser scale. Then, the frequency content is preserved and the reconstruction is more robust to angular misassignments. Moreover, the gain in computational speed at such scales can be substantial (see Section 3.5).

4.3. Real data from the EMDataBank map challenge

We have used our multiscale algorithm to reconstruct a real target from the EMDataBank Map Challenge: the T20S Proteasome. A total of 22,884 projection images were extracted from 196 micrographs. The experimental details of the acquisition are given in Campbell et al. (2015). We classified these images into 32 classes using CL2D (Sorzano et al., 2010) and constructed an initial volume assuming a D7 symmetry with the algorithm described in Sorzano et al. (2015). We then applied five iterations of the reconstruct_highres protocol in Scipion, using the default reconstruction method. The parameters used for running those five iterations are described in a paper in this same issue (Sorzano et al., 2018).

We then performed a final iteration of reconstruct_highres in Scipion using our multiscale reconstruction method with scales $s=1,2,3,4$. For each scale, reconstruct_highres automatically separated the projection set into two halves and performed one iteration of angular assignment and reconstruction on each half.

The reconstruct_highres algorithm was run with its default parameters in Scipion, except for the Post-Processing options, which were all disabled. For the ADMM-based reconstructions, we used a unique set of parameters for all scales: $\lambda=100$, $n_A=30$, $n_C=7$. We applied TV regularization and imposed a positivity constraint during reconstruction. We did not apply any post-processing operation (e.g., denoising) after reconstruction. The results are displayed in Fig. 4, Fig. 5, Fig. 6, Fig. 7.

Fig. 4. Reconstructions at different scales ($s=1,2,3,4$) of the T20S Proteasome from the 2015/2016 EMDataBank Challenge.

Fig. 5. FSC curves for the T20S Proteasome reconstructions at different scales ($s=1,2,3,4$). For each scale, the curve was obtained by comparing reconstructions from two distinct halves of the projection set.

Fig. 6. Cross sections of T20S Proteasome reconstructions at different scales ($s=1,2,3,4$). Slice numbers from top row to bottom row: 75, 100, 125, 150, 175, 200. The yellow lines (top row) indicate the position where the intensity profiles displayed in Fig. 7 are measured.

Fig. 7. Profile lines taken on a cross-section of the T20S Proteasome reconstructions at different scales ($s=1,2,3,4$). The position of the measured pixels is indicated by the yellow lines in Fig. 6.

Fig. 4 presents the reconstructions obtained at the different scales. Several cross sections from these reconstructions are displayed in Fig. 6. These qualitative results illustrate how the scaling influences the coarseness of the reconstructions while preserving their key structural features.

FSC curves of the reconstructed volumes are presented in Fig. 5. For each scale, the curve was obtained by comparing the reconstructions from the two distinct halves of the total projection set. Those FSC results testify to the considerable robustness achieved by coarse reconstructions whose curves are almost constantly equal to one.

Finally, profile lines taken on a cross-section of the T20S Proteasome reconstructions at different scales are given in Fig. 7.

5. Discussion

Those results obtained on simulated and real data confirm the increase in robustness brought by reconstructing volumes at coarser scales. This is consistent with observations made by other multiscale approaches in various biomedical applications (Unser and Aldroubi, 1996).

For SPA, the benefits of a multiscale tool become clear when considering the initial volume problem. This problem refers to the computation of the first estimation of the 3D structure required for progressive refinement. The task is a highly challenging one, as the lack of angular information for performing reconstruction is then at its peak. There are thus abundant local minima that algorithms can be trapped into. Yet the importance of a robust first estimation cannot be overstated. Several works have indeed demonstrated that the nature of this initial structure can considerably affect the final reconstruction (Henderson et al., 2012, Sorzano et al., 2006).

In that sense, our multiscale reconstruction scheme might provide novel ways of stabilizing this highly ill-posed optimization task. A judicious approach could be to start the process with rather coarse reconstructions, which are more robust to errors on projection angles and yet contain all necessary information for angular estimation (Henderson et al., 2011, Scheres and Chen, 2012). Then, one could repeat the process by slowly increasing the scale as the angular assignment is refined. Work is currently underway to quantify the increase of robustness brought by multiscale reconstruction to this initial volume problem.

It is essential to realize that the use of the proposed iterative method in SPA was made entirely possible by the fast formulation of our algorithm. Without these novel mathematical contributions, the applicability of our framework in single-particle electron microscopy would have been rather limited, as is currently that of most iterative reconstruction techniques.

In addition to its multiscale aspect, a key feature of our framework is its ability to inject prior information into the optimization. Through its regularization term, our scheme can be a reliable alternative for handling those reconstructions for which direct methods might fail to yield satisfactory results.

Finally, the proposed scheme opens the door for several other developments in SPA, such as the inclusion of novel constraints, a different handling of specimens with M-fold symmetries and the use of promising learning-based approaches (Tosic and Frossard, 2011, Jin et al., 2017).

6. Conclusion

We have presented a novel multiscale reconstruction framework for single-particle electron microscopy. By appropriately representing three-dimensional (3D) objects with scaled basis functions, one can now reconstruct volumes at any desired scale in real space. To make the use of this iterative reconstruction scheme in single-particle analysis (SPA) feasible, we have introduced a fast formulation for the iterative refinement step. The costly step of the reconstruction, which was previously hindering the use of advanced iterative methods in SPA, can now be computed as a discrete convolution with a cost independent of the number of projection directions. This multiscale reconstruction tool was evaluated on both simulated and real data. In both cases, results have highlighted the increase in robustness brought by reconstructing volumes at coarser scales.

Acknowledgments

The work of Laurène Donati was supported by an ERC grant (ERC-692726-GlobalBioIm). The authors are grateful to Julien Fageot from the Biomedical Imaging Group for his constructive comments on mathematical notations.

Molecular graphics in Fig. 4 were performed with the UCSF Chimera package. Chimera is developed by the Resource for Biocomputing, Visualization, and Informatics at the University of California, San Francisco (supported by NIGMS P41-GM103311) (Pettersen et al., 2004).

Footnotes

This Special Issue, edited by Catherine Lawson and Wah Chiu, highlights the outcomes of the recent Map and Model Challenges organized by the EMDataBank Project.

1. Note that although the RNC given in Definition 1 is a necessary condition for the result obtained in Theorem 1, practical tests have shown that the use of the KBWF—a basis function that approximately verifies the RNC—had negligible impact on the quality of the computation of the $\mathbf{H}_s^T\mathbf{H}_s\mathbf{c}_s$ product as a discrete convolution.

2. By comparison, standard direct methods based on Fourier regridding require that one 2D FFT be computed per projection ($\mathcal{O}(PM^2\log M)$), followed by some interpolation procedure. Assuming that the support of the interpolating function is similar to that of our basis function, it imposes a cost of $\mathcal{O}(PW^3M^2)$. One then needs to apply one 3D inverse Fourier transform to get back to the space domain, which comes at a cost of $\mathcal{O}(N^3\log N)$.

3. In inverse problems, an “inverse crime” occurs when one uses the same theoretical ingredients to simulate the measurements and to reconstruct the object of interest (Wirgin, 2004, Chavez et al., 2013). This may yield over-optimistic results and should be avoided. In our simulations, the use of different KBWF basis functions for (1) the computation of the projections and (2) the reconstruction of the 3D object strongly reduces this risk. Moreover, the evaluation of our algorithm on real data guarantees an unbiased evaluation of its performance from this perspective.

Contributor Information

Laurène Donati, Email: laurene.donati@epfl.ch.

Carlos Oscar S. Sorzano, Email: coss@cnb.csic.es.

Michael Unser, Email: michael.unser@epfl.ch.

Appendix A. ADMM-CG algorithm

We formulate our reconstruction task as

$$\mathbf{c}_s^{\ast}=\arg\min_{\mathbf{c}_s\in\mathcal{C}}\ \tfrac{1}{2}\|\mathbf{H}_s\mathbf{c}_s-\mathbf{b}\|_2^2+\lambda\|\mathbf{L}\mathbf{c}_s\|_1, \qquad (A.1)$$

where $\lambda$ controls the strength of the regularization. We define the auxiliary variable $\mathbf{u}=\mathbf{L}\mathbf{c}_s$ and rewrite the optimization problem as the constrained optimization problem (Ramani and Fessler, 2012, Boyd et al., 2010),

$$J(\mathbf{c}_s)=\min_{\mathbf{c}_s\in\mathbb{R}^n,\ \mathbf{u}=\mathbf{L}\mathbf{c}_s}\ \tfrac{1}{2}\|\mathbf{H}_s\mathbf{c}_s-\mathbf{b}\|_2^2+\lambda\|\mathbf{u}\|_1. \qquad (A.2)$$

Its scaled augmented Lagrangian functional is given by

$$\mathcal{L}_\mu(\mathbf{c}_s,\mathbf{u},\boldsymbol{\alpha})=\tfrac{1}{2}\|\mathbf{H}_s\mathbf{c}_s-\mathbf{b}\|_2^2+\lambda\|\mathbf{u}\|_1+\boldsymbol{\alpha}^T(\mathbf{L}\mathbf{c}_s-\mathbf{u})+\tfrac{\mu}{2}\|\mathbf{L}\mathbf{c}_s-\mathbf{u}\|_2^2, \qquad (A.3)$$

where α is the vector of Lagrange multipliers. ADMM is used to separate the optimization problem into simpler ones (Ramani and Fessler, 2012, Boyd et al., 2010),

$$\begin{aligned}\mathbf{c}_s^{k+1}&=\arg\min_{\mathbf{c}_s}\ \mathcal{L}(\mathbf{c}_s,\mathbf{u}^k,\boldsymbol{\alpha}^k), &&\text{(a)}\\ \mathbf{u}^{k+1}&=\arg\min_{\mathbf{u}}\ \mathcal{L}(\mathbf{c}_s^{k+1},\mathbf{u},\boldsymbol{\alpha}^k), &&\text{(b)}\\ \boldsymbol{\alpha}^{k+1}&=\boldsymbol{\alpha}^k+\mu\,(\mathbf{L}\mathbf{c}_s^{k+1}-\mathbf{u}^{k+1}). &&\text{(c)}\end{aligned} \qquad (A.4)$$

Eq. A.4(a) is a quadratic minimization with respect to $\mathbf{c}_s$, with gradient

$$\nabla_{\mathbf{c}_s}\mathcal{L}(\mathbf{c}_s,\mathbf{u}^k,\boldsymbol{\alpha}^k)=\big(\mathbf{H}_s^T\mathbf{H}_s+\mu\mathbf{L}^T\mathbf{L}\big)\mathbf{c}_s-\Big(\mathbf{H}_s^T\mathbf{b}+\mu\mathbf{L}^T\big(\mathbf{u}^k-\tfrac{\boldsymbol{\alpha}^k}{\mu}\big)\Big). \qquad (A.5)$$

The minimizer of this quadratic cost is the root of the gradient function, which we find by using a conjugate-gradient method.

Eq. A.4(b) admits the fast explicit solution

$$\mathbf{u}^{k+1}=\operatorname{prox}_{\lambda/\mu}\big(\mathbf{L}\mathbf{c}_s^{k+1}-\boldsymbol{\alpha}^k\big), \qquad (A.6)$$

where the proximal operator corresponds to a soft-thresholding operation (Combettes et al., 2011). The solution of A.4(b) is thus obtained with

$$\mathbf{u}^{k+1}=\operatorname{sign}\!\big(\mathbf{L}\mathbf{c}_s^{k+1}-\boldsymbol{\alpha}^k\big)\cdot\max\!\big(0,\,\big|\mathbf{L}\mathbf{c}_s^{k+1}-\boldsymbol{\alpha}^k\big|-(\lambda/\mu)\big). \qquad (A.7)$$

Finally, A.4(c) corresponds to a simple update of the Lagrange parameter.

Appendix B. Proof of Theorem 1

As a preliminary, we provide in Proposition 3 a result on functions that satisfy the radial Nyquist criterion. This proposition will be needed in further proofs.

Proposition 3

For any pair of functions $(f,g)$ that satisfy the radial Nyquist criterion, it holds that

$$\sum_{\mathbf{n}\in\mathbb{Z}^2}f(\mathbf{n})\,g(\mathbf{n})=\int_{\mathbb{R}^2}f(\mathbf{x})\,g(\mathbf{x})\,\mathrm{d}\mathbf{x}, \qquad (B.1)$$

where $\mathbf{n}=(n_1,n_2)$ and $\mathbf{x}=(x_1,x_2)$.

Proof. Since the functions $f$ and $g$ satisfy the radial Nyquist criterion, we can apply Shannon's theorem and expand them using sinc functions to obtain

$$f(\mathbf{x})=\sum_{\mathbf{k}}f(\mathbf{k})\,\operatorname{sinc}(\mathbf{x}-\mathbf{k}), \qquad (B.2)$$
$$g(\mathbf{x})=\sum_{\mathbf{k}}g(\mathbf{k})\,\operatorname{sinc}(\mathbf{x}-\mathbf{k}), \qquad (B.3)$$

where $\operatorname{sinc}(\mathbf{x})=\operatorname{sinc}(x_1)\operatorname{sinc}(x_2)$. The orthonormality of the sinc function and its shifts yields the desired result.

We now provide the proof of Theorem 1 for objects that lie in a 3D space. We recall that the vector $\mathbf{c}_s$ is defined as $\mathbf{c}_s=(c_s[\mathbf{k}])_{\mathbf{k}\in\Omega_{3D}^{s}}$, where $c_s$ is the sequence of coefficients and the set $\Omega_{3D}^{s}$ corresponds to the support of the coefficients required to represent the object $f_s$.

Proof. We rewrite, for $\mathbf{k}\in\Omega_{3D}^{s}$, the entries of the discrete product $\mathbf{H}_s^T\mathbf{H}_s\mathbf{c}_s$ as

$$\begin{aligned}[\mathbf{H}_s^T\mathbf{H}_s\mathbf{c}_s]_{\mathbf{k}} &= \sum_{p=1}^{P}\big[(\mathbf{H}_s^p)^T\mathbf{H}_s^p\mathbf{c}_s\big]_{\mathbf{k}}\\ &\overset{(i)}{=} \sum_{p=1}^{P}\sum_{\mathbf{l}\in\mathbb{Z}^3}c_s[\mathbf{l}]\sum_{\mathbf{j}\in\mathbb{Z}^2}\big(\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}*w_p\big)(\mathbf{j}-s\mathbf{M}_{\boldsymbol{\theta}_p}\mathbf{l})\,\big(\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}*w_p\big)(\mathbf{j}-s\mathbf{M}_{\boldsymbol{\theta}_p}\mathbf{k})\\ &\overset{(ii)}{=} \sum_{p=1}^{P}\sum_{\mathbf{l}\in\mathbb{Z}^3}c_s[\mathbf{l}]\int_{\mathbb{R}^2}\big(\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}*w_p\big)(\mathbf{y}-s\mathbf{M}_{\boldsymbol{\theta}_p}\mathbf{k})\,\big(\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}*w_p\big)(\mathbf{y}-s\mathbf{M}_{\boldsymbol{\theta}_p}\mathbf{l})\,\mathrm{d}\mathbf{y}\\ &\overset{(iii)}{=} \sum_{p=1}^{P}\sum_{\mathbf{l}\in\mathbb{Z}^3}c_s[\mathbf{l}]\int_{\mathbb{R}^2}\big(\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}*w_p\big)(\mathbf{y})\,\big(\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}*w_p\big)\big(\mathbf{y}-s\mathbf{M}_{\boldsymbol{\theta}_p}(\mathbf{l}-\mathbf{k})\big)\,\mathrm{d}\mathbf{y}\\ &\overset{(iv)}{=} \sum_{p=1}^{P}\sum_{\mathbf{l}\in\mathbb{Z}^3}c_s[\mathbf{l}]\,\Big(\big(\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}*w_p\big)*\big(\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}*w_p\big)^{\vee}\Big)\big(s\mathbf{M}_{\boldsymbol{\theta}_p}(\mathbf{l}-\mathbf{k})\big)\\ &\overset{(v)}{=} \sum_{\mathbf{l}\in\mathbb{Z}^3}c_s[\mathbf{l}]\sum_{p=1}^{P}\Big(\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}*\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}^{\vee}*w_p*(w_p)^{\vee}\Big)\big(s\mathbf{M}_{\boldsymbol{\theta}_p}(\mathbf{l}-\mathbf{k})\big). \end{aligned} \qquad (B.4)$$

We thus have that the discrete product $\mathbf{H}_s^T\mathbf{H}_s\mathbf{c}_s$ can be computed as the discrete convolution

$$[\mathbf{H}_s^T\mathbf{H}_s\mathbf{c}_s]_{\mathbf{k}}=(c_s*r_s)[\mathbf{k}], \qquad (B.5)$$

with kernel

$$r_s[\mathbf{k}]=\sum_{p=1}^{P}\Big(\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}*\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}^{\vee}*q^{p}\Big)(s\mathbf{M}_{\boldsymbol{\theta}_p}\mathbf{k}), \qquad (B.6)$$

where the function $q^{p}(\mathbf{y})=\big(w_p*(w_p)^{\vee}\big)(\mathbf{y})$ with $\mathbf{y}\in\mathbb{R}^2$ corresponds to the autocorrelation function of the PSF $w_p(\mathbf{y})$.

Equality (i) derives from the definition of $\mathbf{H}_s^p$ given by (7) and from the compact support of $c_s$ and $\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}$. Equality (ii) results from Proposition 3, which can be invoked here as the function $\big(\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}*w_p\big)(\mathbf{n})$ with $\mathbf{n}\in\mathbb{Z}^2$ verifies the radial Nyquist criterion. Indeed, as the basis function $\varphi_s$ verifies the radial Nyquist criterion by hypothesis, $\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}$ also satisfies the radial Nyquist criterion through the central-slice theorem, and then so does $\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}*w_p$ through the convolution theorem. Equality (iii) is obtained through a simple change of variables, while Equalities (iv) and (v) use properties of the continuous convolution.

We then rewrite the kernel $r_s[\mathbf{k}]$ given by (B.6) as:

$$\begin{aligned}r_s[\mathbf{k}] &= \sum_{p=1}^{P}\Big(\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}*\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}^{\vee}*q^{p}\Big)(s\mathbf{M}_{\boldsymbol{\theta}_p}\mathbf{k})\\ &\overset{(vi)}{=} |s|^2\sum_{p=1}^{P}\Big(\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi\}(\cdot/s)*\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi\}^{\vee}(\cdot/s)*q^{p}(\cdot)\Big)(s\mathbf{M}_{\boldsymbol{\theta}_p}\mathbf{k})\\ &\overset{(vii)}{=} |s|^4\sum_{p=1}^{P}\Big(\big(\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi\}*\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi\}^{\vee}\big)(\cdot/s)*q_{1/s}^{p}(\cdot/s)\Big)(s\mathbf{M}_{\boldsymbol{\theta}_p}\mathbf{k})\\ &\overset{(viii)}{=} |s|^6\sum_{p=1}^{P}\Big(\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi\}*\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi\}^{\vee}*q_{1/s}^{p}\Big)(\mathbf{M}_{\boldsymbol{\theta}_p}\mathbf{k}), \end{aligned} \qquad (B.7)$$

where $q_{1/s}^{p}(\mathbf{y})=q^{p}(s\mathbf{y})$. Equality (vi) is obtained after applying twice the scale-invariance property of the X-ray transform ($s\neq 0$)

$$\mathcal{P}_{\boldsymbol{\theta}}\{f(\cdot/s)\}(\mathbf{x})=|s|\cdot\mathcal{P}_{\boldsymbol{\theta}}\{f(\cdot)\}(\mathbf{x}/s). \qquad (B.8)$$

Equalities (vii) and (viii) are both derived by using a well-known result on the convolution of two scaled functions $\mathcal{P}_{\boldsymbol{\theta}}\{f\},\mathcal{P}_{\boldsymbol{\theta}}\{g\}$ defined on $\mathbb{R}^2$:

$$\big(\mathcal{P}_{\boldsymbol{\theta}}\{f\}(\cdot/s)*\mathcal{P}_{\boldsymbol{\theta}}\{g\}(\cdot/s)\big)(\mathbf{x})=|s|^2\,\big(\mathcal{P}_{\boldsymbol{\theta}}\{f\}*\mathcal{P}_{\boldsymbol{\theta}}\{g\}\big)(\mathbf{x}/s). \qquad (B.9)$$

Appendix C. Proof of Theorem 2

We provide the proof for objects that lie in a 3D space.

Proof. We rewrite, for $\mathbf{k}\in\Omega_{3D}^{s}$, the entries of the product $\mathbf{H}_s^T\mathbf{b}$ as:

$$\begin{aligned}[\mathbf{H}_s^T\mathbf{b}]_{\mathbf{k}} &= \sum_{p=1}^{P}\big[(\mathbf{H}_s^p)^T\mathbf{b}_p\big]_{\mathbf{k}}\\ &\overset{(i)}{=} \sum_{p=1}^{P}\sum_{\mathbf{j}\in\Omega_{2D}}b_p[\mathbf{j}]\,\Big(\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}(\cdot-s\mathbf{M}_{\boldsymbol{\theta}_p}\mathbf{k})*w_p\Big)(\mathbf{j})\\ &\overset{(ii)}{=} \sum_{p=1}^{P}\int_{\mathbb{R}^2}b_p(\mathbf{y})\,\Big(\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}(\cdot-s\mathbf{M}_{\boldsymbol{\theta}_p}\mathbf{k})*w_p\Big)(\mathbf{y})\,\mathrm{d}\mathbf{y}\\ &\overset{(iii)}{=} \sum_{p=1}^{P}\int_{\mathbb{R}^2}b_p(\mathbf{y})\int_{\mathbb{R}^2}\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}(\mathbf{y}'-s\mathbf{M}_{\boldsymbol{\theta}_p}\mathbf{k})\,w_p(\mathbf{y}-\mathbf{y}')\,\mathrm{d}\mathbf{y}'\,\mathrm{d}\mathbf{y}\\ &\overset{(iv)}{=} \sum_{p=1}^{P}\int_{\mathbb{R}^2}\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}(\mathbf{y}'-s\mathbf{M}_{\boldsymbol{\theta}_p}\mathbf{k})\int_{\mathbb{R}^2}b_p(\mathbf{y})\,w_p(\mathbf{y}-\mathbf{y}')\,\mathrm{d}\mathbf{y}\,\mathrm{d}\mathbf{y}'\\ &\overset{(v)}{=} \sum_{p=1}^{P}\int_{\mathbb{R}^2}\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}(\mathbf{y}'-s\mathbf{M}_{\boldsymbol{\theta}_p}\mathbf{k})\,\big(b_p*(w_p)^{\vee}\big)(\mathbf{y}')\,\mathrm{d}\mathbf{y}'\\ &\overset{(vi)}{=} \sum_{p=1}^{P}\Big(\mathbf{b}_p*\mathcal{P}_{\boldsymbol{\theta}_p}\{\varphi_s\}*(w_p)^{\vee}\Big)(s\mathbf{M}_{\boldsymbol{\theta}_p}\mathbf{k}). \end{aligned} \qquad (C.1)$$

Equality (i) is obtained by applying the definition of $\mathbf{H}_s^p$ given by (7). Equality (ii) results from Proposition 3 (see also the proof of Theorem 1). Equalities (iii), (v), and (vi) are obtained by using the definition of continuous convolutions. Equality (iv) is a simple rearrangement of both integrals.

References

1. Abrishami V., Bilbao-Castro J., Vargas J., Marabini R., Carazo J., Sorzano C. A fast iterative convolution weighting approach for gridding-based direct Fourier three-dimensional reconstruction with correction for the contrast transfer function. Ultramicroscopy. 2015;157:79–87. doi: 10.1016/j.ultramic.2015.05.018.
2. Adelson E.H., Anderson C.H., Bergen J.R., Burt P.J., Ogden J.M. Pyramid methods in image processing. RCA Eng. 1984;29(6):33–41.
3. Bartesaghi A., Matthies D., Banerjee S., Merk A., Subramaniam S. Structure of β-galactosidase at 2.2 Å resolution obtained by cryo-electron microscopy. Proc. Nat. Acad. Sci. 2014;111(32):11709–11714. doi: 10.1073/pnas.1402809111.
4. Boyd S., Vandenberghe L. Cambridge University Press; 2004. Convex Optimization.
5. Boyd S., Parikh N., Chu E., Peleato B., Eckstein J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations Trends Mach. Learn. 2010;3(1):1–122.
6. Campbell M.G., Veesler D., Cheng A., Potter C.S., Carragher B. 2.8 Å resolution reconstruction of the Thermoplasma acidophilum 20S proteasome using cryo-electron microscopy. Elife. 2015;4:e06380. doi: 10.7554/eLife.06380.
7. Candès E.J., Romberg J.K., Tao T. Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 2006;59(8):1207–1223.
8. Carazo J., Sorzano C., Otón J., Marabini R., Vargas J. Three-dimensional reconstruction methods in single particle analysis from transmission electron microscopy data. Arch. Biochem. Biophys. 2015;581:39–48. doi: 10.1016/j.abb.2015.05.003.
9. Chavez C.E., Alonzo-Atienza F., Alvarez D. IEEE; 2013. Avoiding the inverse crime in the inverse problem of electrocardiography: estimating the shape and location of cardiac ischemia; pp. 687–690. (Computing in Cardiology Conference (CinC)).
10. Cheng Y., Grigorieff N., Penczek P.A., Walz T. A primer to single-particle cryo-electron microscopy. Cell. 2015;161(3):438–449. doi: 10.1016/j.cell.2015.03.050.
11. Combettes P.L., Pesquet J.-C. Springer; 2011. Proximal splitting methods in signal processing; pp. 185–212. (Fixed-point algorithms for inverse problems in science and engineering).
12. de la Rosa-Trevín J., Quintana A., del Cano L., Zaldívar A., Foche I., Gutiérrez J., Gómez-Blanco J., Burguet-Castell J., Cuenca-Alba J., Abrishami V., Vargas J., Otón J., Sharov G., Vilas J., Navas J., Conesa P., Kazemi M., Marabini R., Sorzano C., Carazo J. Scipion: a software framework toward integration, reproducibility and validation in 3D electron microscopy. J. Struct. Biol. 2016;195(1):93–99. doi: 10.1016/j.jsb.2016.04.010.
13. Dengler J. A multi-resolution approach to the 3D reconstruction from an electron microscope tilt series solving the alignment problem without gold particles. Ultramicroscopy. 1989;30(3):337–348.
14. Desco M., Hernandez J., Santos A., Brammer M. Multiresolution analysis in fMRI: sensitivity and specificity in the detection of brain activation. Hum. Brain Mapp. 2001;14(1):16–27. doi: 10.1002/hbm.1038.
15. Deserno, M., 2004. How to generate equidistributed points on the surface of a sphere, P.-If Polymerforshung (Ed.).
16. Dubochet J., Adrian M., Chang J.-J., Homo J.-C., Lepault J., McDowall A.W., Schultz P. Cryo-electron microscopy of vitrified specimens. Q. Rev. Biophys. 1988;21(2):129–228. doi: 10.1017/s0033583500004297.
17. Fergus R., Singh B., Hertzmann A., Roweis S.T., Freeman W.T. vol. 25. ACM; 2006. Removing camera shake from a single photograph; pp. 787–794. (ACM Transactions on Graphics (TOG)).
18. Fernandez-Leiro R., Scheres S.H. Unravelling biological macromolecules with cryo-electron microscopy. Nature. 2016;537(7620):339. doi: 10.1038/nature19948.
19. Frank J. Oxford University Press; 2006. Three-dimensional Electron Microscopy of Macromolecular Assemblies: Visualization of Biological Molecules in their Native State.
20. Frank J., Shimkin B., Dowse H. SPIDER–a modular software system for electron image processing. Ultramicroscopy. 1981;6(4):343–357.
21. Gordon R., Bender R., Herman G.T. Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and X-ray photography. J. Theor. Biol. 1970;29(3):471–481. doi: 10.1016/0022-5193(70)90109-8.
22. Grigorieff N. FREALIGN: high-resolution refinement of single particle structures. J. Struct. Biol. 2007;157(1):117–125. doi: 10.1016/j.jsb.2006.05.004.
23. Henderson R. Avoiding the pitfalls of single particle cryo-electron microscopy: Einstein from noise. Proc. Nat. Acad. Sci. 2013;110(45):18037–18041. doi: 10.1073/pnas.1314449110.
24. Henderson R., Chen S., Chen J.Z., Grigorieff N., Passmore L.A., Ciccarelli L., Rubinstein J.L., Crowther R.A., Stewart P.L., Rosenthal P.B. Tilt-pair analysis of images from a range of different specimens in single-particle electron cryomicroscopy. J. Mol. Biol. 2011;413(5):1028–1046. doi: 10.1016/j.jmb.2011.09.008.
25. Henderson R., Sali A., Baker M.L., Carragher B., Devkota B., Downing K.H., Egelman E.H., Feng Z., Frank J., Grigorieff N., Jiang W., Ludtke S.J., Medalia O., Penczek P.A., Rosenthal P.B., Rossmann M.G., Schmid M.F., Schröder G.F., Steven A.C., Stokes D.L., Westbrook J.D., Wriggers W., Yang H., Young J., Berman H.M., Chiu W., Kleywegt G.J., Lawson C.L. Outcome of the first electron microscopy validation task force meeting. Structure. 2012;20(2):205–214. doi: 10.1016/j.str.2011.12.014.
26. Hohn M., Tang G., Goodyear G., Baldwin P.R., Huang Z., Penczek P.A., Yang C., Glaeser R.M., Adams P.D., Ludtke S.J. SPARX, a new environment for cryo-EM image processing. J. Struct. Biol. 2007;157(1):47–55. doi: 10.1016/j.jsb.2006.07.003.
27. Jin K.H., McCann M.T., Froustey E., Unser M. Deep convolutional neural network for inverse problems in imaging. IEEE Trans. Image Process. 2017;26(9):4509–4522. doi: 10.1109/TIP.2017.2713099.
28. Lewitt R.M. Multidimensional digital image representations using generalized Kaiser–Bessel window functions. J. Opt. Soc. Am. A. 1990;7(10):1834–1846. doi: 10.1364/josaa.7.001834.
29. Li M., Xu G., Sorzano C.O., Sun F., Bajaj C.L. Single-particle reconstruction using L2-gradient flow. J. Struct. Biol. 2011;176(3):259–267. doi: 10.1016/j.jsb.2011.08.005.
30. Marabini R., Herman G.T., Carazo J.M. 3D reconstruction in electron microscopy using ART with smooth spherically symmetric volume elements (blobs). Ultramicroscopy. 1998;72(1–2):53–65. doi: 10.1016/s0304-3991(97)00127-7.
31. McCann M.T., Nilchian M., Stampanoni M., Unser M. Fast 3D reconstruction method for differential phase contrast X-ray CT. Opt. Express. 2016;24(13):14564–14581. doi: 10.1364/OE.24.014564.
32. Milne J.L., Borgnia M.J., Bartesaghi A., Tran E.E., Earl L.A., Schauder D.M., Lengyel J., Pierson J., Patwardhan A., Subramaniam S. Cryo-electron microscopy—a primer for the non-microscopist. FEBS J. 2013;280(1):28–45. doi: 10.1111/febs.12078.
33. Natterer F. The Mathematics of Computerized Tomography. Soc. Ind. Appl. Math. 2001.
34. Nilchian M., Ward J.P., Vonesch C., Unser M. Optimized Kaiser–Bessel window functions for computed tomography. IEEE Trans. Image Process. 2015;24(11):3826–3833. doi: 10.1109/TIP.2015.2451955.
35. Orlova E., Saibil H.R. Structural analysis of macromolecular assemblies by electron microscopy. Chem. Rev. 2011;111(12):7710–7748. doi: 10.1021/cr100353t.
36. Penczek P.A., Grassucci R.A., Frank J. The ribosome at improved resolution: new techniques for merging and orientation refinement in 3D cryo-electron microscopy of biological particles. Ultramicroscopy. 1994;53(3):251–270. doi: 10.1016/0304-3991(94)90038-8.
37. Penczek P., Zhu J., Schröder R., Frank J. Three dimensional reconstruction with contrast transfer compensation from defocus series. Scanning Microsc. 1997;11:147–154.
38. Penczek P.A., Renka R., Schomberg H. Gridding-based direct Fourier inversion of the three-dimensional ray transform. J. Opt. Soc. Am. A. 2004;21(4):499–509. doi: 10.1364/josaa.21.000499.
39. Pettersen E.F., Goddard T.D., Huang C.C., Couch G.S., Greenblatt D.M., Meng E.C., Ferrin T.E. UCSF Chimera–a visualization system for exploratory research and analysis. J. Comput. Chem. 2004;25(13):1605–1612. doi: 10.1002/jcc.20084.
40. Punjani A., Rubinstein J.L., Fleet D.J., Brubaker M.A. cryoSPARC: algorithms for rapid unsupervised cryo-EM structure determination. Nat. Methods. 2017;14(3):290. doi: 10.1038/nmeth.4169.
41. Ramani S., Fessler J.A. A splitting-based iterative algorithm for accelerated statistical X-ray CT reconstruction. IEEE Trans. Med. Imaging. 2012;31(3):677–688. doi: 10.1109/TMI.2011.2175233.
42. Rudin L.I., Osher S., Fatemi E. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena. 1992;60(1–4):259–268.
43. Ruiz P., Zhou X., Mateos J., Molina R., Katsaggelos A.K. Variational Bayesian blind image deconvolution: a review. Digital Signal Processing. 2015;47:116–127.
44. Saad A., Chiu W. vol. 4. IEEE; 2000. Hierarchical wavelets projection matching for orientation determination of low contrast electron cryomicroscopic images of icosahedral virus particles; pp. 2270–2273. (Acoustics, Speech, and Signal Processing, 2000. ICASSP'00. Proceedings. 2000 IEEE International Conference on).
45. Scheres S.H. RELION: implementation of a Bayesian approach to cryo-EM structure determination. J. Struct. Biol. 2012;180(3):519–530. doi: 10.1016/j.jsb.2012.09.006.
46. Scheres S.H., Chen S. Prevention of overfitting in cryo-EM structure determination. Nat. Methods. 2012;9(9):853. doi: 10.1038/nmeth.2115.
47. Sigworth F.J., Doerschuk P.C., Carazo J.-M., Scheres S.H. vol. 482. Elsevier; 2010. An introduction to maximum-likelihood methods in cryo-EM; pp. 263–294. (Methods in Enzymology).
48. Sorzano C., Jonić S., El-Bez C., Carazo J., De Carlo S., Thévenaz P., Unser M. A multiresolution approach to orientation assignment in 3D electron microscopy of single particles. J. Struct. Biol. 2004;146(3):381–392. doi: 10.1016/j.jsb.2004.01.006.
49. Sorzano C., De La Fraga L., Clackdoyle R., Carazo J. Normalizing projection images: a study of image normalizing procedures for single particle three-dimensional electron microscopy. Ultramicroscopy. 2004;101(2–4):129–138. doi: 10.1016/j.ultramic.2004.04.004.
50. Sorzano C., Marabini R., Velázquez-Muriel J., Bilbao-Castro J.R., Scheres S.H., Carazo J.M., Pascual-Montano A. XMIPP: a new generation of an open-source image processing package for electron microscopy. J. Struct. Biol. 2004;148(2):194–204. doi: 10.1016/j.jsb.2004.06.006.
51. Sorzano C.O.S., Marabini R., Pascual-Montano A., Scheres S.H., Carazo J.M. Optimization problems in electron microscopy of single particles. Ann. Oper. Res. 2006;148(1):133–165.
52. Sorzano C., Bilbao-Castro J., Shkolnisky Y., Alcorlo M., Melero R., Caffarena-Fernández G., Li M., Xu G., Marabini R., Carazo J. A clustering approach to multireference alignment of single-particle projections in electron microscopy. J. Struct. Biol. 2010;171(2):197–206. doi: 10.1016/j.jsb.2010.03.011.
53. Sorzano C., Vargas J., de la Rosa-Trevin J., Oton J., Alvarez-Cabrera A., Abrishami V., Sesmero E., Marabini R., Carazo J. A statistical approach to the initial volume problem in single particle analysis by electron microscopy. J. Struct. Biol. 2015;189(3):213–219. doi: 10.1016/j.jsb.2015.01.009.
54. Sorzano C.O.S., Vargas J., Otón J., de la Rosa-Trevín J., Vilas J., Kazemi M., Melero R., Del Caño L., Cuenca J., Conesa P., Gomez-Blanco J. A survey of the use of iterative reconstruction algorithms in electron microscopy. BioMed Res. Int. 2017. doi: 10.1155/2017/6482567.
55. Sorzano, C.O.S., Vargas, J., de la Rosa-Trevín, J.M., Jiménez, A., Maluenda, D., Melero, R., Martínez, M., Ramírez-Aportela, E., Conesa, P., Vilas, J.L., Marabini, R., 2018. A new algorithm for high-resolution reconstruction of single particles by electron microscopy. J. Struct. Biol.
56. Tang G., Peng L., Baldwin P.R., Mann D.S., Jiang W., Rees I., Ludtke S.J. EMAN2: an extensible image processing suite for electron microscopy. J. Struct. Biol. 2007;157(1):38–46. doi: 10.1016/j.jsb.2006.05.009.
57. Thévenaz P., Ruttimann U.E., Unser M. A pyramid approach to subpixel registration based on intensity. IEEE Trans. Image Process. 1998;7(1):27–41. doi: 10.1109/83.650848.
58. Tosic I., Frossard P. Dictionary learning. IEEE Signal Process. Mag. 2011;28(2):27–38.
59. Unser M. Sampling—50 years after Shannon. Proc. IEEE. 2000;88(4):569–587.
60. Unser M., Aldroubi A. A review of wavelets in biomedical applications. Proc. IEEE. 1996;84(4):626–638.
61. Wirgin, A., 2004. The inverse crime, arXiv preprint math-ph/0401050.
62. Zhu J., Penczek P.A., Schröder R., Frank J. Three-dimensional reconstruction with contrast transfer function correction from energy-filtered cryoelectron micrographs: procedure and application to the 70S Escherichia coli ribosome. J. Struct. Biol. 1997;118(3):197–219. doi: 10.1006/jsbi.1997.3845.
