Author manuscript; available in PMC: 2019 Jan 29.
Published in final edited form as: Methods Mol Biol. 2018;1688:341–352. doi: 10.1007/978-1-4939-7386-6_15

Practical Nonuniform Sampling and Non-Fourier Spectral Reconstruction for Multidimensional NMR

Mark W Maciejewski 1, Adam D Schuyler 1, Jeffrey C Hoch 1
PMCID: PMC6350247  NIHMSID: NIHMS1007616  PMID: 29151216

Abstract

A general approach for accelerating multidimensional NMR experiments via nonuniform sampling and maximum entropy spectral reconstruction was first demonstrated by Laue and colleagues in 1987. Following decades of continual improvements involving dozens of software packages for non-Fourier spectral analysis and many different schemes for nonuniform sampling, we still lack a clear consensus on best practices for sampling or spectral reconstruction, and programs for processing nonuniformly sampled data are not particularly user-friendly. Nevertheless, it is possible to discern conservative and general guidelines for nonuniform sampling and spectral reconstruction. Here we describe a robust semi-automated workflow that employs these guidelines for simplifying the selection of a sampling schedule and the processing of the resulting nonuniformly sampled multidimensional NMR data. Our approach is based on NMRbox, a shared platform for NMR software that facilitates workflow development and execution, and enables rapid comparison of alternate approaches.

Keywords: nonuniform sampling, NMR, multidimensional, Maximum Entropy Reconstruction (MaxEnt), Spectral Reconstruction, nus-tool

1. Introduction

The difficulty of obtaining high resolution spectral estimates from short data vectors using the discrete Fourier Transform (DFT) was well understood from the inception of pulsed FT-NMR[1], but did not present a serious obstacle until the advent of multidimensional NMR experiments. These experiments employ a paradigm introduced by Jeener[2], in which one or more indirect time dimensions are sampled parametrically, by repeating the experiment while incrementing the time delays corresponding to the indirect time dimensions. Consequently, the total experiment time is directly proportional to the number of sampled evolution times along the indirect dimensions; long data vectors needed for high resolution require lengthy experiments. With conventional sampling at uniform intervals required by the DFT, sampling sufficient for high resolution in the indirect dimensions can become prohibitive for three- and higher-dimensional experiments. With sampling constrained by the Nyquist sampling theorem, the sampling problem for multidimensional experiments becomes more acute with increasing magnetic field.

A general solution to this sampling problem was introduced by Laue and colleagues[3, 4]: instead of sampling uniformly in the indirect dimensions, a subset of evolution times from the uniform (Nyquist) grid is sampled, and a non-Fourier method of spectrum analysis capable of computing high-resolution spectra despite the “missing” data is used to estimate the spectrum. The method employed by Laue and colleagues was maximum entropy (MaxEnt) reconstruction[5], a powerful and robust regularization method that makes no assumption about the nature of the signals. A closely related regularization method that employs the l1-norm as the regularization functional instead of the entropy is used in an approach known as Compressed Sensing (CS)[6]. Both methods utilize data sampled non-uniformly from the uniform Nyquist grid. They both iteratively determine the spectrum by regularizing a trial spectrum (minimizing the l1-norm or maximizing the entropy), and inverting the spectrum via inverse DFT to generate a “mock” data set that is constrained (either approximately or exactly, in different approaches) to agree with the measured data. There are many other methods capable of computing spectra from non-uniformly sampled data[7], including a large class of methods that model the time domain data as a sum of exponentially decaying sinusoids. The algorithms employed by different methods have been described elsewhere, but a thorough critical comparison of the results from different approaches has yet to be reported.

The results, regardless of the method used to compute the spectrum, depend on the number and distribution of indirect evolution times that are sampled[8-10]. The field of compressed sensing (CS) provides theorems[11] specifying the minimal number of samples needed to faithfully reconstruct a spectrum via l1-norm minimization; this number depends on the sparsity of the spectrum to be recovered, that is, the fraction of elements in the spectrum that are non-zero. While these theorems indicate the presence of minimal sampling thresholds for accurate recovery of the spectrum from nonuniformly sampled data, their quantitative meaning for NMR spectroscopy remains unclear, because NMR resonances have finite linewidths that span multiple elements of the spectrum; in fact, the Lorentzian lineshape is not band-limited. Supporting evidence that the CS theorems do not directly apply to Lorentzian lines is that steps taken to make the spectrum more sparse improve the performance of l1-norm minimization for spectral recovery[12], mitigating some artifacts[13]. Since we cannot rely on CS theorems to estimate the amount of sampling needed for accurate recovery of NMR spectra, we rely instead upon empirical experience to provide conservative guidelines.

Two main principles governing the design of robust and efficient sampling schemes are that the sampling should be biased toward the part of the signal with the highest data content and that it should be as incoherent as possible. The latter helps to ensure that sampling artifacts are as small as possible, since no non-Fourier method is capable of perfect suppression of sampling artifacts in the presence of noise. The former is needed to ensure high sensitivity. Quantitative measures of both properties exist that can be computed for sample schedules a priori. We define the sampling function as a real-valued vector that is isomorphic with the sampling grid, having the value 1 at elements corresponding to indirect evolution times that are sampled, and the value zero for evolution times not sampled. For an exponentially decaying signal with two indirect dimensions, the relative sensitivity of a scheme with sampling function K spanning a two-dimensional grid with size n1 by n2 is given by

r(K) = \frac{\sum_{i=1}^{n_1} \sum_{j=1}^{n_2} K_{ij}\, p_{ij}}{\sum_{i=1}^{n_1} \sum_{j=1}^{n_2} p_{ij}}    (1)

where the elements of p are given by

p_{ij} = \exp\left\{ -\frac{i\, R_2^{(1)}}{SW_1} - \frac{j\, R_2^{(2)}}{SW_2} \right\}    (2)

and R2(1) and R2(2) are the signal envelope decay rates, SW1 and SW2 are the spectral widths in the two dimensions, and i/SW1 and j/SW2 are the evolution times on the Nyquist grid. Eqs. (1) and (2) generalize to other signal envelopes (i.e., non-exponential decay) and to arbitrary dimensionality.
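
For a concrete illustration, Eqs. (1) and (2) can be evaluated directly for any candidate schedule. The following is a minimal Python/NumPy sketch; the grid size, decay rates, spectral widths, and coverage are arbitrary example values, and the evolution times on the grid are taken as i/SW1 and j/SW2, consistent with Eq. (2).

    import numpy as np

    def relative_sensitivity(K, R2, SW):
        """Relative sensitivity r(K) of a 2D sampling scheme, per Eqs. (1)-(2).

        K  : (n1, n2) array of 0/1 values (the sampling function)
        R2 : (R2_1, R2_2) signal envelope decay rates in s^-1
        SW : (SW1, SW2) spectral widths in Hz (Nyquist dwell times are 1/SW)
        """
        n1, n2 = K.shape
        i = np.arange(n1)[:, None]   # evolution index along t1
        j = np.arange(n2)[None, :]   # evolution index along t2
        # signal envelope at each grid point, t1 = i/SW1 and t2 = j/SW2
        p = np.exp(-(i * R2[0] / SW[0]) - (j * R2[1] / SW[1]))
        return (K * p).sum() / p.sum()

    # example: 30% random coverage of a 64 x 64 grid, 20 s^-1 decay, 2 kHz widths
    rng = np.random.default_rng(0)
    K = (rng.random((64, 64)) < 0.3).astype(float)
    print(relative_sensitivity(K, (20.0, 20.0), (2000.0, 2000.0)))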

A measure of the incoherence of a sampling schedule is the peak-to-sidelobe ratio (PSR) for the point-spread function (PSF) corresponding to the sampling scheme[14, 15]. The PSF is the DFT of the sampling function. The PSF has a strong central (zero frequency) component, with non-zero frequency values corresponding to sampling artifacts (for uniform sampling the PSF has a single non-zero value, the zero frequency component). The PSR is the ratio of the value of the zero-frequency component to the value of the largest non-zero-frequency component. Coherent sampling schemes have small PSR, and large sampling artifacts; incoherent sampling schemes have large PSR and small sampling artifacts.
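
A minimal Python/NumPy sketch of the PSR calculation is shown below; it assumes the sampling function is stored as a 0/1 array, and the grid sizes and coverage in the example are arbitrary.

    import numpy as np

    def peak_to_sidelobe_ratio(K):
        """PSR of a sampling function K (a 0/1 array of any dimensionality).

        The PSF is the DFT of the sampling function; the PSR is the magnitude
        of its zero-frequency component divided by the largest magnitude at any
        non-zero frequency. Larger PSR indicates a more incoherent schedule.
        """
        psf = np.abs(np.fft.fftn(K))
        dc = psf.flat[0]              # zero-frequency component
        psf.flat[0] = 0.0             # exclude it when locating the sidelobes
        return dc / psf.max()

    rng = np.random.default_rng(1)
    K_random = (rng.random((64, 64)) < 0.3).astype(float)
    K_regular = np.zeros((64, 64))
    K_regular[::2, ::2] = 1.0         # coherent (regularly decimated) comparison
    print(peak_to_sidelobe_ratio(K_random), peak_to_sidelobe_ratio(K_regular))

In this example the regularly decimated schedule gives a PSR of 1 (its aliases are as strong as the central component), while the random schedule gives a much larger PSR.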

2.1. General guidelines for designing robust sampling schedules

Here we consider parameters of the experiment and sample that influence the design of robust sampling schedules. A more detailed discussion was recently provided by Rovnyak and colleagues[16].

1. Dimensionality.

Nonuniform sampling of a single indirect dimension in two-dimensional experiments presents special challenges, because sampling artifacts are coherent across the orthogonal, uniformly sampled direct dimension[17, 18]. Nonetheless, conservative application of NUS, with sampling coverage of 0.3 or greater of the Nyquist grid, works well. If the dynamic range of the experiment is high, higher sampling coverage is appropriate. NUS from an oversampled grid has been shown to help suppress sampling artifacts by shifting them out of the spectral window. For three- and higher-dimensional experiments with two or more indirect dimensions, sampling coverage of (0.3)^k, where k is the number of indirect dimensions, works well even for experiments with high dynamic range. When dynamic range is low and sensitivity is high, more aggressive undersampling with coverage of (0.1)^k has been reported to be successful.
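
As an illustrative calculation (the grid dimensions here are arbitrary): for a 3D experiment with two indirect dimensions (k = 2) acquired on a 128 × 64 Nyquist grid, coverage of (0.3)^2 ≈ 0.09 corresponds to roughly 0.09 × 8192 ≈ 740 sampled grid points, about an 11-fold reduction in the number of FIDs collected relative to uniform sampling.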

2. Signal envelope.

There are three main classes of signal envelopes encountered in NMR: exponential decay, sine-modulated exponential decay, and stationary (constant-time experiments). Knowledge of the envelope type and decay rates is especially important for designing sampling schedules that attain high sensitivity. For the majority of experiments, which yield exponentially decaying sinusoids, an exponentially biased sampling distribution is appropriate. For J-modulated experiments, a cosine-modulated exponential sampling bias is appropriate, with the cosine frequency matched to the signal anti-node. For constant-time experiments, an unweighted random distribution (or other suitable sampling distribution; see below) is appropriate. For exponentially biased sampling, it has been shown that an “over-matched” bias (i.e., a bias that decays faster than the signal envelope) is beneficial[19] and that sensitivity per unit measuring time is improved by a factor of two when an over-matched exponential sample weighting of 2×T2 is used[20].
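
As an illustration of exponentially biased scheduling with an over-matched bias, the sketch below draws samples with probability weights that decay faster than the signal envelope. This is a simplified, hypothetical generator for illustration only; it is not the algorithm used by nus-tool or any published scheduler, the parameter names are assumptions, and the interpretation of the 2×T2 weighting as a bias decaying at twice the signal's decay rate is likewise an assumption.

    import numpy as np

    def exponential_schedule(n, coverage, T2, SW, over_match=2.0, seed=0):
        """Sketch of an exponentially biased 1D sampling schedule.

        n          : size of the Nyquist grid (number of complex increments)
        coverage   : fraction of grid points to sample
        T2         : signal decay time constant in seconds
        SW         : spectral width in Hz (Nyquist dwell time = 1/SW)
        over_match : factor applied to the decay rate of the bias; 2.0 makes
                     the bias decay twice as fast as the signal ("over-matched")
        """
        rng = np.random.default_rng(seed)
        t = np.arange(n) / SW                       # evolution times on the grid
        w = np.exp(-over_match * t / T2)            # sampling probability weights
        n_pts = int(round(coverage * n))
        # always keep the first increment, then draw the rest without replacement
        idx = rng.choice(np.arange(1, n), size=n_pts - 1, replace=False,
                         p=w[1:] / w[1:].sum())
        return np.sort(np.concatenate(([0], idx)))

    print(exponential_schedule(n=256, coverage=0.3, T2=0.05, SW=2500.0))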

3. Maximum increment.

For Fourier methods of spectral analysis, resolution is largely determined by the maximum evolution time tmax. Non-Fourier methods are in principle capable of super-resolution, that is, of resolving frequencies closer than 1/tmax. Nevertheless, the attainable resolution still depends on the maximum evolution time sampled, and we find that a maximum evolution time of 3×T2 (close to the π×T2 maximum evolution time needed for the DFT to achieve a digital resolution sufficient to resolve peaks separated by their natural linewidths[1, 21]) helps ensure that resolution is determined by the magnet and sample characteristics and is not limited by sampling. Of course, the advantage when using NUS is that not all samples corresponding to the longest evolution times (the “far edges” of the sampling grid) need to be collected.
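
As a worked example (values chosen only for illustration): for T2 = 50 ms the guideline gives tmax = 3 × 50 ms = 150 ms; with a spectral width SW1 = 2500 Hz the Nyquist dwell time is 1/SW1 = 0.4 ms, so the full grid spans 150/0.4 = 375 increments, of which only the chosen fraction is actually acquired under NUS.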

4. Sampling distribution.

The Compressed Sensing theorems are based on fully incoherent, random sampling. However, random sampling leads to poor sensitivity when used to detect exponentially decaying sinusoids, so the sampling distribution should be biased to sample more of the signal power. It has been shown[16, 22] that biased (e.g., exponential) random sampling can leave large gaps in the sampling coverage that cause unwanted sampling artifacts. Approaches that utilize non-random sampling distributions to distribute gaps more evenly, eliminating large gaps, have been proposed, including Poisson Gap[23], quantile (D. Rovnyak, personal communication; manuscript in preparation), and quasi-random[24] distributions. There is not yet a consensus on the best approach.
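
For concreteness, the sketch below illustrates the idea behind sinusoidally weighted Poisson-gap scheduling: gaps between consecutive samples are drawn from a Poisson distribution whose mean is kept small early in the evolution period and allowed to grow toward the end. This is a simplified illustration only, not the reference implementation of ref. [23]; the weighting and the iterative scale adjustment are assumptions made here.

    import numpy as np

    def poisson_gap_schedule(n, n_pts, seed=0):
        """Simplified sketch of sinusoidally weighted Poisson-gap sampling.

        Gaps between consecutive samples are Poisson-distributed with a mean
        that follows a quarter-sine weighting, so gaps stay small near the
        start (where the signal is strongest) and larger gaps are pushed
        toward the end. The gap scale is adjusted until n_pts points result.
        """
        rng = np.random.default_rng(seed)
        scale = n / n_pts - 1.0                     # initial mean gap size
        for _ in range(200):                        # adjust scale to hit n_pts
            idx, i = [], 0
            while i < n:
                idx.append(i)
                gap = rng.poisson(scale * np.sin((i + 0.5) / n * np.pi / 2))
                i += 1 + gap
            if len(idx) == n_pts:
                return np.array(idx)
            scale *= len(idx) / n_pts               # too many points -> larger gaps
        return np.array(idx)                        # fallback: may miss n_pts slightly

    print(poisson_gap_schedule(n=256, n_pts=77))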

2.2. A tool for generating and analyzing sample schedules

In addition to the variety of non-Fourier reconstruction techniques available to the spectroscopist, there are numerous schemes for generating sample schedules. These schemes largely follow the same set of heuristic guidelines noted above, but their differences in implementation and the variety of configuration parameters are daunting. In addition, a systematic quantitative analysis of many popular sampling schemes has not been performed, largely because there has been no common platform for such an analysis and there is no consensus on the appropriate quantitative metrics. To address both of these needs, we introduce nus-tool, a sample schedule utility distributed through NMRbox. The goal of nus-tool is to serve as a flexible interface able to incorporate existing sampling schemes and analysis metrics as modular additions. nus-tool will facilitate the use of NUS and provide a platform for the systematic analysis of NUS schedules.

The nus-tool utility is a GUI that guides the user through sample schedule construction and provides facilities for writing a schedule to a file, reading a schedule from a file, performing basic a priori analyses, and plotting. The components of nus-tool are described below, and a screenshot of the utility is shown in Figure 1.

Figure 1:

The nus-tool GUI (left) and a sample schedule (right). The left panel of the GUI collects parameter inputs from the user and the right panel displays a status box for text output. The example schedule is full-component (i.e., the same sampling schedule is used across all 4 hypercomplex components for this 2D case). nus-tool fully supports partial-component sampling (i.e., independent sampling of each hypercomplex component) and will be useful in further characterizing the benefits it affords.

  1. generate – Sample schedules may be generated using random sampling, exponentially biased sampling, and a set of schemes based on quantiles (D. Rovnyak, personal communication) that prevent large gaps. When sample schedules are generated, the command executed behind the GUI is reported in the status window, so that it may be captured and run from a script.

  2. plot – Sample schedules may be shown in a scatter plot. Plots are currently limited to 2D, but tools for selecting 2D planes through higher dimensional schedules will be implemented. In addition, projections that sum over dimensions will be included; this will allow the sampling distribution along each axis to be visually inspected.

  3. read/write – Supported sample schedule file formats are Varian, Bruker, and RNMRTK. The NMRbox team is also developing a Nonuniform Exchange (NEX) file format; this universal format will aid workflow analysis in CONNJUR-WB[25]. A draft specification for NEX is currently included.

  4. analyze – Metrics for sensitivity (intrinsic sensitivity), resolution (mean/max evolution times), and artifact level (PSR) are reported.

Once generated, sampling schedules produced by nus-tool are used by the spectrometer to perform the experiment. Details of implementing NUS experiments vary among instruments from different manufacturers, and among different versions of operating software from the instrument vendors.

2.3. General guidelines for spectral reconstruction

Maximum entropy (MaxEnt) reconstruction belongs to the class of regularization methods (which includes compressed sensing, CS) that make no assumptions about the nature of the signals, and thus are among the most robust non-Fourier methods. The two parameters governing MaxEnt reconstruction are aim, the level of agreement between the mock data obtained by inverse DFT of the final spectrum and the empirically measured data, and def, a parameter related to the sensitivity of the spectrometer. There are two main regimes for values of aim. In the Bayesian regime, aim is comparable to the noise in the data; this regime gives smoother reconstructions and fewer false positives. In the “MINT” or “FM” regime[20, 26], the spectral reconstruction is nearly linear; although the MINT regime leads to noisier reconstructions, it is useful when quantification of peak intensities or preservation of spectral lineshapes is important.
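
For orientation, the roles of aim, def, and the Lagrange multiplier λ that appears in the Methods below can be summarized schematically as follows. This is a simplified statement of the general MaxEnt formulation[5, 28-30]; the precise form of the entropy functional and of the constraint target differ among implementations.

\hat{f} = \arg\max_{f}\; S(f;\mathrm{def}) \quad \text{subject to} \quad C(f) = \sum_{k\,\in\,\text{sampled}} \bigl|(\mathrm{IDFT}\,f)_k - d_k\bigr|^2 \;\le\; C_0(\mathrm{aim}),

where d_k are the measured data points, the entropy S is scaled by def, and the tolerance C_0 is set by aim. In practice the constrained problem is solved by maximizing the unconstrained objective

Q(f) = S(f;\mathrm{def}) - \lambda\, C(f),

where λ is the Lagrange multiplier: constant-aim mode adjusts λ until the constraint is met at the specified aim, while constant-λ mode holds λ fixed so that all planes of a multidimensional data set are scaled consistently.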

3. Methods- general spectrum analysis workflow

Here we describe a general workflow for processing data from a 3D NUS NMR experiment with MaxEnt, as implemented in the Rowland NMR Toolkit (RNMRTK). Figure 2 illustrates various steps in the workflow, and compares MaxEnt reconstruction of the NUS data with DFT of the data with zeroes used in place of FIDs not sampled (nuDFT).

Figure 2:

Steps in the workflow for processing of 3D NUS data in NMRbox. Panel A – (left) The first t1/t3 plane of the fully expanded time domain data with zeros filled in for FIDs not in the sample schedule. (right) The sample schedule indicating t2 (CACB, x axis) and t1 (N, y axis) intervals collected. The first column in the sample schedule, corresponding to the first t2 plane, corresponds to the collected FIDs on the left. Panel B – A 2D HN/CACB summed projection of the 3D nuDFT spectrum with a 1D slice corresponding to the position of the solid vertical line. Here the sampling noise summed across the projection overwhelms the signals. The same 1D slice is also shown in panels C, E, and F. Panel C – A single HN/CACB plane (plane 79) of the 3D nuDFT spectrum. While significant sampling artifacts are present, the spectrum is suitable for determining phase and other processing parameters. Panel D – The spectrum of a single FID with the longest combined evolution delay in the sample schedule, prior to extraction along the amide proton region, which is used by the program noisecalc to estimate values for the MaxEnt parameters def and aim. Panel E – The 2D HN/CACB summed projection of the automated MaxEnt reconstruction. Panel F – A single HN/CACB plane of the 3D MaxEnt reconstruction. The significant reduction in sampling noise in the MaxEnt reconstruction as compared to the nuDFT is readily apparent in Panels E and F.

3.1. Manual approach

  1. Expand – Expand the Agilent or Bruker NUS time domain data into a full multidimensional data set with zeros replacing any FIDs not in the sample schedule. This is most easily implemented with nusExpand.tcl, which is part of the NMRPipe[27] distribution and whose syntax is generally determined automatically by the “bruker” or “varian” conversion commands. Expanding the data prior to conversion offers several significant advantages, including the ability to process the data with a non-uniform DFT (nuDFT). While the nuDFT gives the poorest result, with no suppression of sampling artifacts, it is a quick way to determine important processing parameters such as data reversals, sign alterations, regions to extract, and phase values, and to validate the data conversion. In addition, an intermediate file from the nuDFT will be used later in the workflow to determine reasonable values for the MaxEnt parameters def and aim.

  2. Convert to NMRPipe – Convert the time domain data from the vendor-specific format to the NMRPipe file format. During the conversion, meta-data regarding referencing can be modified, Bruker digitally oversampled FIDs are corrected, and any data shuffling due to sensitivity enhancement in indirect dimensions is performed. The data conversion script is created automatically by the same “bruker” or “varian” commands as in Step 1.

  3. Convert to RNMRTK – Data are converted from NMRPipe to RNMRTK format with the program spectrum-translator. Note that the NMRPipe-formatted data must be in a single file for spectrum-translator to function properly.

  4. Process the spectrum with a non-uniform DFT (nuDFT) – Data are processed along all dimensions with the DFT, using traditional processing steps such as solvent suppression, data reversals, apodization, zero filling, and phasing. A key difference is that an intermediate file is saved after the nuDFT along the acquisition dimension, prior to extracting a region of interest and deleting the imaginary data. The saved intermediate data will be used in the next step to automatically determine reasonable values for the MaxEnt parameters def and aim.

  5. Determine reasonable values for def and aim – The 1D spectrum with the longest combined t1/t2 evolution time from the saved intermediate data is analyzed by the program noisecalc, which determines reasonable values for def and aim based on the RMS noise of the spectrum after purging the highest-amplitude signals and skipping the central solvent region[28, 29] (a conceptual sketch of this noise estimate is shown after this list).

  6. Perform preliminary maximum entropy reconstructions to determine λ – When t1/t2 planes of a 3D data set are processed with MaxEnt in constant-aim mode, each plane along the t3 dimension acquires a different scaling, determined by the value of λ, the Lagrange multiplier that sets the weight applied to the constraint term relative to the entropy (ref); this causes distortions in peak shapes. To resolve this, the final MaxEnt calculation is run in constant-λ[30] mode, in which all planes are scaled equally. Preliminary MaxEnt calculations on the t1/t2 planes are first performed in constant-aim mode using the def and aim values determined by the noisecalc analysis. Each plane converges to its own λ value; these values are averaged, and the average is then used to reprocess the spectrum in constant-λ mode.

  7. Process the spectrum with maximum entropy reconstruction – A MaxEnt reconstruction along all t1/t2 planes is performed in constant-λ mode using the def value from earlier and the averaged λ value from the previous step.

  8. Examine the spectrum and adjust the def multiplication factor if desired – It is often desirable to set def slightly lower than the value determined by noisecalc. Setting def lower decreases the converged λ value and increases the number of loops required for convergence. This can lead to a spectrum with a better cosmetic appearance, but if pushed too far will produce noise spikes. Good results are generally obtained by multiplying def by a factor between 1.0 and 0.1.
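
The noise-based selection of def and aim in Step 5 can be illustrated with a minimal sketch. This is a conceptual illustration only, not the noisecalc implementation; the function name, purge fraction, and solvent-region width below are arbitrary assumptions.

    import numpy as np

    def estimate_noise_floor(spectrum_1d, purge_fraction=0.02, solvent_width=0.05):
        """Conceptual sketch of a noise estimate used to seed def and aim.

        spectrum_1d    : real-valued 1D spectrum (e.g. the FID with the longest
                         combined evolution time, after DFT along the acquisition dimension)
        purge_fraction : fraction of highest-magnitude points discarded as signal
        solvent_width  : fraction of points around the carrier excluded as solvent
        """
        n = spectrum_1d.size
        center, half = n // 2, int(solvent_width * n / 2)
        mask = np.ones(n, dtype=bool)
        mask[center - half:center + half] = False               # skip central solvent region
        vals = np.sort(np.abs(spectrum_1d[mask]))
        keep = vals[: int((1.0 - purge_fraction) * vals.size)]  # purge the strongest points
        return np.sqrt(np.mean(keep ** 2))                      # RMS of what remains

def and aim would then be set in proportion to this noise estimate; the proportionality factors are implementation-specific[28, 29].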

3.2. Automated approach

The steps outlined in the above workflow can be tedious and rely on strong technical skills with the Rowland NMR Toolkit program suite. To alleviate this burden we have created a program called auto-maxent, which handles all of the steps outlined in the workflow with the exception of generating the NMRPipe file conversion script. The workflow is implemented as follows:

  1. The NMRPipe “bruker” or “varian” command is run from the directory with the raw time domain data.
    1. The “Read Parameters” button is selected.
    2. For Bruker data the checkbox “During Conversion (Normal FID)” is checked.
    3. Meta data for referencing and axis labels can be adjusted if desired.
    4. The “Save Script” button is selected to save the script as “fid.com” and the dialog box closed.
  2. The data are converted by running “auto-maxent convert”. This runs the NMRPipe conversion script, which expands the data and converts it to NMRPipe format. The data are then converted to RNMRTK format with spectrum-translator.

  3. A configuration file named process.cfg is created by running “auto-maxent setup”.

    The configuration file is populated with default values but must then be edited to ensure that processing information such as zero-fill sizes, phases, axis reversals, sign alterations, and apodization is correct, and to provide additional information such as the output file name, number of threads, and a region of interest to extract. Not all values will be known immediately; the configuration file is edited iteratively while performing nuDFTs of the spectrum.

  4. The data are processed with the nuDFT by running “auto-maxent dft”. The results are viewed in NMRDraw or contour, and values in the process.cfg configuration file are updated iteratively until a correct-looking spectrum is obtained, albeit with significant sampling noise present.

  5. Once the nuDFT produces a spectrum with the correct processing parameters and a region of interest has been selected, the data are processed with MaxEnt reconstruction by running “auto-maxent maxent”. The program performs Steps 5–7 of the workflow above: FIDs are analyzed for reasonable values of def and aim, preliminary MaxEnt calculations are performed on t1/t2 planes to determine λ, and then the whole spectrum is processed with MaxEnt in constant-λ mode.

  6. The last step is to check the maximum number of loops required for convergence and adjust the def multiplier if desired. Typically, the def multiplier is set between 1.0 and 0.1, and weaker signals should be examined carefully and compared across the different def multipliers. Note that when the def multiplier is set low the spectrum may look cosmetically better, but 1D strips should also be examined to be sure the noise has a normal distribution and does not become spiky, which would indicate that def was set too low.

All software described here, as well as sample data sets, sampling schedules and a detailed step-by-step tutorial are available on the NMRbox platform. Access to NMRbox is free for academic and non-profit users by visiting https://nmrbox.org. NMRbox is provided by the National Center for Biomolecular NMR Data Processing and Analysis, an NIH/NIGMS Biomedical Technology Research Resource.

Acknowledgements

We thank Alan S. Stern, Gerard Weatherby, Frank Delaglio, David Rovnyak, and Levi Craft for useful discussions and technical support. Support for NMRbox from the US National Institutes of Health (via grant P41GM111135) is gratefully acknowledged. Support from NIH (via grant R21GM104517) for research on MaxEnt reconstruction is also gratefully acknowledged.

References

  1. Hoch JC and Stern AS (1996) NMR Data Processing. Wiley-Liss, New York.
  2. Jeener J (1971) Oral presentation, Ampere International Summer School, Basko Polje, Yugoslavia.
  3. Barna JCJ and Laue ED (1987) Conventional and exponential sampling for 2D NMR experiments with application to a 2D NMR spectrum of a protein. J Magn Reson 75:384–389.
  4. Barna JCJ, Laue ED, Mayger MR, Skilling J and Worrall SJP (1987) Exponential sampling, an alternative method for sampling in two-dimensional NMR experiments. J Magn Reson 73:69–77.
  5. Skilling J and Bryan RK (1984) Maximum entropy image reconstruction: general algorithm. Mon Not R Astr Soc 211:111–124.
  6. Donoho DL (2006) Compressed sensing. IEEE Trans Inf Theory 52:1289–1306.
  7. Mobli M and Hoch JC (2014) Nonuniform sampling and non-Fourier signal processing methods in multidimensional NMR. Prog Nucl Magn Reson Spectrosc 83C:21–41. doi: 10.1016/j.pnmrs.2014.09.002
  8. Mobli M, Maciejewski MW, Schuyler AD, Stern AS and Hoch JC (2012) Sparse sampling methods in multidimensional NMR. Phys Chem Chem Phys 14:10835–43. doi: 10.1039/c2cp40174f
  9. Hyberts SG, Arthanari H and Wagner G (2012) Applications of non-uniform sampling and processing. Top Curr Chem 316:125–48. doi: 10.1007/128_2011_187
  10. Kazimierczuk K and Orekhov V (2015) Non-uniform sampling: post-Fourier era of NMR data collection and processing. Magn Reson Chem 53:921–6. doi: 10.1002/mrc.4284
  11. Donoho DL and Tanner J (2009) Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing. Phil Trans R Soc A 367:4273–4293.
  12. Stern AS and Hoch JC (2015) A new approach to compressed sensing for NMR. Magn Reson Chem 53:908–12. doi: 10.1002/mrc.4287
  13. Stern AS, Donoho DL and Hoch JC (2007) NMR data processing using iterative thresholding and minimum l1-norm reconstruction. J Magn Reson 188:295–300.
  14. Lustig M, Donoho D and Pauly JM (2007) Sparse MRI: the application of compressed sensing for rapid MR imaging. Magn Reson Med 58:1182–1195.
  15. Schuyler AD, Maciejewski MW, Stern AS and Hoch JC (2013) Formalism for hypercomplex multidimensional NMR employing partial-component subsampling. J Magn Reson 227:20–4. doi: 10.1016/j.jmr.2012.11.019
  16. Palmer MR, Wenrich BR, Stahlfeld P and Rovnyak D (2014) Performance tuning non-uniform sampling for sensitivity enhancement of signal-limited biological NMR. J Biomol NMR 58:303–14. doi: 10.1007/s10858-014-9823-5
  17. Hoch JC, Maciejewski MW and Filipovic B (2008) Randomization improves sparse sampling in multidimensional NMR. J Magn Reson 193:317–20.
  18. Monajemi H (2016) Phase Transitions in Deterministic Compressed Sensing, with Application to Magnetic Resonance Spectroscopy. Stanford University.
  19. Schuyler AD, Maciejewski MW, Arthanari H and Hoch JC (2011) Knowledge-based nonuniform sampling in multidimensional NMR. J Biomol NMR 50:247–62.
  20. Paramasivam S, Suiter CL, Hou GJ, Sun SJ, Palmer M, Hoch JC, Rovnyak D and Polenova T (2012) Enhanced sensitivity by nonuniform sampling enables multidimensional MAS NMR spectroscopy of protein assemblies. J Phys Chem B 116:7416–7427. doi: 10.1021/jp3032786
  21. Rovnyak D, Sarcone M and Jiang Z (2011) Sensitivity enhancement for maximally resolved two-dimensional NMR by nonuniform sampling. Magn Reson Chem 49:483–491.
  22. Hyberts SG, Takeuchi K and Wagner G (2010) Poisson-gap sampling and forward maximum entropy reconstruction for enhancing the resolution and sensitivity of protein NMR data. J Am Chem Soc 132:2145–7. doi: 10.1021/ja908004w
  23. Hyberts SG, Milbradt AG, Wagner AB, Arthanari H and Wagner G (2012) Application of iterative soft thresholding for fast reconstruction of NMR data non-uniformly sampled with multidimensional Poisson Gap scheduling. J Biomol NMR 52:315–27. doi: 10.1007/s10858-012-9611-z
  24. Worley B and Powers R (2015) Deterministic multidimensional nonuniform gap sampling. J Magn Reson 261:19–26. doi: 10.1016/j.jmr.2015.09.016
  25. Fenwick M, Weatherby G, Vyas J, Sesanker C, Martyn TO, Ellis HJ and Gryk MR (2015) CONNJUR Workflow Builder: a software integration environment for spectral reconstruction. J Biomol NMR 62:313–26. doi: 10.1007/s10858-015-9946-3
  26. Hyberts SG, Heffron GJ, Tarragona NG, Solanky K, Edmonds KA, Luithardt H, Fejzo J, Chorev M, Aktas H, Colson K, Falchuk KH, Halperin JA and Wagner G (2007) Ultrahigh-resolution 1H-13C HSQC spectra of metabolite mixtures using nonlinear sampling and forward maximum entropy reconstruction. J Am Chem Soc 129:5108–16.
  27. Delaglio F, Grzesiek S, Vuister GW, Zhu G, Pfeifer J and Bax A (1995) NMRPipe: a multidimensional spectral processing system based on UNIX pipes. J Biomol NMR 6:277–93.
  28. Mobli M, Maciejewski MW, Gryk MR and Hoch JC (2007) An automated tool for maximum entropy reconstruction of biomolecular NMR spectra. Nat Methods 4:3–4.
  29. Mobli M, Maciejewski MW, Gryk MR and Hoch JC (2007) Automatic maximum entropy spectral reconstruction in NMR. J Biomol NMR 39:133–139.
  30. Schmieder P, Stern AS, Wagner G and Hoch JC (1997) Quantification of maximum entropy spectrum reconstructions. J Magn Reson 125:332–339.
