Author manuscript; available in PMC: 2017 Apr 27.
Published in final edited form as: J Proteome Res. 2016 Oct 4;15(12):4763–4769. doi: 10.1021/acs.jproteome.6b00744

An Automated Pipeline to Monitor System Performance in Liquid Chromatography Tandem Mass Spectrometry Proteomic Experiments

Michael S Bereman 1,*, Joshua Beri 2, Vagisha Sharma 3, Cory Nathe 4, Josh Eckels 4, Brendan MacLean 3, Michael J MacCoss 3
PMCID: PMC5406750  NIHMSID: NIHMS856330  PMID: 27700092

Abstract

We report the development of a completely automated pipeline to monitor system suitability in bottom-up proteomic experiments. LC MS/MS runs are automatically imported into Skyline, and multiple identification-free metrics are extracted from targeted peptides. These data are then uploaded to the Panorama Skyline document repository, where the metrics can be viewed in a web-based interface using powerful process control techniques, including Levey-Jennings and Pareto plots. The interface is versatile and takes user input, giving the user significant control over the visualization of the data. The pipeline is vendor and instrument type neutral, supports multiple acquisition techniques (e.g., MS1 filtering, data independent acquisition, parallel reaction monitoring, and selected reaction monitoring), can track the performance of multiple instruments, and requires no manual intervention aside from initial setup. Data can be viewed from any computer with internet access and a web browser, facilitating sharing of QC data between researchers. Herein we describe the use of this pipeline, termed Panorama AutoQC, to evaluate LC MS/MS performance in a range of scenarios: identification of suboptimal instrument performance, evaluation of ultra-high-pressure chromatography, and identification of the major sources of variation throughout years of peptide data collection.

Keywords: System Suitability, Process Control, Proteomics, Tandem Mass Spectrometry, Control Charts

Graphical abstract


Introduction

An essential need within the proteomics community is the development of robust workflows to monitor LC MS/MS instrument performance in a longitudinal fashion.1-7 Two questions arise: 1) How does instrument performance vary within an experiment (intra-experiment variability), and 2) how does it compare between experiments (inter-experiment variability)? Providing quantitative answers to these questions requires the development and implementation of a consistent framework comprising appropriate peptide standards,8 suitable quality metrics, appropriate statistics, and the capacity to collect, store, and view data in a longitudinal fashion. These endeavors are becoming increasingly important as mass spectrometry-based proteomics continues to expand beyond the traditional field of analytical chemistry, as liquid chromatography-mass spectrometry systems become more advanced, and as LC MS/MS-based protein assays move toward routine clinical use, where quality control metrics for system and assay suitability are required.9,10

Most system suitability workflows begin with the selection of a peptide standard that is analyzed at fixed time intervals.1 These standards may be run prior to and at the end of an experiment, once a day, or systematically throughout a study. The complexity of this standard varies and can range from neat peptides to a protein digest to a complex cell lysate.5,6 In addition, the choice of a suitable standard often depends on the nature of the proteomics experiment (i.e., targeted vs. discovery). Next, the user must identify appropriate metrics that are indicative of instrument performance. System suitability metrics in shotgun proteomic experiments fall into two categories: 1) those requiring a database search (i.e., peptide identification metrics) and 2) metrics that can be extracted from the data directly (i.e., peptide identification-free metrics). Peptide identification metrics include the total number of peptide spectral matches, peptide identifications, or protein groups. Peptide identification-free metrics comprise more fundamental analytical figures of merit (retention time, peak area, full width at half maximum, peak asymmetry, mass measurement accuracy, etc.) and are often extracted on a targeted peptide basis. While peptide identification metrics have the advantage of evaluating the search pipeline, they do little to pinpoint the specific causes of performance deterioration and may miss changes that would impact quantification. For example, a decrease in the total number of peptide spectral matches could be caused by decreased chromatographic performance, loss of overall sensitivity, or loss of mass calibration. In addition, because the tracking of these metrics requires a database search, they do not lend themselves to real-time monitoring or early detection of suboptimal instrument performance. By tracking fundamental analytical figures of merit, these issues are more readily detected and isolated.1,11

Late identification of suboptimal instrument performance in a proteomics experiment (i.e., after completing data acquisition, or even after publication) is costly (e.g., wasted precious biological samples, lost instrument time, incorrect biological conclusions) and less easily rectified than problems detected early (i.e., during data acquisition). This high cost of delayed identification of poor performance is also observed in other industries (e.g., manufacturing, pharmaceutical). These fields have implemented a well-established methodology called statistical process control (SPC), whose primary focus is continuous improvement of process output via early detection followed by determination of the cause(s) of performance deterioration. The fundamental tool in SPC is the control chart, which plots output as a function of time and can be used to identify when a process drifts outside acceptable limits.

We were the first to implement techniques founded in statistical process control12,13 to monitor system performance in LC MS/MS-based proteomic experiments.11 In that initial report, Statistical Process Control in Proteomics (SProCoP) was implemented as an external tool14 within Skyline.15 Upon manual import of peptide standard data, LC MS/MS performance could be tracked on the local system using a combination of control charts, boxplots, and Pareto analysis. We greatly expand on this initial report with the development of a pipeline referred to as Panorama AutoQC, which includes: 1) full automation of the SProCoP data pipeline from completion of data acquisition to data visualization; 2) longitudinal data storage and tracking; and 3) integration within the Panorama16 data management system. Panorama provides an interactive environment in which the user can zoom into specific time periods, remove files with gross errors, view peptide data in multiple ways, annotate runs that follow changes to hardware or software, control the determination of guide/reference sets used to establish statistical thresholds, and share QC data with other scientists worldwide through a secure web site interface. In addition, the user can easily view, in Panorama or Skyline, the chromatograms from which the metrics were derived. Owing to its ease of use, automation, and versatility, we strongly believe this pipeline will encourage more widespread adoption of system suitability procedures in proteomic laboratories. Herein, we describe the use of Panorama AutoQC to detect suboptimal LC MS/MS performance, evaluate various LC conditions, track performance longitudinally, and assess major sources of variation over years of liquid chromatography-mass spectrometry data collection.

Materials and Methods

Materials

Formic acid, ammonium bicarbonate, dithiothreitol, iodoacetamide, and bovine serum albumin were obtained from Sigma Aldrich (St. Louis, MO). Proteomics grade trypsin was purchased from Promega (Madison, WI). HPLC grade acetonitrile (ACN) and water were purchased from Burdick & Jackson (Muskegon, MI).

Methods

A stock peptide standard was created by digesting bovine serum albumin purchased from Sigma Aldrich (St. Louis, MO). The standard was then diluted and pipetted into 3000 × 30 μL (50 fmol/μL) aliquots and stored at -20 °C until use. Standards were analyzed every 4th or 5th injection throughout a range of bottom-up proteomic experiments (e.g., cellular lysates, biological fluids, tissue) over a period spanning 28 months (March 2014 to July 2016) on a quadrupole orbitrap mass spectrometer (Q-Exactive Plus, Bremen, Germany) coupled to an EasyNano LC 1000 (San Jose, CA). The LC MS/MS method consisted of a 50 minute run time during which mobile phase B (100% ACN with 0.1% formic acid) was ramped from 0% to 40% over 30 minutes, then ramped to 80% over 2 minutes and held at 80% B for the next 6 minutes. The column was re-equilibrated for the final 12 minutes at 98% A (98/2 water/ACN with 0.1% formic acid). The scan cycle consisted of an MS1 full scan (70,000 resolving power) from m/z 400 to 1400 followed by 6 parallel reaction monitoring (PRM) scans (17,500 resolving power) that targeted 6 peptides from serum albumin, which in our laboratory have been found to be stable.

Panorama AutoQC

The Panorama AutoQC pipeline is initialized by specifying 1) a template Skyline document with QC peptides into which data files should be imported as they are acquired, 2) the local folder where QC data files are written, and 3) a folder on Panorama where the data should be uploaded. After this initial setup, the pipeline runs fully automatically and comprises three components: Skyline,15 Panorama,16 and AutoQC Loader, a utility program that automates the processing and uploading of QC results from the instrument computer to a Panorama server.

AutoQC Loader is a stand-alone program that can be installed and run on instrument control computers to automatically import QC data files into a Skyline document and upload it to Panorama. It detects newly acquired files in the user-specified data folder, and launches Skyline with command-line arguments, without showing a Skyline window, to automatically add the data files to the user-specified Skyline template document. After successful data import, the Skyline document is uploaded to a folder on Panorama. On the Panorama server, the Skyline document is parsed and results from new data files are added to existing QC data in the folder.
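The watch-then-import loop that AutoQC Loader automates can be sketched as follows. This is a minimal Python illustration, not the actual implementation; the executable name, command-line flags, polling interval, and watched file extensions are assumptions for the sketch, and the Panorama upload step is omitted:

```python
import subprocess
import time
from pathlib import Path

# Vendor raw-data extensions the watcher looks for (illustrative list).
RAW_EXTS = {".raw", ".wiff", ".d", ".mzml"}

def find_new_files(folder, seen, exts=RAW_EXTS):
    """Return data files in `folder` that have not been processed yet."""
    current = {p for p in Path(folder).iterdir() if p.suffix.lower() in exts}
    return sorted(current - seen)

def import_into_skyline(template, data_file):
    """Launch Skyline headlessly to append one QC run to the template document.
    The executable and flag names here are illustrative, not guaranteed to
    match the shipping Skyline command-line interface."""
    subprocess.run(
        ["SkylineRunner.exe",
         f"--in={template}",
         f"--import-file={data_file}",
         "--save"],
        check=True)

def watch(folder, template, poll_seconds=60):
    """Poll the QC folder and import each newly acquired file exactly once."""
    seen = set()
    while True:
        for f in find_new_files(folder, seen):
            import_into_skyline(template, f)
            seen.add(f)
        time.sleep(poll_seconds)
```

In practice AutoQC Loader also waits for acquisition to finish before importing a file and then pushes the updated document to Panorama; the sketch above shows only the detection and headless-import pattern.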

Panorama's support for organizing data into projects and folders lets researchers manage and visualize their QC data by instrument, project or any other criteria of their choosing, with the ability to control who has access to edit or view each folder. Summary information on a QC dashboard gives an overview of the pipeline status in a folder or across several folders.

AutoQC Loader can be downloaded and installed from http://skyline.gs.washington.edu/software/AutoQC/. Source code for AutoQC Loader is available through the ProteoWizard SourceForge repository (https://sourceforge.net/p/proteowizard/code/HEAD/tree/trunk/) under the Skyline project. Panorama is developed as a module in the LabKey Server biomedical data management platform (http://www.ncbi.nlm.nih.gov/pubmed/21385461), which is open source under the Apache 2.0 license. The complete source code can be downloaded from https://www.labkey.org/wiki/home/Documentation/page.view?name=sourceCode.

Results and Discussion

Figure 1 describes the steps and overall workflow for evaluating system performance using the Panorama AutoQC pipeline. First, a template Skyline document must be created that includes a list of peptides (approximately 10-20) from which quality metrics are collected. AutoQC Loader is then installed on the instrument computer. The user provides the program with the local folder where files for assessing system performance are saved (Figure 1B) and the Panorama server folder into which the data should be uploaded (Figure 1C). Upon completion of each newly acquired data file, AutoQC Loader detects its addition to the designated local QC folder and initiates the automatic import of the data into Skyline, where the QC metrics are extracted. These processed results are then uploaded to the Panorama data management system, where the new data are added to previously acquired QC data, creating a longitudinal profile of instrument performance. This profile can be viewed using powerful SPC techniques to identify suboptimal intra- or inter-experiment performance and to evaluate new chromatographic conditions or new technologies. An administrator can grant permissions to facilitate sharing of QC data, allowing others to view the data simply by logging into the Panorama data management system.

Figure 1.

Figure 1

A) Overall design of the Panorama AutoQC pipeline. A Skyline template file is first created which targets a number of well characterized peptides to monitor for system suitability. An experiment is designed with appropriate controls and peptide QC data is automatically uploaded to Panorama where performance can be assessed. B) Panorama AutoQC Loader settings require the path to a Skyline template document where QC files will be imported and the folder where QC runs will be acquired, along with the type of instrument used to acquire the data. C) Panorama settings require the URL of the Panorama server and the path to a folder on the server where data will be uploaded. Also required are the login credentials of a user who has permissions to access the folder on Panorama.

The basic layout of the Panorama AutoQC interface is outlined in Figure 2. Figure 2A displays the main navigation links, which are customizable. These links can be used to view the QC data (Dashboard); manipulate the uploaded runs (e.g., change their order, delete files) and add annotations marking changes to instrumentation; establish guide sets for assessing outliers in the data; view the status of the pipeline; and view a Pareto plot that indicates the most variable metrics. With regard to guide sets for creating thresholds, there is no consensus on their proper number, timing, or placement. In our laboratory we typically use at least 10 standards to establish thresholds. We suggest establishing a new guide set after a new column and/or trap is installed, after instrument calibration, after preventative maintenance, or after any hardware change.
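The thresholding that a guide set enables can be sketched in a few lines of Python. This is a minimal illustration assuming the standard Levey-Jennings convention of control limits at 1, 2, and 3 standard deviations from the guide-set mean; the function names are hypothetical:

```python
from statistics import mean, stdev

def guide_set_limits(values):
    """Derive Levey-Jennings control limits from a guide set of QC runs
    (the paper suggests at least 10 standards per guide set)."""
    m, s = mean(values), stdev(values)
    limits = {"mean": m}
    for n in (1, 2, 3):
        limits[f"+{n}sd"] = m + n * s
        limits[f"-{n}sd"] = m - n * s
    return limits

def is_outlier(value, limits):
    """A point beyond 3 sd of the guide-set mean counts as a non-conformer."""
    return not (limits["-3sd"] <= value <= limits["+3sd"])
```

For example, retention times from the first ten standards after a column change would form the guide set, and every subsequent run would be checked against the resulting limits.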

Figure 2.

Figure 2

A screenshot of the user interface for viewing QC data in Panorama. A) The main toolbar with navigational links. B) Controls that accept user input to modify the displayed output. C) Levey-Jennings plots. D) Summary of uploaded QC data on multiple instruments.

The dashboard is the main page, serving as the primary user interface, and displays the longitudinal data. This page may be configured to display the name of the laboratory and identifies which instrument acquired the displayed data. Figure 2B shows the controls that determine the data displayed in the Levey-Jennings plots (Figure 2C). The user can choose from 8 different peptide identification-free metrics (figure inset) that describe the performance of the liquid chromatography separation, the MS1 full scans, and the tandem MS scans. The user can also change the date range of the displayed data, view control charts with a logarithmic or linear y-axis, and view individual peptides or all peptides on the same control chart, as shown in Figure 2C. Annotations are displayed directly on the control charts and denoted by a color-customizable “x”. Hovering the mouse over the “x” displays the annotation text entered on the annotation tab. Finally, the QC data are summarized in Figure 2D, which displays the number of documents and files uploaded, allowing pipeline status information to be displayed for multiple folders at a time.

Detecting suboptimal performance

Poor reproducibility or efficiency of precursor fragmentation can have deleterious effects on peptide identification and quantitation by LC MS/MS because either decreases the abundance of fragment ions, thus decreasing the probability of spectral identification. Two main problems related to the mass spectrometer can affect fragment ion abundance: 1) the effectiveness of precursor peptide isolation and 2) the efficiency of fragmentation of the isolated peptide. To date, there are no metrics to longitudinally assess the variability in this process, though it is often qualitatively evaluated via the number of peptide spectral matches or peptide identifications.

A new metric implemented in the Panorama AutoQC pipeline is the ratio of the transition peak area (T area) to the precursor peak area (P area). This metric is only calculated if the scan cycle consists of an MS1 scan followed by targeted PRM scans. The metric provides new visibility into isolation and fragmentation efficiency and reproducibility. Figure 3 shows this metric for two different peptides over 6 months of data collection. A systematic trend (slope of linear regression, p < 0.05) toward lower values is apparent during the first 4 months for both peptides. After observation of this trend, data collection was stopped and basic preventative maintenance was performed, as noted in the annotation. Upon resuming data collection, we observed a significant increase in the value of the metric, subsequently followed by a period of steady performance (slope of linear regression, p > 0.5). To identify the reason for the increase in this ratio, the P area and T area were examined separately. The P area for both peptides was similar (<10% change) before and after maintenance; however, a significant increase in the T area (+268% and +228%) was observed.
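As a sketch of how this metric behaves, the Python snippet below computes the transition/precursor ratio for a single run and the ordinary least-squares slope of the ratio across a series of runs. The function names are illustrative, and the significance test on the slope used in the paper is omitted for brevity:

```python
def transition_precursor_ratio(transition_areas, precursor_areas):
    """Ratio of summed transition (MS2) peak area to summed precursor (MS1)
    peak area for one QC run; a sustained drop suggests degraded isolation
    or fragmentation."""
    return sum(transition_areas) / sum(precursor_areas)

def trend_slope(ratios):
    """Least-squares slope of the metric versus run index; a significantly
    negative slope flags a systematic downward trend."""
    n = len(ratios)
    x_bar = (n - 1) / 2
    y_bar = sum(ratios) / n
    num = sum((x - x_bar) * (y - y_bar) for x, y in enumerate(ratios))
    den = sum((x - x_bar) ** 2 for x in range(n))
    return num / den
```

A declining series of ratios yields a negative slope, which is the pattern that prompted the preventative maintenance described above.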

Figure 3.

Figure 3

Early detection of suboptimal performance. The ratio of the transition to precursor peak area is shown over 6 months of data collection for two representative peptides. A significant downward trend was observed and preventative maintenance was performed followed by a subsequent increase to values above the average. The mean of the guide set, 1 sd, 2 sd, and 3 sd are represented by the gray, green, blue, and red lines, respectively. The figure is a representative screenshot of the true image displayed by Panorama; however, axes and labels were modified to improve readability.

Evaluating Technologies

The current state of high throughput proteomics is the result of enormous technological advances across the field, from sample preparation and instrumentation to bioinformatics and software. Because the number of peptide identifications is correlated with peak capacity17 in a nanoLC MS/MS experiment, an area of intense research is the advancement of online separation methods that improve throughput without sacrificing peak capacity. A potential use of the Panorama AutoQC pipeline is to evaluate the performance of different technologies in an unbiased historical context. Our laboratory was interested in the improvement in peak capacity afforded by a reduction in stationary phase particle size from the traditional 3 μm diameter particles to 1.9 μm diameter particles. Figure 4 displays a control chart of the full width at half maximum (fwhm) of 6 peptides plotted across 3 weeks of data acquisition. The periods during which the LC was fitted with columns packed with either 1.9 or 3 μm C18 particles are noted in the annotation dialog boxes. Overall, we observed a 2-fold reduction in fwhm, from 3.6 to 1.8 seconds, upon transitioning to the smaller 1.9 μm diameter particles. Theoretically, this improvement equates to a 2-fold increase in peak capacity in complex samples.
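The 2-fold claim can be checked with a common back-of-the-envelope model of gradient peak capacity, n_c ≈ 1 + t_g/w_b, where t_g is the gradient time and w_b = 4σ is the base peak width (with fwhm = 2.355σ for a Gaussian peak). This is a textbook approximation applied here for illustration, not the calculation the authors performed:

```python
def peak_capacity(gradient_seconds, fwhm_seconds):
    """Approximate gradient peak capacity n_c = 1 + t_g / w_b,
    where the base peak width w_b = 4*sigma and fwhm = 2.355*sigma."""
    w_base = 4 * fwhm_seconds / 2.355
    return 1 + gradient_seconds / w_base

# Halving fwhm (3.6 s -> 1.8 s over the paper's 30 min gradient)
# roughly doubles the peak capacity:
nc_3um = peak_capacity(30 * 60, 3.6)   # 3 um particles
nc_19um = peak_capacity(30 * 60, 1.9 and 1.8)  # 1.9 um particles (fwhm 1.8 s)
```

Since n_c is (to within the additive 1) inversely proportional to fwhm, any halving of peak width doubles the theoretical peak capacity regardless of the gradient length.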

Figure 4.

Figure 4

A control chart of fwhm (min) of 6 selected peptides. An annotation was created to note the switch to a smaller diameter particle (1.9 μm) in the control chart displaying fwhm. Another annotation noted the switch back to the standard particle diameter (3 μm) used in our laboratories. The figure is a representative screenshot of the true image displayed by Panorama; however, axes and labels were modified to improve readability.

Detecting Major Sources of Variation in LC MS/MS

A major advantage of Panorama AutoQC is the capability to store and visualize QC data in a longitudinal fashion, which allows one to determine the range of technical performance over time. Figure 5 displays data from 992 standards, representing 48,000 combined metrics, collected over more than two years of peptide analysis on a quadrupole orbitrap coupled to a nanoLC. The control charts for a single peptide across (A) retention time, (B) peak area, and (C) fwhm are displayed. For this analysis, the guide sets (the range of runs used to determine the mean and standard deviation of each metric) were established during the first 4 months of data acquisition to account for the inherent variability in the use of columns with different lengths, emitter placement, and other uncontrollable factors (e.g., variations in lab temperature and humidity). Chromatographic parameters were found to be reproducible (retention time and peak full width at half maximum %RSD = 9.98% and 25.08%, respectively); however, outliers and even gross errors are readily apparent from the control charts. Integrated peptide intensities were more variable (peak area %RSD = 57.74%). The Pareto plot summarizes the variability in the 8 metrics over this period and indicates that over 50% of the outliers (points falling greater than 3 standard deviations from the mean) are attributed to the isolation/fragmentation event (i.e., transition area and transition/precursor ratio). These data are not surprising, as the MS2 scan tends to be more variable simply because of its complexity and its dependence on the efficiency of isolation and dissociation, which are affected by multiple parameters (e.g., calibration, cleanliness, collision energy). Mass measurement accuracy (MMA) and chromatographic peak width (fwhm) were reproducible and represented only 8% of the total number of non-conformers.
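The Pareto summary in Figure 5D amounts to counting, per metric, the points falling more than 3 standard deviations from that metric's mean and ranking metrics by their share of the total. A minimal Python sketch follows; the function name is hypothetical, and for simplicity the mean and standard deviation are computed from the full series rather than from a separate guide set as in the paper:

```python
from statistics import mean, stdev

def pareto_outliers(metric_series):
    """Given {metric name: list of longitudinal values}, count non-conformers
    (points more than 3 sd from the metric's mean) and rank metrics by their
    share of total outliers, as in a Pareto plot.
    Returns (name, count, percent_of_total) tuples, largest first."""
    counts = {}
    for name, values in metric_series.items():
        m, s = mean(values), stdev(values)
        counts[name] = sum(1 for v in values if abs(v - m) > 3 * s)
    total = sum(counts.values()) or 1  # avoid division by zero
    return sorted(((n, c, 100 * c / total) for n, c in counts.items()),
                  key=lambda t: t[1], reverse=True)
```

Feeding all 8 tracked metrics into such a ranking reproduces the kind of ordering shown in Figure 5D, with the isolation/fragmentation metrics contributing the largest share of non-conformers.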

Figure 5.

Figure 5

Assessment of longitudinal performance of a single peptide in terms of A) RT; B) peak area; and C) fwhm. D) Pareto analysis identifying the major sources of variation over 28 months of data acquisition in proteomic type nanoLC tandem mass spectrometry experiments. T Area = sum of integrated areas of all transitions. P Area = sum of integrated areas of precursors (M, M+1, and M+2). The mean of the guide set, 1 sd, 2 sd, and 3 sd are represented by the gray, green, blue, and red lines, respectively. The figure is a representative screenshot of the true image displayed by Panorama; however, axes and labels were modified to improve readability.

Thus far, the Panorama AutoQC pipeline has been used in a reactionary manner: peptide QC data are collected, plotted, and deviations from previous data are noted on an inspection basis. If the variations appear to be related to special causes, one can stop data collection and investigate. However, the power of SPC lies not only in its emphasis on systematic evaluation and early detection, but also in the detection and minimization of major sources of variation so that quality (i.e., robustness) is continually improved. Before these benefits can be realized, suitable standards combined with sensitive and specific metrics must be tracked in a longitudinal manner. The capabilities of Panorama AutoQC, combined with its seamless integration into existing proteomic workflows, offer a powerful approach for routine implementation of system suitability procedures, evaluation of the effectiveness of new technologies, and a thorough understanding of the major sources of variation in peptide mass spectral data. Ultimately, we envision this pipeline as an initial step toward implementing more advanced SPC techniques and moving the field to a more preemptive approach to experiments, analogous to the Quality by Design paradigm suggested by others.3

Conclusions

Panorama AutoQC is a versatile, vendor neutral pipeline for assessing instrument performance in LC MS/MS-based proteomic experiments. It visualizes fundamental analytical figures of merit extracted from targeted peptides using tools founded in SPC. The pipeline has been used to investigate and evaluate several scenarios encountered in our laboratory. Future research will implement multivariate techniques for data summary and evaluate new ways to establish empirical thresholds based on historical performance. Ultimately, we hope that continued research into overall quality control in proteomics experimentation, from the development of new standards to the exploration of new metrics and visualization tools, will aid in the coalescence around a single procedure to be used by the proteomics field. We believe that Panorama AutoQC, with its ease of adoption and automated collection of system suitability metrics into a central repository, will accelerate progress in this area of research. Data from this manuscript, including the peptides monitored, are publicly available (www.tinyurl.com/autoQC-data).

Acknowledgments

MSB is thankful for startup funds provided by NC State University and support from the Center for Human Health and Environment (P30 ES025128). This work was supported in part by National Institutes of Health grants R01 GM103551 and P41 GM103533. Authors also acknowledge support from the Panorama Partners Program.

Footnotes

Conflict of Interest: The authors declare no conflict of interest.

References

  • 1. Bereman MS. Tools for monitoring system suitability in LC MS/MS centric proteomic experiments. Proteomics. 2015;15(5-6):891–902. doi: 10.1002/pmic.201400373.
  • 2. Taylor RM, Dance J, Taylor RJ, Prince JT. Metriculator: quality assessment for mass spectrometry-based proteomics. Bioinformatics. 2013;29(22):2948–9. doi: 10.1093/bioinformatics/btt510.
  • 3. Tabb DL. Quality assessment for clinical proteomics. Clinical Biochemistry. 2013;46(6):411–420. doi: 10.1016/j.clinbiochem.2012.12.003.
  • 4. Ma ZQ, Polzin KO, Dasari S, Chambers MC, Schilling B, Gibson BW, Tran BQ, Vega-Montoto L, Liebler DC, Tabb DL. QuaMeter: Multivendor Performance Metrics for LC-MS/MS Proteomics Instrumentation. Analytical Chemistry. 2012;84(14):5845–5850. doi: 10.1021/ac300629p.
  • 5. Pichler P, Mazanek M, Dusberger F, Weilnbock L, Huber CG, Stingl C, Luider TM, Straube WL, Kocher T, Mechtler K. SIMPATIQCO: A Server-Based Software Suite Which Facilitates Monitoring the Time Course of LC-MS Performance Metrics on Orbitrap Instruments. Journal of Proteome Research. 2012;11(11):5540–5547. doi: 10.1021/pr300163u.
  • 6. Kocher T, Pichler P, Swart R, Mechtler K. Quality control in LC-MS/MS. Proteomics. 2011;11(6):1026–1030. doi: 10.1002/pmic.201000578.
  • 7. Matzke MM, Waters KM, Metz TO, Jacobs JM, Sims AC, Baric RS, Pounds JG, Webb-Robertson BJ. Improved quality control processing of peptide-centric LC-MS proteomics data. Bioinformatics. 2011;27(20):2866–72. doi: 10.1093/bioinformatics/btr479.
  • 8. Beri J, Rosenblatt MM, Strauss E, Urh M, Bereman MS. Reagent for Evaluating Liquid Chromatography–Tandem Mass Spectrometry (LC-MS/MS) Performance in Bottom-Up Proteomic Experiments. Analytical Chemistry. 2015;87(23):11635–11640. doi: 10.1021/acs.analchem.5b04121.
  • 9. Westgard JO. Statistical Quality Control Procedures. Clinics in Laboratory Medicine. 2013;33(1):111–124. doi: 10.1016/j.cll.2012.10.004.
  • 10. Eggert AA, Westgard JO, Barry PL, Emmerich KA. Implementation of a multirule, multistage quality control program in a clinical laboratory computer system. J Med Syst. 1987;11(6):391–411. doi: 10.1007/BF00993007.
  • 11. Bereman MS, Johnson R, Bollinger J, Boss Y, Shulman N, MacLean B, Hoofnagle AN, MacCoss MJ. Implementation of statistical process control for proteomic experiments via LC MS/MS. J Am Soc Mass Spectrom. 2014;25(4):581–7. doi: 10.1007/s13361-013-0824-5.
  • 12. Bramwell D. An introduction to statistical process control in research proteomics. Journal of Proteomics. 2013;95:3–21. doi: 10.1016/j.jprot.2013.06.010.
  • 13. Jackson D, Bramwell D. Application of clinical assay quality control (QC) to multivariate proteomics data: a workflow exemplified by 2-DE QC. J Proteomics. 2013;95:22–37. doi: 10.1016/j.jprot.2013.07.025.
  • 14. Broudy D, Killeen T, Choi M, Shulman N, Mani DR, Abbatiello SE, Mani D, Ahmad R, Sahu AK, Schilling B, Tamura K, Boss Y, Sharma V, Gibson BW, Carr SA, Vitek O, MacCoss MJ, MacLean B. A framework for installable external tools in Skyline. Bioinformatics. 2014;30(17):2521–3. doi: 10.1093/bioinformatics/btu148.
  • 15. MacLean B, Tomazela DM, Shulman N, Chambers M, Finney GL, Frewen B, Kern R, Tabb DL, Liebler DC, MacCoss MJ. Skyline: an open source document editor for creating and analyzing targeted proteomics experiments. Bioinformatics. 2010;26(7):966–8. doi: 10.1093/bioinformatics/btq054.
  • 16. Sharma V, Eckels J, Taylor GK, Shulman NJ, Stergachis AB, Joyner SA, Yan P, Whiteaker JR, Halusa GN, Schilling B, Gibson BW, Colangelo CM, Paulovich AG, Carr SA, Jaffe JD, MacCoss MJ, MacLean B. Panorama: A Targeted Proteomics Knowledge Base. Journal of Proteome Research. 2014;13(9):4205–4210. doi: 10.1021/pr5006636.
  • 17. Hsieh EJ, Bereman MS, Durand S, Valaskovic GA, MacCoss MJ. Effects of Column and Gradient Lengths on Peak Capacity and Peptide Identification in Nanoflow LC-MS/MS of Complex Proteomic Samples. Journal of the American Society for Mass Spectrometry. 2013;24(1):148–153. doi: 10.1007/s13361-012-0508-6.
