Author manuscript; available in PMC: 2013 Sep 4.
Published in final edited form as: Immunity. 2009 Oct 16;31(4):527–528. doi: 10.1016/j.immuni.2009.09.007

“MIATA”—Minimal Information about T Cell Assays

Sylvia Janetzki 1, Cedrik M Britten 2, Michael Kalos 3, Hyam I Levitsky 4, Holden T Maecker 5, Cornelius JM Melief 6, Lloyd J Old 7, Pedro Romero 8, Axel Hoos 9, Mark M Davis 5,10,*
PMCID: PMC3762500  NIHMSID: NIHMS503851  PMID: 19833080

Immunotherapy, especially therapeutic vaccination, has a great deal of potential in the treatment of cancer and certain infectious diseases such as HIV (Allison et al., 2006; Fauci et al., 2008; Feldmann and Steinman, 2005). Numerous vaccine candidates have been tested in patients with a variety of tumor types and chronic viral diseases. Often, the best way to assess the clinical potential of these vaccines is to monitor the induced T cell response, and yet there are currently no standards for reporting these results. This letter is an effort to address this problem.

In particular, various T cell assays have been developed to try to predict vaccination success or failure and to ultimately serve as biomarkers and subsequent clinical endpoints in immunotherapy trials (Hoos et al., 2007). Over the years, a few assays have emerged that are widely used, such as ELISPOT, peptide-MHC multimer, and intracellular cytokine staining (ICS). Yet, these “classical” assays continue to be modified on a regular basis as additional parameters and innovations in technology are introduced (Chattopadhyay et al., 2008). Further, an increasing number of innovative therapies and compounds are moving into clinical trials, and novel monitoring techniques are evolving in step. Because each new modification has the potential to substantially change the performance characteristics of a test, results obtained by different centers with slightly modified assays are likely to vary in ways that are impossible to predict. This compounds the variation in assay results from facility to facility revealed by recent multi-institutional studies for ELISPOT, peptide-MHC multimer, and ICS assays (Figure S1 available online). This high degree of variability makes the comparison of results between any two laboratories a game of chance. Additional factors that can dramatically affect the performance characteristics of assay protocols relate to the analysis, interpretation, and reporting of data and to the extent of quality measures taken to ensure data integrity. To date, no formal quality-control standards exist for laboratories performing such studies, nor is there any requirement to report on such controls. This is in contrast to other, non-immunological types of assays, such as gene expression arrays.

The importance and influence of these parameters are currently being evaluated through large ongoing collective initiatives with the goal of enhancing bioassay standardization, validation, and harmonization (Britten et al., 2008). Dedicated conferences and formal sessions at large international meetings underline the increasing focus on immune-monitoring techniques and will help to establish these assays as reliable tools to systematically dissect immune responses in humans and to create a reliable foundation for immunotherapy (Davis, 2008).

An area that remains unaddressed by currently published recommendations and guidelines relates to the establishment of a reporting framework for immune-monitoring assays. It is clear from the discussion above that the absence of specific information on both the assay procedure as well as the quality infrastructure of individual laboratories makes it difficult to interpret results objectively. In only a painfully small number of current publications do the authors provide sufficient transparency on these points. Hence, it is not surprising that results are often regarded with skepticism.

In this letter, we introduce the project on “Minimal Information about T Cell Assays” (MIATA), with recommendations on the minimum specific information required for a more objective assessment of both the quality of the laboratory infrastructure and the assays themselves, to allow a thorough interpretation of results from immunological T cell assays. With this project's introduction, we invite the scientific community to participate in a public consultation phase. A comprehensive draft document is posted and accessible online (www.miataproject.org). Feedback will be collected, incorporated during the yearlong public consultation period, and formally discussed at various meetings in order to reach a consensus document on what the reporting requirements should be. Assuming that a broad consensus can be reached, these guidelines will become the framework for a public database of immune-monitoring data generated in the relevant clinical trials.

We wish to emphasize that MIATA is not intended to impose any particular assays or assay standards because this would be an obstacle to innovation and improvement of immune-monitoring technologies. What we wish to define is the content and structure of information reported from immunoassays. Although reporting frameworks will not per se decrease the interlaboratory variability found across institutions, they will introduce more transparency to published data and will allow a reader to judge the quality of reported results, a scenario that will improve the ability to interpret immune-monitoring results and aid in their adoption.

The current lack of a framework for reporting data from T cell assays should also be seen in the context of the increasing number of minimal-information projects that have emerged in recent years. Many of these are listed under the project on Minimum Information for Biological and Biomedical Investigations (MIBBI) (Taylor et al., 2008). The guidelines now being developed have two major goals: first, to annotate data in sufficient detail to provide transparent evidence of the quality, reliability, and possible error sources of reported results; and second, to use the reporting standards to supply public databases with more interpretable results (Brazma et al., 2000).

Although some published guidelines touch upon certain aspects of the MIATA project, they do not fully address the unique features and peculiarities of T cell-based immunoassays, and no public database for clinical immune-monitoring data yet exists.

The initial MIATA guideline draft is based on data and reports previously published and focuses solely on those variables that have been shown to influence the outcome of the three most commonly used T cell-based immunoassays: ELISPOT, ICS, and peptide-MHC multimer staining.

The proposed initial publication guidelines for immune-monitoring strategies are based on the premise that reports of immune-monitoring data generated by these assays should accurately include information on and descriptions of five important components that determine the testing outcome:

  1. The sample, including information about the donor or patient (such as age, gender, human leukocyte antigen (HLA) type, disease, treatment, and comedication, as appropriate), techniques used for obtaining, processing, freezing, transporting, and storing the specimen tested, as well as the way frozen samples were thawed and quality tested before use in the assay;

  2. The assay, including crucial assay techniques, reagents, materials, and equipment, ensuring transparency of all assay conditions and, if applicable, a statement of how publicly available assay recommendations were followed;

  3. The data acquisition, including the instruments and software used, their setup and calibration, as well as the strategy used to analyze the specific events or spots;

  4. The results, including statistical or empirical response criteria, comments on the considered biological significance of the obtained results, and comments on whether the results found can be correlated with a specific disease stage or treatment;

  5. The environment in which laboratory operations are conducted, including the status of the use of standard operating procedures (SOPs), qualification and/or validation, compliance with good laboratory practice (GLP), training status of personnel performing tests, quality-control mechanisms, and responsibilities, as well as auditing processes and reporting of unexpected events.

Detailed annotation of published data for these five modules should be accompanied by the reporting of raw data whenever this is needed to guarantee full understanding of the value of the data set; a sketch of how such a structured report might look is shown below.
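As an illustration only, and not as part of the MIATA draft itself, the five modules could be captured in a simple machine-readable structure. The following minimal Python sketch uses field names and groupings that are our own assumptions, chosen to mirror the list above rather than any prescribed schema:

# Illustrative sketch only: a hypothetical, machine-readable layout for the five
# MIATA reporting modules described above. Field names and values are invented
# for illustration and are not part of the MIATA draft guidelines.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Sample:
    donor_id: str
    age: int
    gender: str
    hla_type: str                     # e.g., "HLA-A*02:01"
    disease: str
    treatment: Optional[str] = None
    processing: str = ""              # how the specimen was obtained and processed
    freeze_thaw: str = ""             # freezing, storage, and thawing procedure
    quality_check: str = ""           # e.g., viability after thaw

@dataclass
class Assay:
    assay_type: str                   # "ELISPOT", "ICS", or "peptide-MHC multimer"
    reagents: list = field(default_factory=list)
    equipment: list = field(default_factory=list)
    protocol_reference: Optional[str] = None  # published recommendations followed

@dataclass
class DataAcquisition:
    instrument: str
    software: str
    calibration: str
    analysis_strategy: str            # e.g., gating or spot-counting strategy

@dataclass
class Results:
    response_criteria: str            # statistical or empirical criteria applied
    biological_significance: str
    clinical_correlation: Optional[str] = None

@dataclass
class LabEnvironment:
    sops_in_use: bool
    quality_framework: str            # e.g., "research grade", "GLP-compliant"
    personnel_training: str
    quality_control: str
    audit_process: Optional[str] = None

@dataclass
class MiataReport:
    sample: Sample
    assay: Assay
    acquisition: DataAcquisition
    results: Results
    environment: LabEnvironment

Such a layout is only one of many possible ways to organize the five modules; the point is that each module maps to a distinct, named group of fields that a reader, or a database, can inspect independently.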

A transparent description of the procedures and quality measures applied during immune monitoring is an important step toward achieving proper scientific recognition of reported data, whether generated in a basic research setting or during clinical vaccine testing. Furthermore, the reporting of minimal information about immune-monitoring assays will provide an important tool for achieving comparability of results across laboratories and institutions.

All of our colleagues are invited to actively contribute to the draft guidelines during the public consultation process. We aim for an intensive vetting process that is expected to lead to substantial changes and a stepwise increase in quality. To reach this goal, we will actively promote the guidelines and involve organizations and groups, other leaders in the immune-monitoring field, editors of scientific journals, and representatives of regulatory agencies in a process aimed at reaching a broad consensus for future versions of the guidelines.

The final goal of our project is to establish a well-vetted and commonly accepted reporting standard for immune-monitoring results that could be used to systematically fill an openly accessible public database. Such a searchable public repository would be invaluable for the scientific community and drug developers because results could be subjected to comparative analyses within the context of additional findings reported by colleagues working in similar settings.

Supplementary Material

Figure S1. Interlaboratory Variability in Immune Response Measurements

Data were obtained in Proficiency Panels conducted by the Cancer Vaccine Consortium. ELISPOT and ICS data represent the CMV pp65 peptide pool response in donors representing a moderate responder (ELISPOT) or a high responder (ICS). The HLA-peptide multimer staining data were obtained from a weak responder to the HLA-A2-restricted influenza M1 58–66 peptide. The variability shown for the three response levels is representative of all three assays. Comparisons across assays cannot be made because different donors with different response levels were used. However, the data show persistently high CVs in all three types of assays compared to the CVs seen in intralaboratory reproducibility studies (Maecker et al., 2008).
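For orientation, the coefficient of variation (CV) referred to in the legend is the standard deviation expressed as a percentage of the mean across laboratories. The short sketch below uses invented numbers, not the proficiency-panel data, to show the calculation:

import statistics

# Hypothetical measurements of the same sample reported by five laboratories
# (values invented for illustration; not taken from the proficiency panels).
antigen_specific_t_cells = [0.42, 0.85, 0.31, 0.60, 1.10]  # e.g., % of CD8+ T cells

mean = statistics.mean(antigen_specific_t_cells)
sd = statistics.stdev(antigen_specific_t_cells)   # sample standard deviation
cv_percent = 100 * sd / mean                       # interlaboratory CV

print(f"mean = {mean:.2f}, SD = {sd:.2f}, CV = {cv_percent:.0f}%")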

Acknowledgments

S.J. is founder and president of ZellNet Consulting. C.M.B. is a part-time employee of BioNTech AG. H.L. serves on the Board of Directors of Antigenics and on the Scientific Advisory Board of Celldex. C.J.M.M. is employed part-time by Immune System Activation (ISA) and possesses stock appreciation rights. A.H. is employed by Bristol-Myers Squibb.

Footnotes

Supplemental data: Supplemental Data include one figure and can be found with this article online at http://www.cell.com/immunity/supplemental/S1074-7613(09)00419-1.

References

  1. Allison JP, Dranoff G, Frederick W. San Diego: Elsevier Academic Press; 2006.
  2. Fauci AS, Johnston MI, Dieffenbach CW, Burton DR, Hammer SM, Hoxie JA, Martin M, Overbaugh J, Watkins DI, Mahmoud A, et al. Science. 2008;321:530–532. doi: 10.1126/science.1161000.
  3. Feldmann M, Steinman L. Nature. 2005;435:612–619. doi: 10.1038/nature03727.
  4. Hoos A, Parmiani G, Hege K, Sznol M, Loibner H, Eggermont A, Urba W, Blumenstein B, Sacks N, Keilholz U, et al. J Immunother. 2007;30:1–15. doi: 10.1097/01.cji.0000211341.88835.ae.
  5. Chattopadhyay PK, Hogerkorp CM, Roederer M. Immunology. 2008;125:441–449. doi: 10.1111/j.1365-2567.2008.02989.x.
  6. Britten CM, Janetzki S, van der Burg SH, Gouttefangeas C, Hoos A. Cancer Immunol Immunother. 2008;57:285–288. doi: 10.1007/s00262-007-0379-z.
  7. Davis MM. Immunity. 2008;29:835–838. doi: 10.1016/j.immuni.2008.12.003.
  8. Taylor CF, Field D, Sansone SA, Aerts J, Apweiler R, Ashburner M, Ball CA, Binz PA, Bogue M, Booth T, et al. Nat Biotechnol. 2008;26:889–896. doi: 10.1038/nbt.1411.
  9. Brazma A, Robinson A, Cameron G, Ashburner M. Nature. 2000;403:699–700. doi: 10.1038/35001676.
