Analytical Chemistry Insights. 2018 Feb 8;13:1177390118757462. doi: 10.1177/1177390118757462

Ten Basic Rules of Antibody Validation

Michael G Weller
PMCID: PMC5813849  PMID: 29467569

Abstract

The quality of research antibodies has been an issue for decades. Although several papers have been published to improve the situation, their impact seems to be limited. This publication attempts to simplify the description of validation criteria in such a way that even the occasional antibody user can assess the validation level of an immunochemical reagent. A simple, 1-page checklist is supplied for the practical application of these criteria.

Keywords: Replication, reproducibility, poor science, open science, quality control

Introduction: The Replication Crisis

More and more evidence has been presented1,2 that a large fraction of research money is wasted every year (about 85% of the worldwide research investment, equivalent to US$200 billion in 2010) due to "poor science"3 and the use of substandard reagents.4–7 Several eye-opening studies8,9 have been published in this context, which are highly recommended for any active scientist, funding agency, and publisher. These issues will not be reiterated here. The discussion is also conducted under the term "replication crisis," because the most critical point seems to be the lack of reproducibility of a huge number of "high-impact" studies, which raises the question of whether these papers can be considered a contribution to science at all.

The quality of diagnostic and research antibodies10–17 seems to be an extremely difficult and persistent problem. In the meantime, several activities have been started to improve the situation, and some very useful suggestions have been made and published.18–35 In addition, many companies have recognized the issue and offer reagents with enhanced validation features. Some journals have also toughened their editorial policies.36 Unfortunately, these measures have not yet resolved the issue. One reason might be the complexity of many of these documents or regulations, which address the expert or insider only and are too confusing to be applied directly by the average bioanalytical user or referee. Therefore, a validation gap seems to exist between the core of the community and the novices or occasional antibody users. This series of 10 rules should make it possible to check quickly the validation level of an antibody protocol and to perform a simple risk assessment, in order to avoid catastrophic quality issues that might jeopardize a project or damage the reputation of the researchers involved. In addition, these rules might serve as a basis for referees of scientific journals to check quickly whether a sufficient antibody validation level has been presented to support a scientific publication.

Some General Thoughts

One of the general weak points in many papers or reports is the lack of experimental detail. In more than 50% (!) of the respective publications, the antibody used cannot be identified at all. From my point of view, these papers should be nullified (in other words, withdrawn). Decades ago, when printed pages were expensive, a concise protocol might have had some justification. In our digital times, however, there is no reason at all to omit any information that might be helpful to replicate the work. A fundamental reorientation of evaluation workflows and assessment rules is necessary.

Furthermore, it seems counterproductive that most journals focus mainly on "novelty" or even "public attention" while progressively neglecting quality and validation. Careful replication, characterization, or optimization of a technology or other work receives so little appreciation that it is nearly impossible to publish in any scientific journal. Implicitly derogatory or dissuasive statements can be found in most "Guidelines for Referees." As long as the technical quality of a scientific work is not sufficiently acknowledged as a value of its own, the situation is unlikely to improve considerably. However, these 10 rules might be seen as completely superfluous if traditional, and perhaps self-evident, scientific principles (the "scientific method") were respected.

Validation in General and in the Antibody Context

The term "validation" is used in different fields, and many, sometimes conflicting, definitions exist.37,38 For immunoassay or ligand-binding assay validation, more specific recommendations have been published.39–42 To keep it simple, in this article validation is understood only in the specific area of diagnostic or analytical antibodies. It can be defined as follows: Validation is the experimental proof and documentation that a specific antibody is suitable for an intended application or purpose. Hence, it refers to a (bio)chemical compound in connection with a protocol or process. As many of the following rules might be unclear or seem superfluous to the inexperienced antibody user, additional explanations have been added in the "Why" notes to illustrate what might happen when a rule is violated or ignored.

The 10 Rules

Rule #1: Definition of the antibody*

The binding molecule, most often an antibody, needs to be identifiable in an unambiguous way. For this purpose, a clone number, commercial product number, antibody ID, or equivalent is suitable (https://scicrunch.org/resources). An even better definition is the structure of the antibody, mainly defined by the amino acid sequence. In addition, it should be explicitly noted whether the binder is a polyclonal or a monoclonal antibody. In the case of a polyclonal serum, a clone number or sequence is not available; at least the species (eg, rabbit) and a batch number or sampling day should be given to differentiate serum lots.
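As an illustration only, the identifying information called for above can be collected in a simple structured record. The sketch below uses Python with entirely hypothetical placeholder values; the catalog number, clone, and RRID are not real products or identifiers.

```python
# Hypothetical minimal identification record for an antibody reagent.
# All values are placeholders, not a real product; a real entry would use
# the actual catalog number, clone, lot, and an RRID from
# https://scicrunch.org/resources.
antibody_record = {
    "name": "anti-target-X IgG",        # working name of the binder
    "clonality": "monoclonal",          # or "polyclonal"
    "host_species": "rabbit",
    "clone": "AB-12",                   # not applicable for polyclonal sera
    "vendor_catalog_no": "XY-1234",
    "rrid": "RRID:AB_0000000",          # placeholder antibody ID
    "lot_or_bleed_date": "2017-11-30",  # batch number or sampling day
    "sequence_known": False,            # amino acid sequence, if available
}
```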

Why: If this information is not given, the work cannot be reproduced and hence cannot be regarded as a scientific study. Sometimes the replication of experiments fails even within the same research group.

Rule #2: Definition of the target*

Probably the most important property of an antibody is its ability to bind selectively to another molecule, termed the antigen, which in most cases is the analyte. The antigen needs to be defined as exactly as possible, not merely as a vague group of related compounds, unless this group selectivity is explicitly wanted. In the case of haptens (small molecules), the immunogen structure should be given as a chemical formula, including the linker and the orientation of the hapten. In the case of larger molecules, such as proteins, the epitope (targeted area) should be given as precisely as possible. In the case of histochemical targets, a corresponding gene or sometimes only a cell type, tissue, or organ is defined as the target. In addition, it is very important to state whether the target protein is in its native form, denatured, or even chemically modified, as in the case of formaldehyde-treated samples. A chemical structure definition should be preferred, if possible.

Why: If the target is not defined properly, it cannot be decided whether the binder is selective enough. Confusion often arises when comparisons with other, sometimes nonimmunochemical, methods are performed and a correlation cannot be established. A poor specification of the target analyte is the most frequent reason for the complete failure of an immunoassay, eg, if the analyte is a native protein in serum and the antibody is directed against a completely denatured form, as is typical for Western blots. Also, homology problems can only be accounted for if the epitope is well defined.

Rule #3: Binding selectivity (“crossreactivity”)*

Crossreactivity (CR) is a relatively complex property and is often misinterpreted. Many researchers implicitly use a definition attributed to Abraham, often without knowledge of his papers,43,44 in which the ratio of two 50% inhibition concentrations (IC50) is used. Considering that the IC50 value is related to the affinity constant of an antibody/antigen complex, the CR according to Abraham can be interpreted as a relative affinity constant if it is calculated on a molar basis. In this context, it may be interesting to know that a CR can be calculated on a molar, volume, or weight basis, of which the last is by far the most frequently used. If the concentrations of structurally related compounds in a representative sample are known approximately, CR data are very useful to estimate the probability of the respective interference. However, there are quite a few obstacles to using CR data in a quantitative way: researchers are often not aware that CRs are not clearly defined in mixtures of related compounds or in the case of nonparallel calibration curves. Crossreactivities are not directly comparable between other, even slightly different, assay formats. It is often assumed that monoclonal antibodies display less CR than polyclonal ones, which in many cases is not true. In addition, many analytical chemists who are not very familiar with immunochemistry assume that CRs are a unique characteristic of immunoassays, which proves their unreliable behavior. However, nearly equivalent properties exist in virtually all analytical fields under different designations, such as "response factor," "calibration factor," "ionization efficiency," "molar absorptivity," "quantum yield," and many more. Also important is the fact that the reference substance (CR = 100%) can be chosen arbitrarily and does not need to be the substance of the highest CR or affinity. Hence, CRs >100% can occur and are not an indication of any specific issue. In the context of antibody validation, it is important that all relevant substances have been tested for CR. Relevancy should be assessed in the context of the samples to be measured: which crossreactants have to be expected, and in which concentrations? Sometimes unknown crossreactants may be present in samples. This can be evaluated, eg, by hyphenation of a separation technique with an immunoassay.45,46 Unexpected peaks can then be examined in more detail by high-resolution mass spectrometry to identify the unknown crossreactant. This approach has a big advantage over the "substance group" approach: even structurally unrelated crossreactants can be discovered. Another interesting approach is the use of immunoprecipitation followed by mass spectrometry.
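Written out as a formula (a minimal sketch of the IC50-ratio definition described above; the notation is mine, not taken from the original papers), the CR of a compound i relative to the chosen reference analyte is:

```latex
% Crossreactivity of compound i, Abraham-type IC50-ratio definition:
\[
  \mathrm{CR}_i\,(\%) \;=\;
  \frac{\mathrm{IC}_{50}(\text{reference analyte})}
       {\mathrm{IC}_{50}(\text{compound } i)} \times 100
\]
% On a molar basis, CR_i approximates a relative affinity constant;
% because the reference is chosen arbitrarily, values above 100% are possible.
```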

Why: The more CR data are available for an antibody or immunoassay, the better its usability in a specific application context can be evaluated. In general, a high number of CR data is a good indication of a carefully characterized antibody. If no CR data are given, or only CR(analyte) = 100% is stated, the antibody has not been validated at all. It should be regarded as an experimental reagent, and any validation is then left to the user.

Rule #4: Concentrations of antibodies and additives*

Antibody concentrations are relatively difficult to determine, eg, by surface plasmon resonance (SPR). As a substitute, many companies determine either a protein concentration with a nonselective method, such as UV absorbance at 280 nm, the Bradford assay, the bicinchoninic acid assay, or amino acid analysis47 (all of which include impurities such as albumin and unrelated immunoglobulin G [IgG]), or an IgG concentration by enzyme-linked immunosorbent assay (ELISA), which may still include unrelated IgG. The fraction of selective but inactive antibody is nearly always unknown. In addition, any inherent heterogeneity of the preparation, such as posttranslational modifications (eg, glycosylation) and the presence of isoforms and aging products (eg, methionine oxidation), is rarely examined. Hence, it has to be concluded that the "antibody concentration" given on a product label is only a preliminary estimate for assay optimization; the concentration of selective and active antibody might be much lower. Perhaps one of the most confusing declarations is the specification "affinity purified." In most cases, it means only purification with protein A or G, which binds selectively to IgG. A real affinity purification against the antigen is rarely performed and should be stated explicitly when it has been. Furthermore, it has to be considered that stabilizing additives might have been added deliberately to the reagent. In particular, bovine serum albumin and buffer salts, such as Tris, are of relevance; these amine-containing substances might complicate the conjugation or labeling of antibodies. A protocol should define exactly which kind of concentration determination has been performed and which additives/stabilizers are present in the product.
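As a small illustration of why such label values should be treated with caution, a protein estimate from UV absorbance at 280 nm can be sketched as follows. The extinction coefficient of about 1.4 mL mg⁻¹ cm⁻¹ is only a typical value assumed for IgG, and the result includes every UV-absorbing impurity, not the active, selective antibody alone.

```python
def total_protein_from_a280(a280: float, path_length_cm: float = 1.0,
                            dilution_factor: float = 1.0,
                            epsilon_mL_per_mg_cm: float = 1.4) -> float:
    """Rough total-protein estimate (mg/mL) from absorbance at 280 nm.

    Beer-Lambert law: A = epsilon * c * l. The default epsilon of ~1.4 is a
    typical value for IgG; albumin, unrelated IgG, and inactive antibody all
    contribute to A280, so the result is an upper bound for the active,
    selective antibody concentration.
    """
    return a280 * dilution_factor / (epsilon_mL_per_mg_cm * path_length_cm)

# Example: A280 = 0.70 measured on a 1:2 dilution -> about 1.0 mg/mL protein
print(total_protein_from_a280(0.70, dilution_factor=2.0))
```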

Why: For most practical uses, an optimal concentration of an antibody needs to be applied: concentrations that are too low lead to weak signals, and concentrations that are too high lead to nonspecific interactions. Hence, the concentration of active antibody in the reagent preparation should be known. Wrong information about the antibody concentration can make the development or optimization of an assay much more difficult and laborious. Unknown additives and buffers can prevent conjugation reactions completely.

Rule #5: Documentation*

All information on the aspects mentioned above should be documented properly in a data sheet. Leaflets that are nearly empty and contain only the name of the reagent and the order number are a strong indication of poor antibody validation. Such antibodies may be useful in a context where the screening of a binder is the major task. However, when a reliable reagent is needed, such products should be avoided.

Why: Reagents without proper documentation can lead to a waste of time and money. Sometimes it is better to develop your own, well-documented antibody than to struggle with a product about which nearly nothing is known and which will finally turn out to be unsuitable.

Rule #6: Binding strength of antibody/antigen complex

A very insightful piece of information is the "binding strength" of the antibody/antigen complex, generally termed the "affinity constant" or "binding constant," which is usually defined and calculated on the basis of the law of mass action. This constant can be determined or estimated by SPR,48 ELISA,49,50 equilibrium dialysis, or many other methods. It has to be considered that IgG molecules possess 2 binding sites and hence can bind bivalently. Such multivalent complexes show a much higher "apparent affinity," termed avidity. Depending on the application, either the "true" monovalent affinity constant or the avidity may be more relevant.
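As a minimal numerical sketch (assuming simple monovalent 1:1 binding and a known free antibody concentration, ie, ignoring avidity effects), the law of mass action directly shows what a difference the affinity makes:

```python
def fraction_antigen_bound(kd_nM: float, free_antibody_nM: float) -> float:
    """Equilibrium fraction of antigen bound for a monovalent 1:1 complex.

    Law of mass action: Ka = [AbAg] / ([Ab][Ag]), with Kd = 1/Ka.
    It follows that fraction bound = [Ab] / ([Ab] + Kd), where [Ab] is the
    free antibody binding-site concentration.
    """
    return free_antibody_nM / (free_antibody_nM + kd_nM)

# At 1 nM free antibody:
print(fraction_antigen_bound(kd_nM=0.1, free_antibody_nM=1.0))    # ~0.91 (high affinity)
print(fraction_antigen_bound(kd_nM=100.0, free_antibody_nM=1.0))  # ~0.01 (low affinity)
```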

Why: There is a huge difference between a weakly binding and a high-affinity antibody. In competitive immunoassays, the affinity constant defines the sensitivity (detection limit) of the assay, which may be crucial for the respective application. In addition, low-affinity antibodies often lead to weak signals, nonspecific background, and high costs due to the high concentrations needed for the assays. High-affinity antibodies are "no-trouble reagents" in many experiments and should be preferred for nearly all applications.

Rule #7: Influence of nontarget substances (“matrix effect”)

The same definition by Abraham can also be applied to matrix components such as salts, humic acids, preservatives, solvents, and other additives.51 The major difference is their more or less nonspecific mode of action and hence the usually much higher concentrations that are relevant. Unfortunately, this approach is rarely applied so far, but it can be very useful to examine the robustness of an immunochemical reagent.
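One possible way to write this down (a sketch of my reading of the paragraph above, not a formula given by Abraham) is to treat the matrix component like a crossreactant and relate the analyte IC50 to the matrix concentration that shifts the signal by 50%:

```latex
% Hypothetical transfer of the IC50-ratio idea to a matrix component m:
\[
  \mathrm{CR}_m\,(\%) \;=\;
  \frac{\mathrm{IC}_{50}(\text{analyte})}
       {c_{50}(\text{matrix component } m)} \times 100
\]
% c_50 is the concentration of the matrix component that changes the assay
% signal by 50%. Because the effect is nonspecific, c_50 is typically orders
% of magnitude higher than the analyte IC50, and CR_m is correspondingly small.
```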

Why: Antibodies that are very sensitive to matrix compounds lead to very unstable assays and are usually not fit for practical application. Early tests of this kind are highly recommended.

Rule #8: Stabilization and storage

Unfortunately, antibodies can be capricious reagents and need individual care; not all stabilization and storage protocols are applicable to all antibodies. Validation of stability under specified conditions is very helpful if an antibody is to be used for a longer time.

Why: Considering the high cost of many antibody reagents, poor stability can lead to a significant financial burden and frustration. Information about the storage conditions of a specific (!) antibody is a good indication of professional product development and quality control.

Rule #9: Application protocols

Application sheets are useful sources of information about the applicability of an antibody in a specific field and increase the chance of a successful implementation of an immunoassay considerably. In addition, the number of available application protocols is good documentation of practical validation efforts. One of the most important pieces of information to be taken from application sheets is whether the antibody has already been applied in the intended sample matrix, such as serum, urine, or surface water.

Why: Missing application protocols should alert the prospective user; often, not a single successful assay has been established yet.

Rule #10: User feedback (“Open Science”)

Very good additions to application protocols are ratings and comments from users who have already purchased the antibody and used it in their specific applications. The direct and unlimited exchange of user data and experience is a good example of "Open Science," which is an extremely powerful approach to accelerate and improve scientific progress. Several platforms are emerging (eg, https://www.antibodypedia.com; http://pabmabs.com/wordpress/; https://antibodyplus.com/customer-reviews/; https://www.antybuddy.com/). Antibodies that have been applied successfully in many projects are much more likely to work in a similar prospective experiment. Companies should also disclose how many units of a specific antibody have already been sold, and they should refrain from deleting critical reports.

Why: If several (positive) feedback reports are available, the antibody seems to have gained some acceptance in the community. Companies that are afraid of user feedback may have an insufficient quality control system. Potential clients should reward transparency ("No information, no purchase").

Conclusions and Recommendations

In this article, 10 fundamental rules are defined that can form the basis for the assessment of antibody validation. To make this as easy as possible, particularly for the less experienced antibody user, a 1-page form has been developed (see Supplementary Material) as a checklist to evaluate the validation level of any commercial antibody. If one or more of the 5 mandatory rules (marked with a *) are not met, the use of the respective antibody or binding reagent in any scientific work is discouraged without additional validation effort. Specific analytical methods for antibody characterization are given here only as examples, because validation is highly application dependent. In addition, the development of novel methods for antibody evaluation is progressing fast; many new methods are emerging from the field of therapeutic antibodies, which has gained huge commercial significance. The checklist should improve the basic quality level of diagnostic and, more generally, analytical antibody use. Project proposals, manuscript submissions, and interim or final reports containing work with analytical antibodies may all be assessed according to these rules. However, it should be clear that this list is only a first step toward more elaborate and more specific assessment protocols. Nevertheless, any attempt to weaken these basic rules would protract the "replication crisis" for an unforeseeable time and lead to more poor science and an unacceptable waste of resources.
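For illustration, the decision rule of the checklist (any unmet mandatory rule discourages use without further validation) can be expressed compactly. This is a hypothetical sketch, not the supplementary form itself, and the rule names are merely shorthand for the 10 rules above.

```python
# Hypothetical sketch of the checklist decision logic; rule names are
# shorthand for the 10 rules above, with the 5 mandatory ones marked by *.
MANDATORY = {"antibody_definition", "target_definition", "binding_selectivity",
             "concentrations_and_additives", "documentation"}
OPTIONAL = {"binding_strength", "matrix_effect", "stability_and_storage",
            "application_protocols", "user_feedback"}

def assess_validation(rules_met: set) -> str:
    """Return a coarse verdict from the set of rules that are documented."""
    missing = MANDATORY - rules_met
    if missing:
        return ("use discouraged without additional validation; missing: "
                + ", ".join(sorted(missing)))
    score = len(rules_met & (MANDATORY | OPTIONAL))
    return f"basic validation level reached ({score}/10 rules documented)"

print(assess_validation({"antibody_definition", "documentation"}))
```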

Supplementary Material

uomf_january_3_2018_checklist_antibody_validation_(1) – Supplemental material for Ten Basic Rules of Antibody Validation by Michael G Weller in Analytical Chemistry Insights
* A rule which is obligatory for basic antibody validation.

Footnotes

Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.

Declaration of conflicting interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

References

1. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374:86–89.
2. Macleod MR, Michie S, Roberts I, et al. Biomedical research: increasing value, reducing waste. Lancet. 2014;383:101–104.
3. Ioannidis JPA. Why most published research findings are false. PLoS Med. 2005;2:696–701.
4. Freedman LP, Cockburn IM, Simcoe TS. The economics of reproducibility in preclinical research. PLoS Biol. 2015;13:e1002165.
5. Ma SS, Henry CE, Llamosas E, et al. Validation of specificity of antibodies for immunohistochemistry: the case of ROR2. Virchows Arch. 2017;470:99–108.
6. Andersson S, Sundberg M, Pristovsek N, et al. Insufficient antibody validation challenges oestrogen receptor beta research. Nat Commun. 2017;8:15840.
7. Garg BK, Loring RH. Evaluating commercially available antibodies for rat α7 nicotinic acetylcholine receptors. J Histochem Cytochem. 2017;65:499–512.
8. Begley CG, Ellis LM. Raise standards for preclinical cancer research. Nature. 2012;483:531–533.
9. Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov. 2011;10:712.
10. Vasilevsky NA, Brush MH, Paddock H, et al. On the reproducibility of science: unique identification of research resources in the biomedical literature. PeerJ. 2013;1:e148.
11. Weller MG. Quality issues of research antibodies. Anal Chem Insights. 2016;11:21–27.
12. Saper CB, Sawchenko PE. Magic peptides, magic antibodies: guidelines for appropriate controls for immunohistochemistry. J Comp Neurol. 2003;465:161–163.
13. Saper CB. An open letter to our readers on the use of antibodies. J Comp Neurol. 2005;493:477–478.
14. Baker M. Antibody anarchy: a call to order. Nature. 2015;527:545–551.
15. Baker M. Reproducibility crisis: blame it on the antibodies. Nature. 2015;521:274–276.
16. Voskuil J. Commercial antibodies and their validation. F1000Res. 2014;3:232.
17. Prassas I, Brinc D, Farkona S, et al. False biomarker discovery due to reactivity of a commercial ELISA for CUZD1 with cancer antigen CA125. Clin Chem. 2014;60:381–388.
18. Voskuil JL. The challenges with the validation of research antibodies. F1000Res. 2017;6:161.
19. Polakiewicz RD. Antibodies: the solution is validation. Nature. 2015;518:483.
20. Freedman LP. Antibodies: validate recombinants too. Nature. 2015;518:483.
21. Uhlen M, Bandrowski A, Carr S, et al. A proposal for validation of antibodies. Nat Methods. 2016;13:823–827.
22. Uhlen M. Should we ignore western blots when selecting antibodies for other applications? Nat Methods. 2017;14:215–216.
23. Mullane K, Williams M. Enhancing reproducibility: failures from reproducibility initiatives underline core challenges. Biochem Pharmacol. 2017;138:7–18.
24. Begley CG, Ioannidis JPA. Reproducibility in science: improving the standard for basic and preclinical research. Circ Res. 2015;116:116–126.
25. Dove A. Agreeable antibodies: antibody validation challenges and solutions. Science. 2017;357:1165–1167.
26. Collaboration is key in antibody validation efforts. Biotechniques. 2017;63:47.
27. Blow N. Fighting for the future of antibodies. Biotechniques. 2017;63:51–56.
28. O'Kennedy R, Fitzgerald S, Murphy C. Don't blame it all on antibodies: the need for exhaustive characterisation, appropriate handling, and addressing the issues that affect specificity. TrAC Trends Anal Chem. 2017;89:53–59.
29. Skogs M, Stadler C, Schutten R, et al. Antibody validation in bioimaging applications based on endogenous expression of tagged proteins. J Proteome Res. 2017;16:147–155.
30. Sjoberg R, Mattsson C, Andersson E, et al. Exploration of high-density protein microarrays for antibody validation and autoimmunity profiling. New Biotechnol. 2016;33:582–592.
31. Jager SB, Vaegter CB. Avoiding experimental bias by systematic antibody validation. Neural Regen Res. 2016;11:1079–1080.
32. Freedman LP, Gibson MC, Bradbury AR, et al. The need for improved education and training in research antibody usage and validation practices. Biotechniques. 2016;61:16–18.
33. Rizner TL, Sasano H, Choi MH, Odermatt A, Adamski J. Recommendations for description and validation of antibodies for research use. J Steroid Biochem Mol Biol. 2016;156:40–42.
34. Roncador G, Engel P, Maestre L, et al. The European antibody network's practical guide to finding and validating suitable antibodies for research. MAbs. 2016;8:27–36.
35. Bordeaux J, Welsh A, Agarwal S, et al. Antibody validation. Biotechniques. 2010;48:197–209.
36. Reducing our irreproducibility. Nature. 2013;496:398.
37. Kadian N, Raju KS, Rashid M, Malik MY, Taneja I, Wahajuddin M. Comparative assessment of bioanalytical method validation guidelines for pharmaceutical industry. J Pharm Biomed Anal. 2016;126:83–97.
38. Gonzalez O, Blanco ME, Iriarte G, et al. Bioanalytical chromatographic method validation according to current regulations, with a special focus on the non-well defined parameters limit of quantification, robustness and matrix effect. J Chromatogr A. 2014;1353:10–27.
39. Andreasson U, Perret-Liaudet A, van Waalwijk van Doorn LJ, et al. A practical guide to immunoassay method validation. Front Neurol. 2015;6:179.
40. Cowan K, Gao X, Parab V, et al. Fit-for-purpose biomarker immunoassay qualification and validation: three case studies. Bioanalysis. 2016;8:2329–2340.
41. DeSilva B, Smith W, Weiner R, et al. Recommendations for the bioanalytical method validation of ligand-binding assays to support pharmacokinetic assessments of macromolecules. Pharm Res. 2003;20:1885–1900.
42. Lee JW, Devanarayan V, Barrett YC, et al. Fit-for-purpose method development and validation for successful biomarker measurement. Pharm Res. 2006;23:312–328.
43. Abraham GE. Solid-phase radioimmunoassay of estradiol-17β. J Clin Endocrinol Metab. 1969;29:866.
44. Abraham GE. Radioimmunoassay of steroids in biological fluids. J Steroid Biochem Mol Biol. 1975;6:261–270.
45. Zeck A, Weller MG, Niessner R. Multidimensional biochemical detection of microcystins in liquid chromatography. Anal Chem. 2001;73:5509–5517.
46. Bahlmann A, Falkenhagen J, Weller MG, Panne U, Schneider RJ. Cetirizine as pH-dependent cross-reactant in a carbamazepine-specific immunoassay. Analyst. 2011;136:1357–1364.
47. Hesse A, Weller MG. Protein quantification by derivatization-free high-performance liquid chromatography of aromatic amino acids. J Amino Acids. 2016;2016:7374316.
48. Canziani GA, Klakamp S, Myszka DG. Kinetic screening of antibodies from crude hybridoma samples using Biacore. Anal Biochem. 2004;325:301–307.
49. Friguet B, Chaffotte AF, Djavadi-Ohaniance L, Goldberg ME. Measurements of the true affinity constant in solution of antigen-antibody complexes by enzyme-linked immunosorbent assay. J Immunol Methods. 1985;77:305–319.
50. Zeck A, Weller MG, Niessner R. Characterization of a monoclonal TNT antibody by measurement of the cross-reactivities of nitroaromatic compounds. Fresenius J Anal Chem. 1999;364:113–120.
51. Winklmair M, Weller MG, Mangler J, Schlosshauer B, Niessner R. Development of a highly sensitive enzyme-immunoassay for the determination of triazine herbicides. Fresenius J Anal Chem. 1997;358:614–622.
