Abstract
A novel format was introduced at the recent AAPS NBC Workshop on Method Development, Validation and Troubleshooting in San Diego on 18th May 2014. The workshop format was initiated by Binodh De Silva; Marie Rock and Sherri Dudal joined the initiative to develop and chair the workshop. Questions were solicited through a variety of avenues, including a LinkedIn discussion group. Once collated and clarified, the topics covered assay development, validation, and analysis of PK, immunogenicity, and biomarkers, with an additional topic on alternative bioanalytical technologies. A panel of experts (the workshop report co-authors) was assigned to each topic to bring forward thought-provoking aspects of each. The workshop format was designed to target the needs of bioanalytical scientists with intermediate to advanced experience in the field, to enable robust discussion, and to delve deeper into current bioanalytical hot topics. While the new format allowed for an interactive session with the topical discussion driven by the audience members, it did not foster equal discussion time for all of the proposed topics, especially biomarkers and alternative LBA technologies.
KEY WORDS: antibody, bioanalytical, biomarker, ligand-binding assays, immunogenicity
INTRODUCTION
To kick off the workshop, a live interactive poll was taken to understand the make-up of the audience and to seek their opinions on selected workshop topics. The majority of the 60 participants worked either for a company or a CRO (84.6%), and about 70% had over 5 years’ experience with LBAs. Interests were spread over the various proposed topics, including method validation, the draft FDA guidance, assay development, immunogenicity, biomarkers, and LC-MS for large molecules. Most performed method validation under GLP supported by a QA audit (72%). When asked whether they felt comfortable with singlet sample analysis for LBAs, 41% responded “not in any case”; others required intra- and inter-run QC precision of less than 5–10%, or the meeting of multiple criteria in addition to QC precision, to be comfortable with singlet analysis. For preclinical anti-drug antibody assessments, 42% perform screening, confirmation, and titer assays; 14% only screening and confirmation assays; and 12% (with a 99th percentile) or 19% (with a 95th percentile) only screening assays. Although 48% choose their positive control antibody based on whatever works, 32% use a polyclonal antibody. Multiplexing is used mainly for biomarkers (52%), and most use acceptance criteria that are similar to, but looser (30–50%) than, those used for PK assays when validating biomarker assays (76%). In terms of which technology improvements should be the focus, LBA scientists’ preferences were as follows: 31% automation of LBA assays, 14% enhanced sensitivity, 31% enhanced specificity, 14% the ability to measure drug in blood samples directly, and 10% the measurement of drug in serum without disturbing the equilibrium. When asked whether they used LC-MS/MS for large molecule PK analysis, 51% had never used the technology and only 15% used it routinely.
FDA Perspective
Following the electronic survey, Brian Booth (FDA) presented a review and update on the current draft Regulatory Guidance (1). Brian reiterated that the aim of the guidance was not to be prescriptive; instead, scientific rationale should guide the assay development/validation. He identified that the main issues were the requirements pertaining to the following:
The 7% sampling requirement for ISR
The internal standard CoA for LC-MS assays
Reference standard expiration
Cross/partial validation
Biomarkers, ADCs, LC/MS for proteins
The enormous number of comments received has now been collated and will be taken into consideration. Brian aimed to have the next version of the draft circulating within the department by August 2014; however, no date could be projected for publication.
Unresolved Issues from Crystal City V
Lauren Stevenson summarized topics discussed at Crystal City V (CCV), first highlighting where consensus was achieved. Topics that had not achieved consensus were presented in greater detail and set the stage for additional discussion. Consensus and clarifications on the following topics had been achieved at CCV: selectivity and specificity assessments (matrix effects, cross-reactive substances and potentially interfering substances), number of runs for accuracy and precision, inclusion of dilutional linearity during validation, number of calibrators required for standard curves, placement of QCs within the assay range, sample analysis practices, and matrix stability assessments. Details were provided in the workshop slide decks and are described in the CCV Conference Report (2).
Complete consensus on validation acceptance criteria for the ULOQ QC was not achieved at CCV. Prior recommendations from both regulatory agencies and industry have set criteria for the ULOQ at 25% RE, 25% CV, and 40% total error, in alignment with criteria for the LLOQ (3–5).
Setting ULOQ criteria to match those deemed acceptable for the LLOQ enables utilization of the entire curve and removes the need to artificially curtail the curve at the upper end to meet the tighter criteria (20% RE, 20% CV, 30% TE) proposed in the FDA Draft Guidance. However, it was argued that samples above the assay range can always be diluted into range, thereby allowing results to be read from the most accurate (middle) portion of the curve. This option is not possible for samples with low levels of analyte, whose concentrations must be read at the lower end of the curve (near the LLOQ).
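For reference, the figures above refer to accuracy (relative error), precision (coefficient of variation), and total error evaluated at a given QC or calibrator level. One conventional formulation, consistent with the LBA validation literature cited above (the symbols are added here purely for illustration), is:

```latex
% Accuracy, precision, and total error at one concentration level
\[
\%RE = \frac{\bar{C}_{\mathrm{meas}} - C_{\mathrm{nom}}}{C_{\mathrm{nom}}} \times 100,
\qquad
\%CV = \frac{SD_{\mathrm{meas}}}{\bar{C}_{\mathrm{meas}}} \times 100,
\qquad
TE = \lvert \%RE \rvert + \%CV
\]
```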
Complete consensus had also not been achieved on the inclusion of additional QCs when study sample concentrations are clustered in a narrow range of the standard curve. Industry position indicated that QCs in LBAs are already generally relatively close to each other, and when sample results cluster around the mid-range of the assay, it is simply an indicator that sample dilutions were well-chosen. There was general agreement that most LBAs (typical assay ranges span about 2 logs in concentration) would not require additional QCs and some agreement that the bulk of sample results should be bracketed by two QCs, although there was no agreement on “how close” the bracketing QCs should be (for assays with larger dynamic ranges). For diluted samples, which comprise the bulk of samples analyzed by LBA, it was noted that adjustments in dilution factors can help ensure that sample results are derived from the entire curve range, but the scientific value of this approach was questioned since for accuracy purposes it is desirable to target the middle of the curve.
Partial validation was discussed at CCV in the Common Topics session, with no additional discussion in the LBA session. The discussion did not include definition of specific criteria for the level of validation required to support changes to a previously validated assay, as this was deemed to be beyond the scope of the workshop. The industry recommendation was that testing requirements to support transfers and/or changes should use good scientific judgment and be defined in SOPs or study validation plans. In the context of the NBC LBA workshop, it was highlighted that partial validations can range from as little as one accuracy and precision determination to a nearly full validation. Typical bioanalytical method changes that fall into this category include, but are not limited to, the following: bioanalytical method transfers between laboratories or analysts, changes in instrument and/or software platforms, reagent lot changes, or changes in sample processing procedures (3, 4). Brief discussion was also devoted to definitions of full validation, partial validation, and cross-validation. Full validation is necessary when developing and implementing a bioanalytical method for the first time for an analyte of interest. Partial validations are modifications of validated bioanalytical methods that do not necessarily require full revalidation, as in the examples provided above. Cross-validation is a comparison of two bioanalytical methods and is necessary when two or more bioanalytical methods are used to generate data within the same study or submission. It was noted that confusion in terms can arise, since many labs and industry professionals refer to cross-validation in the context of validating the same bioanalytical method at multiple labs.
Questions posed to the expert panel after the CCV presentation were as follows:
- Is there risk in bringing the ULOQ down to meet the FDA Draft Guidance acceptance criteria when there are no anchor points and the sigmoidal curve shape may be lost?
Consensus was that anchor points may be necessary when the ULOQ is moved farther down the curve and will depend upon the assay and the curve fit model, noting that sigmoidal curve fits will often require anchor points.
- Do you include dilution QCs in your sample analysis runs?
This topic arose in the open discussion sessions at CCV. Industry practice seemed to be divided with some laboratories routinely including dilution QCs as reassurance that sample dilutions were performed correctly in a given run. Others, however, see it as only indicating whether the dilution QC was diluted correctly and are not reassured that this indicates that samples were also diluted correctly. Consensus at the NBC LBA workshop was that dilutional linearity assessments should be performed during validation to establish the assay’s ability to accurately measure diluted samples and that dilution QCs in sample analysis runs are not necessary.
- What is the best practice for use of an intermediate stock? Do you need additional stability data if you make an intermediate standard stock and use it in the future?
It was agreed that stability of the intermediate stock should be demonstrated. Stability assessments should be performed in the assay using a fresh standard curve (prepared with fresh intermediate stock).
- Does specificity need to be tested for concomitant medications (small molecules)?
Specificity for small molecule concomitant medications is typically not relevant and should only be performed if there is an expectation of interference due to the underlying biology or pathology.
- Should ADA interference in PK assays be tested in validation?
It is acknowledged that antibodies available for such assessments are imperfect surrogates of antibodies that may arise in vivo. It is also understood that measurements in free PK assay formats are expected to be impacted by the presence of ADA. There are two schools of thought on this topic with some industry professionals believing that this assessment should always be performed and others who believe that evaluation in method development is sufficient to understand the potential extent of interference.
- Is it acceptable to use the same assay for two different programs?
It is acceptable, but a partial validation for the second therapeutic molecule should be performed. Specificity for different biotherapeutics against the same target is better controlled in the context of toxicology studies, as it is easier to ensure that animals were not exposed to the initial biotherapeutic.
Topics Relating to the Calibration Curve
Michaela Golob and Viswanath Devanarayan covered questions related to the calibration curve, defined as the representation of the relationship between concentration and response for an analyte, which in LBAs (ligand-binding assays) is most often fitted to a 4PL or 5PL logistic model with weighting. The discussion started with a summary of recommendations on optimal calibration curve design based on Findlay and Dillard (5), “Appropriate Calibration Curve Fitting in Ligand Binding Assays,” where, for the 4PL model, the following main recommendations are listed:
A minimum of five calibrator concentrations and not more than eight should be used
The calibrators should be prepared and analyzed in duplicate or triplicate
The concentration progression should be evenly spaced on a logarithmic scale, typically in powers of 2 or 3
The midpoint concentration of the calibrators should be somewhat greater than the IC50
Anchor concentrations outside the expected validated range should be considered for inclusion to optimize the fit
From a curve-fitting point of view, the use of five to eight calibrators ensures that there are enough calibrators to adequately estimate all four parameters in the model and still allow for an assessment of the fit quality (“goodness-of-fit”). From a regulatory point of view, a minimum of six valid calibrators are expected within the assay’s validated range. International guidances confirm that standards should be run at least in duplicate and be spaced evenly within the anticipated range. Anchor points are generally useful and highly recommended to improve the quality of the curve fit; they lie outside the quantification range and are not considered for acceptance criteria. Target acceptance criteria (% bias) for standards are given by agencies as 20% within the nominal range and 25% at the LLOQ and ULOQ (US, 20%), for at least 75% of calibration standards.
The question was posed as follows: is it allowed to use more than 8 standards?
The answer was yes; however, very little further information on curve fitting is gained and that should be weighed against the loss of plate capacity.
The difference between the 4PL and 5PL models was addressed. The 4PL model assumes that the top and bottom parts of the sigmoidal curve are symmetric, i.e., mirror images of each other. The fifth parameter in the 5PL model, called the asymmetry parameter, relaxes this assumption and thus enables fitting curves that are not perfectly sigmoidal. The 5PL model therefore does not assume symmetry in the concentration-response relationship and can accommodate a wider variety of calibration curves, including those that do not have a well-defined upper plateau. In our experience, the performance of calibration curves fit with the 5PL model has been at least as good as with the 4PL, so it would be appropriate to consider the 5PL as a default choice for fitting calibration curves. Six or more calibrators are recommended for routine application of the 5PL model, as it has one more parameter to estimate and therefore consumes more degrees of freedom than the 4PL model.
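For clarity, the two models can be written as follows (a standard parameterization, not taken from the workshop slides; with g = 1 the 5PL reduces to the 4PL):

```latex
% 4PL: a = response at zero concentration, d = response at infinite concentration,
% c = concentration at the inflection point (EC50), b = slope factor
\[
y = d + \frac{a - d}{1 + (x/c)^{b}}
\]

% 5PL: the extra parameter g controls asymmetry
\[
y = d + \frac{a - d}{\left[\, 1 + (x/c)^{b} \,\right]^{g}}
\]
```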
The next set of questions addressed what weighting a calibration curve means and how to determine the appropriate weighting factor.
Most curve-fitting programs assume by default that the variance of the assay signal is the same throughout the entire range of the curve. However, calibration curve data from most assay formats, especially immunoassays, are such that the variance of the assay signal increases in proportion to the signal. In such cases, the data in a curve-fitting model such as the 4PL or 5PL should be weighted to account for this increase in variance with the assay signal. This is often referred to as weighting the calibration curves. Typical weighting options in laboratory software are 1/Y, 1/Y², etc. A simple way to determine the appropriate weighting factor is to first plot the logarithm of the standard deviation of the replicates of each calibrator versus the logarithm of the mean assay signal. Then, fit a linear model to these data and estimate the slope. Repeat this for at least six calibration curves and average the slope estimates. If the average slope estimate is 1, the weighting factor 1/Y should be used; similarly, a slope of 0.5 corresponds to 1/√Y, and 2 corresponds to 1/Y².
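As an illustration of the slope-based procedure just described, the following minimal Python sketch (using NumPy; the replicate responses are hypothetical placeholders, not workshop data) estimates the log(SD) versus log(mean) slope for each curve, averages across curves, and maps the result to the nearest conventional weighting factor:

```python
import numpy as np

def weighting_slope(replicate_signals):
    """Estimate the log(SD)-vs-log(mean) slope for one calibration curve.

    replicate_signals: list of arrays, one array of replicate responses
    per calibrator concentration.
    """
    means = np.array([np.mean(r) for r in replicate_signals])
    sds = np.array([np.std(r, ddof=1) for r in replicate_signals])
    # Fit a straight line to log(SD) vs log(mean); the slope characterizes
    # how response variability grows with the assay signal.
    slope, _intercept = np.polyfit(np.log(means), np.log(sds), 1)
    return slope

# Hypothetical duplicate responses (e.g., OD units) for 6 calibration curves
curves = [
    [np.array([0.11, 0.12]), np.array([0.25, 0.27]), np.array([0.60, 0.66]),
     np.array([1.30, 1.45]), np.array([2.40, 2.70]), np.array([3.10, 3.60])]
    for _ in range(6)
]

avg_slope = np.mean([weighting_slope(c) for c in curves])

# Map the averaged slope to a conventional weighting factor, following the
# rule of thumb in the text (slope 0.5 -> 1/sqrt(Y), 1 -> 1/Y, 2 -> 1/Y^2).
options = {0.5: "1/sqrt(Y)", 1.0: "1/Y", 2.0: "1/Y^2"}
chosen = min(options, key=lambda s: abs(s - avg_slope))
print(f"average slope = {avg_slope:.2f} -> suggested weighting {options[chosen]}")
```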
Question: Is the linear portion of the sigmoidal calibration curve the best part of the assay?
This would be true if the variance of the assay signal were the same across the signal range. As described above, this is not the case for most immunoassays, as the variance increases in proportion to the signal. Therefore, the best part of the assay can start in the nonlinear portion of the calibration curve (lower quantification limit) and can end well before the linear portion ends (upper quantification limit). This is illustrated and addressed in detail in Lee et al. (2006). Weighting the calibration curve results in a much lower quantification limit and hence a much more sensitive assay; this improvement is typically 3- to 10-fold in our experience. The upper quantification limit also shifts to the left.
Question: Can blank samples be used for calibration curve fitting, or should they be used to subtract background signal from the calibrator samples?
If the calibration curve will be loaded onto every assay plate and run, then subtracting the blank signal from the calibrator samples does not offer much mathematical benefit. Instead, it would be more useful to include such a blank sample as a zero-concentration sample in the calibration curve fitting. This can improve the performance of the calibration curve, as was illustrated in the workshop using real data by comparing the accuracy and precision of quality control samples from calibration curves fit with or without the blank samples.
At the Crystal City V meeting, considerable discussion centered on what a “fresh” calibration standard and QC means. The definitions ranged from a preparation made on the day of use to a preparation frozen overnight to accommodate the conduct of the analysis.
The main opinion of the panel was that “fresh” means “not/never frozen”. The use of new definitions such as “fresh-fresh” or “fresh-frozen” was considered unhelpful, since they are open to interpretation. A “freshly prepared” sample refers to a preparation in biological matrix where the drug is freshly spiked on the day of use. Frozen, regardless of for how long, means that the drug is exposed to temperature fluctuations upon freezing and thawing that may potentially affect the molecular structure or binding equilibrium and alter the recovery of the drug. In cases where frozen standards and frozen QCs prepared at the same time are used in the same assay, it is important to note that changes due to temperature shifts will not be observed, because calibrators and QCs will trend in parallel during sample analysis.
Regarding calibration standards, different processes are established at different companies. Both frozen and freshly prepared calibrators are used for sample analysis across the industry. When frozen pre-made standards in matrix are used, an appropriate assessment of stability is required.
However, there is agreement in the bioanalytical community on the need for frozen QC samples, as they must mimic unknown samples, which are typically frozen. The discussion came up at the CCV meeting, where the meaning of “fresh QCs” with regard to stability, as mentioned in the FDA draft guidance, was explained by an agency representative as “frozen for a short period of time”. In the workshop, there was agreement that these samples cannot be called “fresh” but should be defined as “frozen, i.e., overnight, for 24 h, or for not more than x days,” where x may be a short period of 3–5 days based on an upfront company definition and supporting validation data. Nevertheless, QC samples are frozen.
Free Drug Quantification
Roland Staack discussed the determination of correct free drug concentrations, which can be highly important in the evaluation of pharmacokinetics for new drug candidates that bind a soluble ligand or when a soluble/shed form of the membrane-bound target is present in the circulation (6).
In the PK session, the major bioanalytical challenges that might result in incorrect free drug concentrations (sample preparation, assay procedure, and the applied calibration concept) were discussed, as well as the recently proposed “free analyte QC concept” as a tool to develop and qualify/validate “free analyte” assays.
Simple sample dilution, which is typically the only sample preparation for ligand-binding assays, can already significantly impact the correct quantification of free drug concentrations (7). Sample dilution using a ligand-free matrix/buffer induces dissociation of the non-covalent drug-ligand complexes, resulting in an increase of the free drug fraction. Conversely, if the dilution matrix contains a certain amount of the ligand, additional ligand is added to the sample with each dilution step, resulting in the formation of new drug-ligand complexes. This dilution procedure results in decreasing free drug concentrations and finally in a normalization of the free drug fraction. Depending on the assay range, this normalization of the free drug fraction (at equilibrium) bears the risk of misinterpretation of parallelism experiments, since the determined “normalized” free drug concentrations might comply with the acceptance criteria for parallelism testing but do not reflect the actual free drug concentration in the sample. Consequently, successful parallelism experiments do not unequivocally prove the correctness of a free drug assay result. In addition to the dilution factor, time is most critical and determines the extent of the dilution-induced error, which depends on the binding kinetics between drug and ligand/binding partner, i.e., how long it takes until the new equilibrium between drug and ligand is reached. Consequently, during assay development, the assay procedure (selection of dilution matrix, timing of sample dilution, and incubation time of the sample during the capture step) needs to be optimized to control these effects.
The recently proposed “free analyte QC concept” (8) may be a useful approach for free drug assay development as well as assay qualification/validation. This approach is based on the generation of QC samples with a defined free drug concentration in equilibrium with drug-ligand complexes and thus enables monitoring of potential interferences with the equilibrium on a quantitative basis. Such “free analyte QCs” enable targeted development of free drug assays and can be used for assay qualification/validation. The QC samples are prepared by mixing defined amounts of drug and ligand in a ligand-free matrix; after the interaction reaches equilibrium, the free drug concentration is calculated based on the binding affinity (KD), and this value is set as the target value. Varying the drug-ligand ratio enables generation of QC samples with different free drug concentrations covering the calibration range of the assay. Ideally, purified endogenous ligand would be available for the preparation of the free analyte QC samples; however, in many cases purified ligand is not available. Consequently, rDNA-produced material represents the best surrogate, as is used during clone selection or characterization of binding kinetics. Another option might be to use samples with known endogenous ligand concentrations to prepare the free analyte QCs (8).
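To make the free analyte QC calculation concrete, the equilibrium free drug concentration can be computed from the total drug and total ligand concentrations and the affinity, assuming simple 1:1 binding (a simplifying assumption made for this sketch; real systems may have different stoichiometry, and the concentrations and KD below are hypothetical). The same calculation also illustrates the dilution-induced shift in free fraction discussed above:

```python
import math

def free_drug(total_drug, total_ligand, kd):
    """Free drug concentration at equilibrium for 1:1 drug-ligand binding.

    All quantities in the same concentration unit (e.g., nM).
    Solves KD = [D][L]/[DL] together with the drug and ligand mass balances.
    """
    b = total_drug + total_ligand + kd
    complex_conc = (b - math.sqrt(b * b - 4.0 * total_drug * total_ligand)) / 2.0
    return total_drug - complex_conc

# Hypothetical free analyte QC: 100 nM drug mixed with 80 nM ligand, KD = 1 nM
target = free_drug(100.0, 80.0, 1.0)
print(f"target free drug concentration ~ {target:.1f} nM")

# Dilution with ligand-free buffer: both totals drop, but the free fraction
# rises, illustrating the dilution-induced error described in the text.
for dilution in (1, 10, 100):
    free = free_drug(100.0 / dilution, 80.0 / dilution, 1.0)
    print(f"1:{dilution} dilution -> free fraction {free / (100.0 / dilution):.2%}")
```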
A perfect “free drug” assay, however, might still provide incorrect free drug data if an inappropriate calibration concept is applied. Typically, the calibrators are prepared in the same biological matrix as the samples of the intended study. Consequently, the “blank” matrix (with respect to drug) might contain a certain concentration of the ligand, which could introduce a systematic error in the drug quantification if a “free drug” assay is applied. The error arises because the calibration curve is constructed by plotting the spiked nominal drug concentration against the assay signal, instead of the (unknown) free drug concentration that actually generated the signal (6). Correct quantification of free drug concentrations requires a ligand-free calibration matrix. Given the differing calibration strategies required for free drug assays compared with total drug assays, it was discussed that it would be beneficial to understand these basic differences when developing and validating “free drug” assays.
Immunogenicity
The main point from the discussion on immunogenicity was the need to perform an upfront risk assessment, since it provides a road map to guide the immunogenicity program. In addition, it delineates how often to collect samples and defines how quickly analysis should be performed. Dr. Pedras-Vasconcelos of the FDA reminded the participants that a multidisciplinary analysis team is the key to success and that integration of immunogenicity with clinical endpoints and patient safety can be very important. He also recommended always banking the samples, since one never knows when the FDA will ask to go back to them.
Recommendations and comments from the panelists and some attendees are listed below:
- For non-clinical program support, some companies only do screening and skip the confirmatory and titer determinations.
- By phase III, the FDA wants a fully validated assay. Some do not validate before phase I; they analyze the samples and determine whether there are any issues, and if redevelopment is needed, they reanalyze.
- Use all validation runs to establish the criteria for QCs, not just the cut-point runs.
- Outliers should be excluded prior to cut-point evaluation. Identify and exclude the plate outliers (typically these are “analytical” outliers) from each assay plate separately; this can be accomplished via Tukey’s outlier box plot. Because extreme high outlier samples can mask the detection of lower outlier samples, and these are not independent, this process should be iterated until there are no plate outliers. After the plate outliers are excluded, the subject outliers should be identified and excluded. This can be done using Tukey’s outlier box plots on the subject means, iterating until there are no more outliers (a minimal computational sketch follows this list). This iteration process is important if the mean and standard deviation (SD) are used in the cut-point evaluation formula. If robust alternatives such as the median and median absolute deviation, respectively, or Tukey’s biweight method are used, then this iteration often does not result in a substantial difference in cut-point results and may therefore be less important.
- It is important to note that some “outliers” may be true positives (after evaluation with the confirmatory assay); these should be removed prior to outlier analysis, which is meant for removing samples with high signals that skew the population distribution. If the cut-point evaluation was done in a different disease population, or if there is reason to believe that the validation samples may not represent the pre-dose baseline samples from a clinical trial, objective criteria should be applied to determine whether the same cut point can be used. This can be done by comparing the distribution of the validation sample cohort used for cut-point evaluation to the new sample cohort. If the variability is significantly different between these cohorts (Levene’s test p value < 0.05), then the same cut point should not be used. If the variability is not significantly different, and the means are also similar, then the same cut point can be used. If only the means are significantly different, then one option would be to create a negative control pool based on the new cohort, take the cut-point correction factor from the previous population (validation cohort), and apply it to the new negative control during the in-study sample testing phase. Another option would be to derive a new cut-point correction factor based on the new cohort and use it with the old negative control.
- Consider life cycle maintenance for immunogenicity assays, as every proposed novel clinical indication will require immunogenicity testing, and possibly assay requalification.
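As referenced in the outlier-exclusion recommendation above, the following minimal Python sketch (NumPy only; the data are simulated and the normal-theory cut point targeting ~5% false positives is just one of several approaches mentioned in the text) illustrates the iterative Tukey box-plot exclusion on subject-level values and a simple parametric screening cut-point calculation:

```python
import numpy as np

def tukey_exclude(values, k=1.5):
    """Iteratively drop Tukey box-plot outliers (outside Q1 - k*IQR, Q3 + k*IQR)."""
    values = np.asarray(values, dtype=float)
    while True:
        q1, q3 = np.percentile(values, [25, 75])
        iqr = q3 - q1
        keep = (values >= q1 - k * iqr) & (values <= q3 + k * iqr)
        if keep.all():
            return values
        values = values[keep]  # re-evaluate quartiles after each exclusion pass

# Simulated log-transformed responses from drug-naive subjects, plus two high outliers
rng = np.random.default_rng(0)
subject_means = np.concatenate([rng.normal(0.0, 0.1, 50), [0.8, 1.1]])

clean = tukey_exclude(subject_means)

# Illustrative parametric screening cut point (mean + 1.645*SD on the cleaned
# distribution); robust or nonparametric alternatives may be used instead.
cut_point = clean.mean() + 1.645 * clean.std(ddof=1)
print(f"retained {clean.size}/{subject_means.size} subjects, screening cut point = {cut_point:.3f}")
```

A cohort-to-cohort variability comparison, as described in the next recommendation, could similarly be scripted with an equality-of-variance test such as scipy.stats.levene applied to the two sets of subject values.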
Alternative Technologies that Support All Bioanalytical Assays
Rand Jenkins provided an overview of protein bioanalysis by LC-MS and mentioned some highlights from an industry white paper, “Recommendations for Validation of LC-MS/MS Bioanalytical Methods for Protein Biotherapeutics,” which has recently been published in the AAPS Journal (9). Currently, most LC-MS protein quantification methods involve isolation or enrichment of the target analyte from the biological matrix, such as plasma or serum, followed by proteolytic digestion to produce a mixture of characteristic peptides. Those that are unique to the analyte (often termed signature peptides) in the context of the biological sample are then measured as surrogates for the intact protein.
An example was discussed where this approach offers several practical advantages: analysis of total mAb in nonclinical species using a “universal” LC-MS/MS assay design. Most mAb therapeutics are humanized or fully human antibodies, which share a common light/heavy chain framework structure for a given isotype. By selecting human-specific peptide sequences (not present in the nonclinical species), a single method can be applied to most mAbs of the same isotype in multiple nonclinical species. Protein A/G or anti-human Fc reagents are typically used as generic affinity capture reagents, and no analyte-specific critical reagents are required, which is a significant benefit for early-stage work. This concept is also being extended to total antibody-drug conjugate (ADC) analysis in nonclinical studies.
A question was submitted at the workshop regarding the stability of ADCs. An LC-MS-based approach to evaluating ADC stability in terms of drug-antibody ratio (DAR) changes was discussed. ADCs are known to undergo in vitro and in vivo de-conjugation and/or differential elimination based on their drug loading. The MS-based technique involves anti-idiotypic Ab or target immunoaffinity capture of the ADCs from a biological sample, followed by enzymatic de-glycosylation and LC-high resolution MS analysis of the intact molecules. Using software to de-convolute the complex full spectrum MS data, the molecular weights of the different species can be calculated and the proportion of the molecules with different DARs determined. Changes in DAR with time and exposure are useful to characterize both ADC stability and PK behavior.
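As a small arithmetic illustration of the DAR readout described above: once the deconvoluted spectrum yields relative abundances for each drug-load species, the weighted-average DAR is simply the abundance-weighted mean of the drug loads (the abundance values below are hypothetical, not workshop data):

```python
# Hypothetical relative abundances of DAR species from a deconvoluted spectrum
# (keys are drug loads per antibody, values are relative peak intensities)
dar_abundance = {0: 0.05, 2: 0.30, 4: 0.40, 6: 0.20, 8: 0.05}

total = sum(dar_abundance.values())
average_dar = sum(load * abundance for load, abundance in dar_abundance.items()) / total
print(f"weighted-average DAR = {average_dar:.2f}")
```

Tracking this weighted average across time points is one way to express the DAR changes with time and exposure mentioned above.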
Feedback from the FDA attendee Dr. Pedras-Vasconcelos was that the agency is very open to the introduction of new analytical technologies. However, sponsors should reach out and educate the agency about the chosen new technologies, e.g., LC-MS or SPR. There are two types of forums which sponsors can request: case specific (confidential) or open forum (scientific or criteria setting). Orthogonal assessments are acceptable, but in some cases, not possible, so appropriate justification to the agency would be expected.
CONCLUSIONS
The AAPS NBC Workshop on Method Development, Validation, and Troubleshooting of ligand-binding assays enabled an interactive session in which scientists shared their experiences with quantitative assays, immunogenicity assays, and biomarker assays. While there was not sufficient time to discuss new technology platforms and ADCs, this dialog will continue in the future. Such interactions between industry, academic, and regulatory scientists will strengthen the understanding of the best technology and methodology to support biotherapeutics, with the ultimate goal of patient benefit.
Acknowledgments
The authors would like to acknowledge the detailed record keeping of Dr. Johanna Mora (BMS) and Dr. Theingy Thway (Amgen). This workshop would not have been successful without the support of AAPS staff.
References
- 1. FDA, US Department of Health and Human Services. Draft guidance for industry: Bioanalytical Method Validation (Revised). [Online] September 2013. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM368107.pdf
- 2. Booth B, Arnold ME, DeSilva B, Amaravadi L, Dudal S, Fluhler E, Gorovits B, Haidar SH, Kadavil J, Lowes S, Nicholson R, Rock M, Skelly M, Stevenson L, Subramaniam S, Weiner R, Woolf E. Workshop Report: Crystal City V-Quantitative Bioanalytical Method Validation and Implementation: The 2013 Revised FDA Guidance. AAPS J. 2014 Dec 31.
- 3. DeSilva B, Smith W, et al. Recommendations for the bioanalytical method validation of ligand-binding assays to support pharmacokinetic assessments of macromolecules. Pharm Res. 2003;20:1885–1900. doi: 10.1023/B:PHAM.0000003390.51761.3d.
- 4. Kelley M, DeSilva B. Key elements of bioanalytical method validation for macromolecules. AAPS J. 2007;9:E156–E163. doi: 10.1208/aapsj0902017.
- 5. Findlay JW, Dillard RF. Appropriate calibration curve fitting in ligand binding assays. AAPS J. 2007;9:E260–E267. doi: 10.1208/aapsj0902029.
- 6. Lee JW, et al. Bioanalytical approaches to quantify “total” and “free” therapeutic antibodies and their targets: technical challenges and PK/PD applications over the course of drug development. AAPS J. 2011;13:99–110. doi: 10.1208/s12248-011-9251-3.
- 7. Staack RF, Gregor J, Julia H. Mathematical simulations for bioanalytical assay development: the un-necessity and impossibility of free drug quantification. Bioanalysis. 2012;4(4):381–395. doi: 10.4155/bio.11.321.
- 8. Staack RF, Gregor J, Uwe D, Julia H. Free analyte QC concept: a novel approach to prove correct quantification of free therapeutic protein drug/biomarker concentrations. Bioanalysis. 2014;6(4):485–496. doi: 10.4155/bio.13.316.
- 9. Jenkins R, et al. Recommendations for validation of LC-MS/MS bioanalytical methods for protein biotherapeutics. AAPS J. 2015;17:1–16. doi: 10.1208/s12248-014-9685-5.
