. 2024 Jan 8;96(3):966–979. doi: 10.1021/acs.analchem.3c03708

Ongoing Analytical Procedure Performance Verification Using a Risk-Based Approach to Determine Performance Monitoring Requirements

Phil J Borman , Amanda M Guiraldelli †,*, Jane Weitzel , Sarah Thompson , Joachim Ermer , Jean-Marc Roussel , Jaime Marach , Stephanie Sproule , Horacio N Pappa
PMCID: PMC10809227  PMID: 38191128

Abstract


The analytical procedure life cycle (APLC) provides a holistic framework to ensure that an analytical procedure is fit for purpose. USP’s general chapter <1220> considers the validation activities that take place across the entire analytical procedure life cycle and provides a three-stage framework for its implementation. Ongoing analytical procedure performance verification (OPPV) (stage 3) ensures that the procedure remains in a state of control throughout its lifecycle of use post validation (qualification) and involves an ongoing program to collect and analyze data relating to the performance of the procedure. Knowledge generated during stage 1 (procedure design) and stage 2 (procedure performance qualification) is used as the basis for designing the routine monitoring plan that supports performance verification (stage 3). The extent of the routine monitoring required should be defined based on risk assessment, considering the complexity of the procedure, its intended purpose, and knowledge about process/procedure variability. The analytical target profile (ATP) can be used to provide or guide the establishment of acceptance criteria used to verify procedure performance during routine use (e.g., through a system/sample suitability test (SST) or verification criteria applicable to procedure changes or transfers). An ATP, however, is not strictly required to perform OPPV, and a procedure performance monitoring program can be implemented even if the full APLC framework has not been applied. In these situations, verification criteria can be derived from existing validation or system suitability criteria. Elements of the life cycle approach can also be applied retrospectively if deemed useful.

Introduction

Robust and reliable analytical procedures are required across many manufacturing industries, including the pharmaceutical, fine and specialty chemical, food, and petrochemical industries. These industries rely on fit-for-purpose analytical procedures over many years to ensure that routinely manufactured products are of high quality. Many of these industries use ISO-based accreditation or certification to ensure that the reportable values are fit for purpose. For example, ISO/IEC 17025 explicitly includes the fit-for-purpose requirement. Analytical procedure failures can result in a delay or inability to deliver products to customers or, worse, lead to unacceptable products being released because reportable results incorrectly appear to be within specification. In the pharmaceutical industry, this can have severe consequences, such as being unable to deliver critical medicines to patients. The ATP as described by Jackson et al. and others19 can be established to summarize the performance requirements associated with a measurement on a quality attribute (or multiple attributes), which need to be met by an analytical procedure. The ATP can be used to define and assess the fitness of an analytical procedure during the development phase as well as to help define the validation (or qualification) criteria of the developed procedure. Analytical procedures used to test pharmaceutical products are typically validated in accordance with the International Council for Harmonization (ICH) Q2(R1) guideline10 or USP <1225>.11 Validation, however, is often treated as a one-off event,12 with little consideration given to verifying how well the procedure will perform in everyday, “real world” operating conditions.
Regulators and industry frequently use ICH Q2(R1)10 or USP <1225>11 in a “check box” manner without considering the intent of these guidance documents or the philosophy of analytical procedure validation.13 Performing OPPV ensures that the procedure remains in a state of control across its lifecycle of use post validation (qualification). This should involve an ongoing program to collect and analyze data related to procedure performance. The importance of this continued verification is recognized by its inclusion in ISO standards, such as the ISO/IEC 17025 requirements for ensuring the validity of results. The analytical procedure life cycle (APLC) provides a holistic framework to ensure analytical procedure fitness for purpose. USP’s general chapter <1220>, entitled Analytical Procedure Life Cycle5 (official as of May 1, 2022 in USP-NF), describes a three-stage approach similar to FDA’s lifecycle-based process validation guidance14 and highlights the interconnectivity of all stages and activities that are intended to demonstrate procedure fitness for use, allowing for continuous risk and knowledge management. The APLC framework consists of the ATP and three stages: procedure design (stage 1), procedure performance qualification (stage 2), and ongoing procedure performance verification (stage 3).5,9 The enablers of this enhanced approach, relative to the traditional approach, are knowledge management, quality risk management (QRM), and the application of sound scientific approaches. This allows for in-depth knowledge acquisition of the procedure. The analytical procedure life cycle approach is consistent with quality by design (QbD) concepts described in the ICH Q guidelines (Q8–13).15–20 The new draft ICH Q1421 is the first to address QbD principles applied to procedure lifecycle management; it is similar to stage 1 of USP <1220> while also covering part of stage 3. Q2(R2) can be seen, at least in part, as similar to stage 2 described in USP <1220>.
ICH Q14 states that “in the enhanced approach the control strategy is derived from systematic and risk-based evaluation of analytical procedure parameters and an understanding of potential impact on analytical procedure performance. The analytical procedure parameters that need to be controlled to ensure the performance requirements described in the ATP are met and their acceptable ranges should be described in the procedure”.21 OPPV provides data and information about the procedure performance over time and involves monitoring the analytical procedure during routine use, thus providing ongoing confidence that the reportable values generated are fit-for-purpose.22 In addition, ongoing verification can be used to assess the impact of changes to analytical conditions, ensuring the modified procedure will produce reportable values that meet performance criteria that can be defined in an ATP and are fit-for-purpose.5,8,9,22

It can also serve as a preventive measure by providing an early indication of potential performance issues or adverse trends, aiding in the identification of potential changes to the analytical procedure. The overall knowledge generated during stages 1 and 2 is used as a basis for the design of the routine monitoring plan to support performance verification. The extent of the routine monitoring required should be defined based on risk assessment, considering the complexity of the procedure, its intended purpose, and knowledge about process/procedure variability.22,23 Risk assessment allows for the identification of the analytical procedures that require performance monitoring and of the extent of that monitoring, facilitating the selection of suitable indicators. Steps in the design of the procedure performance monitoring program will usually include:

• Assessment of risk associated with the analytical procedure

• Selection of appropriate analytical procedure performance indicators and attributes (based on risks identified during stages 1 and 2)

• Establishment of data sampling strategy and definition of assessment frequency

• Establishment of data processing and analysis methodology (e.g., data analysis and visualization such as trending charts, control charts, and other Statistical Analytical Procedure Performance Monitoring and Control (SAPPMC) techniques; data analysis methodologies using statistical tools)

• Establishment of suitable performance monitoring and controlling rules (limits/acceptance criteria)

• Definition of a reaction plan to guide actions (e.g., need for additional training, SOP review/creation, updating the existing analytical control strategy, trending/analyzing additional performance indicators not previously monitored)

The procedure performance monitoring may include the definition of the strategy to assess rates of failures and the root cause for invalid tests.
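The steps above can be captured in a simple per-procedure plan object. The following Python sketch is purely illustrative; the field names and example values are assumptions for demonstration, not terminology prescribed by USP <1220>.

```python
from dataclasses import dataclass, field

@dataclass
class MonitoringPlan:
    """Illustrative per-procedure monitoring plan (all field names are assumed)."""
    procedure: str
    risk_category: str                          # "low", "medium", or "high"
    performance_indicators: list = field(default_factory=list)
    sampling_rate: str = "each release test"    # when data are collected
    review_frequency: str = "quarterly"         # how often indicators are evaluated
    control_rules: list = field(default_factory=list)
    reaction_plan: str = "investigate; retrain; update control strategy as needed"

# Example plan for a hypothetical high-risk chromatographic assay
plan = MonitoringPlan(
    procedure="Assay by HPLC",
    risk_category="high",
    performance_indicators=["OOS rate (lab root cause)", "SST failures",
                            "S/N of sensitivity standard"],
    control_rules=["point beyond 3-sigma limit",
                   "8 consecutive points on one side of the mean"],
)
```

In practice such a plan would live in the quality system documentation; the point of the sketch is only that each element listed in the bullets above maps to a concrete, reviewable field.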

In stage 3, the ATP can be used to provide or guide the establishment of acceptance criteria used to verify the procedure performance, assess the risks, and determine the probability of success and failure. If an ATP has previously been established, then performance criteria can be verified directly (e.g., through precision and accuracy of a quality control sample with the same replication strategy as for the reportable value) or indirectly by ensuring that key analytical procedure attributes (e.g., precision levels, chromatographic resolution between peaks, tailing factor of peaks, etc.) linked to the ATP are within appropriate limits.22 OPPV can still be performed even if an ATP has not been previously defined, and elements of the life cycle approach can also be applied retrospectively if deemed useful. In these situations, verification criteria can be derived from existing validation or system suitability criteria. However, retrospective establishment of an ATP is recommended to ensure clear alignment with the performance requirements. For low-risk procedures, verification may simply involve monitoring of atypical results and/or system suitability failures.6

This paper focuses on describing the importance of the OPPV in the APLC framework to ensure the procedure is fit for use. The following sections cover strategies that support the design of the performance monitoring plan along with the different elements that should be contained in the monitoring program.

Identification of Analytical Procedures Which Require Performance Monitoring through Risk Assessment

The performance of all analytical procedures used in the product control strategy (quality control testing) should be reviewed across the lifecycle of the product to ensure they remain fit for their intended purpose.5,10,23 This review should include routine monitoring (for prioritized procedures) as well as the evaluation of changes made to procedures.23 The extent of the routine monitoring required can be defined using a risk-based approach. When designing a risk assessment for an analytical procedure, it is important to consider which measures of analytical procedure performance or risk are readily available as inputs to the assessment. Two risk assessment approaches are described below.

Simple Risk Assessment Approach

A simple risk assessment approach may focus only on the type and complexity of the analytical procedure and its intended use, enabling the establishment of a platform for risk assessment according to the analytical procedure (technique) type. Analytical procedures that do not form a critical part of the product control strategy can be treated as low risk by default. Ermer et al.23 proposed the following strategy for risk categorization:

• Low risk: Qualitative and semiquantitative tests (e.g., visual tests, simple standard tests, limit tests)

• Medium risk: Quantitative tests and less complex procedures (e.g., compendial standard tests, water determination, residual solvents, simple assay techniques (UV or chemical titration), kinetics of viral inactivation on cells, total protein content)

• High risk: Quantitative tests, assays, and more complex procedures (e.g., assay and related substances by liquid chromatography, bioassays, in vivo potency assays)
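This technique-based categorization can be encoded as a simple lookup. The sketch below is a hedged illustration: the technique names and the mapping are example entries in the spirit of the list above, not a definitive classification.

```python
# Illustrative mapping of analytical technique to default risk category,
# loosely following the three-tier scheme of Ermer et al. (example entries only).
RISK_BY_TECHNIQUE = {
    "visual test": "low",
    "limit test": "low",
    "water determination": "medium",
    "residual solvents": "medium",
    "UV assay": "medium",
    "assay by LC": "high",
    "related substances by LC": "high",
    "bioassay": "high",
}

def categorize(technique: str, critical_to_control_strategy: bool = True) -> str:
    """Procedures outside the critical product control strategy default to low risk;
    unknown techniques are treated conservatively as high risk."""
    if not critical_to_control_strategy:
        return "low"
    return RISK_BY_TECHNIQUE.get(technique, "high")

print(categorize("assay by LC"))   # -> high
print(categorize("visual test"))   # -> low
```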

Data-Driven Risk Assessment Approach

This may include an assessment of the current performance of the manufacturing process (which includes variability from the product and the analytical procedure) through an assessment of process performance (Ppk) if enough representative batches of the manufacturing process (and analytical procedure) are available. A more involved risk assessment approach may include an assessment of the precision of the specific analytical procedure relative to the product specification or ATP. Precision-to-tolerance ratio (P/TOL)24 and Z score25 are examples of analytical procedure capability metrics that can be calculated to aid this assessment. Conformity and validity rates and/or procedure robustness and reproducibility10,26 can also be taken into account. Where procedures have been shown to be sensitive to changes (e.g., analyst, equipment, environment, reagent lots, or small changes in procedure parameters), this presents a higher risk. Figure 1 depicts sources of variability associated with the process performance index (Ppk) and procedure capability metrics (such as P/TOL and Z-score).

Figure 1. Different precision levels and a representation of the total procedure and process variability.

Assessment of Manufacturing Process Performance through Ppk Assessment

Process performance index “Ppk” (eq 1) can be used as a worst-case surrogate measure for procedure precision. It represents the combination of manufacturing process variability and analytical procedure variability.

Ppk = min(USL − μ, μ − LSL) / (3σ)    (1)

where USL and LSL are the upper and lower specification limits, μ is the mean, and σ is the overall standard deviation. If a drug product has good overall process performance for a particular critical quality attribute (CQA), i.e., the test results show little variation versus the specification, then it can be assumed that the analytical procedure precision is good.
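As a minimal sketch, eq 1 can be computed directly; the specification limits and assay statistics below are illustrative numbers only.

```python
def ppk(mean: float, sd: float, lsl: float, usl: float) -> float:
    """Process performance index: min(USL - mean, mean - LSL) / (3 * sigma),
    where sigma is the overall (process + analytical) standard deviation."""
    return min(usl - mean, mean - lsl) / (3.0 * sd)

# e.g., assay results centered at 100.0% with overall SD 0.5% against 95.0-105.0%
print(round(ppk(100.0, 0.5, 95.0, 105.0), 2))  # -> 3.33
```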

Assessment of Analytical Procedure Performance through PTOL or Z-Score

The calculation of the PTOL or Z-score metrics (described below) involves the use of the analytical procedure standard deviation and deliberately excludes the manufacturing process variability. It is important to consider this context when defining thresholds for high, medium, and low risk procedures, as per Table 1. If it is preferred to incorporate an estimate of manufacturing process variability into the calculations, then maxima or minima of the process range can be substituted for the process mean, either based on an adequate data set or on estimation (e.g., assuming the process range covers half of the specification range).

Table 1. Data-Driven Risk Assessment for the Identification of High-Risk Analytical Procedures.

Precision to tolerance ratio (PTOL) is described by Chatfield and Borman24 (eq 2).

PTOL = 6σa / (USL − LSL)    (2)

where σa is the analytical procedure standard deviation, USL is the upper specification limit, and LSL is the lower specification limit. Low PTOL values are desirable.24
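Eq 2 can likewise be sketched in a few lines; the standard deviation and limits used are illustrative.

```python
def p_tol(sigma_a: float, lsl: float, usl: float) -> float:
    """Precision-to-tolerance ratio: 6 * sigma_a / (USL - LSL); lower is better,
    since it means the analytical variability consumes less of the tolerance."""
    return 6.0 * sigma_a / (usl - lsl)

# e.g., analytical SD of 0.4% against a 95.0-105.0% specification
print(round(p_tol(0.4, 95.0, 105.0), 2))  # -> 0.24
```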

The Z-score25 (eq 3) is defined as the number of procedure standard deviations between the mean batch result and the nearest specification limit.

Z = |μ − SL| / σa    (3)

where μ is the mean batch result, SL is the nearest specification limit, and σa is the standard deviation of the analytical procedure. The analytical procedure standard deviation (σa) can be derived from a combination of precision studies, such as repeatability, intermediate precision, and ruggedness studies,23 and data generated from routine use of the procedure. High absolute Z-scores are desirable.
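A minimal sketch of eq 3, using the distance to the nearest specification limit; the numbers are illustrative.

```python
def z_score(mean: float, sigma_a: float, lsl: float, usl: float) -> float:
    """Number of analytical standard deviations between the mean batch result
    and the nearest specification limit (higher is better)."""
    nearest = min(abs(usl - mean), abs(mean - lsl))
    return nearest / sigma_a

# e.g., mean batch result 99.0% with analytical SD 0.4% against 95.0-105.0%
print(round(z_score(99.0, 0.4, 95.0, 105.0), 1))  # -> 10.0
```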

A data-driven risk assessment that takes into account process and analytical procedure capability is shown in Table 1. When performing a risk assessment, analysts should be aware that a low-risk procedure could become a high-risk procedure during a subsequent periodic risk assessment, due to changes in the manufacturing process which impact mean batch results, or changes to specification limits. There may not have been a change in the procedure precision (σa). However, the level of risk associated with the procedure is rated as high due to the proximity of the batch data to the specification limit(s), and therefore consideration of procedure performance monitoring is recommended to ensure that a drift in procedure performance does not further increase the risk of out-of-specification batch results. In such cases, it may not be possible or practical to improve procedure performance in order to reduce the risk of out-of-specification batch results. It may be more appropriate to consider manufacturing process improvements or a review of the specification.
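The decision logic of a Table 1-style assessment might be sketched as below. Because the table itself is not reproduced here, the thresholds used (Ppk cutoffs of 1.33 and 1.0; PTOL cutoffs of 0.3 and 0.6) are illustrative assumptions only, chosen from common capability conventions rather than taken from this paper.

```python
def data_driven_risk(ppk: float, p_tol: float) -> str:
    """Hypothetical decision rule combining process performance (Ppk) and
    analytical capability (PTOL); all thresholds are illustrative assumptions."""
    if ppk >= 1.33 and p_tol <= 0.3:
        return "low"       # capable process and capable procedure
    if ppk < 1.0 or p_tol > 0.6:
        return "high"      # either the process or the procedure is marginal
    return "medium"

print(data_driven_risk(2.0, 0.2))  # -> low
print(data_driven_risk(0.8, 0.5))  # -> high
```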

Risk-Based Performance Monitoring Plans for Analytical Procedures

Analytical Procedure Performance Indicators

Different analytical procedure performance indicators can be monitored and used to obtain information about the analytical performance on a continuous basis. Ermer et al.23 proposed the classification of performance indicators into the following classes: (1) conformity indicators, (2) validity indicators, and (3) analytical procedure performance attributes and parameters.

Conformity Indicators

Conformity indicators are the number of out-of-specification (OOS) results with an analytical procedure root cause in a given time period. The number and types of analytical procedure errors provide information about the reliability of the analytical procedure. Where established acceptance limits exist, violation of such limits indicates performance problems. Reportable values outside of the specification limits may also be caused by manufacturing root causes, but analytical root causes (laboratory error) may reveal poor performance of the analytical procedure. An OOS investigation procedure is a basic GMP requirement and must be established in any case; thus, this information is readily available and may be classified as “conformity”.23

Validity Indicators

Validity indicators are usually the number of invalid test results due to failures of system or sample suitability criteria (usually established during stages 1 and 2 of the lifecycle approach based on risk assessment) in a given time period, or terminations of the analysis for other reasons (e.g., not yielding the reportable value). Violation of analytical procedure acceptance criteria established for validity indicators (e.g., system suitability test (SST) limits or sample suitability assessment (SSA)21) is also straightforward information; it results in invalidation of the whole analysis and may be classified as “validity”.23 The conformity and validity categories are of a qualitative nature, and graphical presentation or control charts are not necessarily needed for monitoring performance. Both categories may be evaluated as a total number during the evaluation period or, depending on the volume of testing, as a rate with respect to the total number of analyses.

Analytical Procedure Performance Attributes or Parameters

These measurements, collected during testing, may be critical analytical procedure attributes or performance parameters specified in the analytical procedure with well-defined acceptance criteria or ranges, respectively, or attributes and parameters selected because of a potential link with procedure performance. Such numerical data lend themselves well to routine monitoring by means of control charts. However, if the attributes are primarily used for adjustment purposes, they are not well suited to the monitoring program of the analytical procedure. Whenever replication is performed, the precision at the respective level can be monitored. Examples include the following:

• The range (i.e., the difference between minimum and maximum result) or RSD of replicate standard or sample injections of the same preparation solution will provide information on the system variability.

• The range or RSD of independent replicates of standard or sample preparations will provide information on the preparation (repeatability) variability.

• Replicate series or runs (common for bioassays) will provide information on the (intermediate) factors varied between the runs (usually calibration as a major contribution).
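The range and RSD calculations above can be sketched as follows; the replicate injection values are illustrative.

```python
import statistics

def replicate_range(values):
    """Difference between the maximum and minimum replicate result."""
    return max(values) - min(values)

def rsd_percent(values):
    """Relative standard deviation: sample SD divided by the mean, in percent."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# e.g., six replicate injections of the same preparation solution (system precision)
injections = [100.1, 99.8, 100.0, 100.3, 99.9, 100.2]
print(round(replicate_range(injections), 1))  # -> 0.5
print(round(rsd_percent(injections), 2))
```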

Analyzing a material which is representative of the sample in question (control sample, QC check) will address not only precision but also (long-term) accuracy, as a shift from the target value (average) can be evaluated (bias). If replication is performed, all precision levels can be covered. Where the same replication strategy is used27,28 as for the generation of the reportable value,23 the precision of the reportable value can be monitored and compared directly to the ATP requirement. Other examples are given in Table 6. Numerical analytical procedure attributes may also include resolution, peak symmetry, signal-to-noise ratio, root-mean-square error of regression, coefficient of correlation/determination of regression, etc. As indicated by the definition, SST-related attributes are often dominated by the equipment used, and the data may vary between instruments. Consequently, monitoring and evaluation could become difficult depending on the number of instruments used. In such cases, monitoring can be performed for individual or selected instruments (provided sufficient data are available, e.g., more than 20 results per year). Monitoring of such SST attributes will only provide useful information if they contribute substantially to the overall performance. However, monitoring of equipment performance is a benefit on its own and may be established as an ongoing instrument performance qualification program.3 Trending of such instrument/procedure attributes may also facilitate detection of adverse trends or identification of an appropriate time point to take action to avoid procedure performance issues (e.g., change of chromatographic columns, instrument wearable parts, etc.).

Table 6. Examples of Performance Attributes/Parameters and the Precision Levels They Address.

attributes/parameters | type of sample/data | precision level (variances)
range, RSD from replicate measurements (same preparation)^a,c | sample preparation; reference standard preparation | measurement (injection precision/variance)
range, RSD from replicates (independent preparations)^a,b | sample preparation; reference standard preparation | repeatability (sample/RS preparation variance)
relative root-mean-square error of regression (RMSE)^a,c | calibration solutions | variability of calibration
single results: range, RSD from replicates^a,b,d | control sample^f | intermediate precision/reproducibility (between-series variance)
result | control sample^f | precision (variance of the reportable value)
range, RSD from replicates within storage intervals^a,b | stability data (sample, reference standard) | injection precision or repeatability
all storage intervals^e | stability data (sample) | intermediate precision/reproducibility

^a Calculation from duplicate range or from variance (n > 2).

^b Variance component analysis.

^c May be instrument-dependent.

^d Intermediate precision/reproducibility constitutes the precision of a single reportable value.

^e Calculation from the RMSE of linear regression; see ref (34).

^f Same replication strategy as used for sample preparation.
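Where footnote b of Table 6 calls for variance component analysis, a minimal balanced one-way sketch (within-series vs between-series variance) might look as follows. This is a simplified illustration using classical ANOVA estimators; the replicate data are invented for demonstration.

```python
import statistics

def variance_components(runs):
    """Balanced one-way variance component estimates from replicate series:
    returns (within-run variance, between-run variance)."""
    n = len(runs[0])                                   # replicates per run (balanced design)
    run_means = [statistics.mean(r) for r in runs]
    ms_within = statistics.mean([statistics.variance(r) for r in runs])
    ms_between = n * statistics.variance(run_means)
    # Negative between-run estimates are truncated at zero, as is conventional
    var_between = max((ms_between - ms_within) / n, 0.0)
    return ms_within, var_between

# Illustrative data: three runs (series) of three replicate results each
runs = [[99.8, 100.0, 100.2], [100.3, 100.5, 100.4], [99.9, 100.1, 100.0]]
s2_within, s2_between = variance_components(runs)
# Intermediate precision variance combines both components
s2_ip = s2_within + s2_between
```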

Identification of Procedure Performance Indicators Based on Risk Assessment

The extent of routine monitoring required can be defined using a risk-based approach. For low-risk analytical procedures, monitoring of conformity indicators is usually sufficient, whereas for high-risk procedures additional monitoring of validity indicators and of informative analytical procedure attributes and/or performance parameters can provide early warning of potential changes in procedure performance. Table 2 describes applicable performance indicators for procedure risk categories. Once any high-risk procedures have been identified, a useful way to identify the most value-adding procedure attributes and/or performance parameters to monitor is to perform a procedure performance cause and effect review (PPC&ER). This detailed review helps an analytical team develop an understanding of the likely root causes of any high risks identified from the risk assessment. With an understanding of the likely root causes, potential improvement ideas and tailored procedure performance indicators can be documented for further consideration (and potentially taken forward for inclusion in the monitoring plan).

Table 2. Performance Indicators for the Different Procedure Risk Categories (Adapted from Reference (23)).

performance indicator | applicable procedure risk categories
conformity | low-, middle-, and high-risk procedures
validity | middle-^a and high-risk procedures
analytical procedure performance attributes/parameters | middle-^a and high-risk procedures

^a If considered a priority procedure based on the wider product/use context.

The format of PPC&ER meeting(s) and any reports generated will depend on the nature of the risks flagged in the risk assessment. Both USP <1220> and ICH Q14 describe the use of Ishikawa or fishbone diagrams for risk identification. These can be useful tools for collective brainstorming of potential sources of variability associated with a procedure. Gemba29 is very important for this activity, and the analysts who are most familiar with the procedures should be included in the discussion. It can also be useful to include experienced analytical scientists who are not familiar with the particular procedure under assessment and/or technique subject matter experts. Augmented reality technology can be a great way to orient all meeting participants with the procedure under assessment and to demonstrate any particularly problematic steps. It is not practical to monitor all potential sources of variability (e.g., some data may not be captured digitally), so it is important to prioritize the most value-adding procedure performance indicators for the monitoring plan. Heat maps or failure mode effects and criticality analysis (FMECA)30 are great tools for prioritization of the sources of variability identified in the brainstorm. Once sufficient analytical procedure platform knowledge has been acquired, creation of generic Ishikawa diagrams, process maps, or FMECA can prove a useful starting point and serve to accelerate this step. A PPC&ER can be performed for new high-risk procedures in stage 1 (as part of procedure development, in support of procedure ruggedness and robustness studies and/or design of the analytical procedure control strategy). If this is the case, then it is prudent to develop the first version of the monitoring plan at the time of analytical procedure control strategy development, as the analytical procedure control strategy and monitoring plan should be complementary.
For established (e.g., commercial) analytical procedures where stage 1 was not followed or documented, a new PPC&ER can be a good first step toward the design of a monitoring plan. Two examples are provided below.

Example 1. The PPC&ER highlighted that a chromatographic procedure for the determination of impurities has challenges with sensitivity. In this case, a good procedure performance indicator to prioritize for monitoring in a control chart would be the signal-to-noise ratio of a sensitivity standard analyzed on each test occasion. If the control chart can be filtered by instrument ID, then this can provide even further insights.

Example 2. If a bioassay or similarly complex assay procedure is known to have complicated and variable sample preparation steps, then a good procedure performance indicator to prioritize for monitoring in a control chart could be a quality control sample or reference sample analyzed on each test occasion. If the control chart can be filtered by analyst ID then this can provide great insight into potential further training requirements.
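For either example, a minimal individuals-type control chart with 3σ limits derived from a baseline period might be sketched as below. This is a simplified illustration (moving-range-based limits and additional run rules are common alternatives), and the QC results are invented for demonstration.

```python
import statistics

def control_limits(baseline):
    """Mean +/- 3 sigma limits estimated from a baseline period of results."""
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return mean - 3 * sd, mean + 3 * sd

def out_of_control(values, lcl, ucl):
    """Indices of results falling outside the control limits."""
    return [i for i, v in enumerate(values) if not (lcl <= v <= ucl)]

# Baseline: QC sample results (% recovery) from ten prior test occasions
baseline = [101.2, 99.5, 100.1, 100.6, 99.8, 100.3, 99.9, 100.4, 100.0, 100.2]
lcl, ucl = control_limits(baseline)

# New QC results, one per test occasion; the third signals a potential issue
new_results = [100.1, 99.7, 102.9]
print(out_of_control(new_results, lcl, ucl))  # -> [2]
```

Filtering the input series by instrument ID or analyst ID, as suggested in the examples, amounts to building one such chart per subgroup.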

Sources of Performance Data and Information

The general flowchart of an analytical procedure (Figure 2) reveals starting points for retrieving data and information about its performance. The selection of the type of data and information depends on the given analytical technique and the design of the analytical procedure. For example, in the case of titrations, replicate analysis of the same test solution is not possible, and relative standard deviations (RSDs) are only appropriate when a sufficient number of repetitions is performed (e.g., at least six). Test results combine the unit operation steps after the validity assessment for each sample preparation/measurement. Analyzing a sample from a manufacturing process will always include both analytical and manufacturing variability.

Figure 2. General flowchart of an analytical procedure, potential sources of data, and performance indicators to be considered in the routine performance monitoring program.

For the purpose of analytical performance monitoring, the variability of the manufacturing process should be excluded as much as possible. Examples of excluding manufacturing variability are the use of replicate results in batch analysis or the analysis of control batches (control samples). Only the results for the control sample (acquired using the defined replication strategy for the test sample) can be compared directly to the ATP. Control samples are materials with a well-known matrix composition close to that of the test sample (identical to, or adequately representative of, the test samples, e.g., a homogeneous batch of product tested alongside each routine sample). They are analyzed along with test samples in order to evaluate the accuracy of the analytical procedure, helping to ensure that analyses are properly performed and that the results obtained for test samples are reliable. They should be stable over time and be available in sufficient amounts. If the manufacturing variability is negligible with respect to the analytical variability, batch samples may also be used as “virtual” control samples (e.g., assay of many small molecule APIs, and for some well-established and controlled drug product manufacturing processes). All other performance attributes (including a single determination of a control sample) address only some of the unit operation steps and thus allow only an indirect comparison to the ATP by alignment of the performance levels.

Data Sampling and Establishment of Assessment Frequency

As part of the pharmaceutical quality system (PQS), OOS results with a laboratory root cause (conformity indicators) are usually tracked. If the monitoring program is established as a regular part of the PQS, a monitoring plan is required for each analytical procedure to be considered. The plan should include (at least) the following:

• all defined performance indicators, according to risk

• the source of the data and monitoring tools (e.g., control and monitoring charts, etc.)

• the data sampling rate (e.g., for each release test, for each stability test, for each series/run, for each instrument separately, or at a defined interval only, e.g., once a week)

• the monitoring frequency (review and evaluation of indicators or control charts within a given period of time, e.g., each month or quarter, depending on the volume of testing)

• monitoring rules or control limits/out-of-control rules

• calculation of long-term parameters and periodic assessment frequency

Different strategies can be used for data collection. These usually include (a) information and parameters recorded in other electronic systems or entered manually in paper format, which requires verification of the transfer from raw data or instrument data systems, and (b) use of a laboratory information management system (LIMS), which facilitates data sampling and trending and mitigates data integrity issues. In the latter case, the information on validity indicators and the data for the analytical procedure performance attributes and parameters are available within the LIMS, which can create summary reports for each category and generate graphical presentations or control charts, avoiding the transfer of data. For validity information, where graphical presentation or control charts are not essential, paper-based recording may be a simple and efficient solution, depending on the frequency of violations. Where additional electronic systems are used, data integrity should be considered; a reasonable compromise should be reached with respect to validation and review requirements, as the aim is not a GMP batch-release decision.

The frequency of the performance assessment of the analytical procedure should be defined based on the amount of data available, so that a sufficient number of results support a reliable assessment. At a minimum, any monitoring assessment should be performed annually to align with the GMP requirements for an annual product review, which should include or refer to the analytical procedure assessment. Where testing frequency is higher, the assessment can be made quarterly or biannually, with ideally 20–30 batches available.26,30,31 Refer to the section Performance Assessment and Continuous Improvement of Analytical Procedures for additional discussion of performance assessment/evaluation and action plans.

Continuous Verification of the ATP Requirements

The ATP requirements relate to the reportable value and therefore usually cannot be verified directly, which makes ATP criteria verification during routine analysis challenging. For quantitative procedures, measurement uncertainty (precision and bias) can be considered the primary performance characteristic.5,32 A maximum allowable measurement uncertainty can be defined in the ATP as the performance requirement for a given measurement, and continuous verification of the ATP requirements helps to ensure fitness for use along the entire procedure life cycle. Usually, during procedure design, control strategies are defined to eliminate or minimize systematic errors (bias).32 Precision (random variability) should be assessed during procedure design and qualification,32 allowing the selection of conditions with reduced variability and determination of the suitability of the analytical procedure. Each step of an analytical procedure contributes its variability to the overall precision. Precision levels can be used to describe groups of procedure sources of variation: system precision, repeatability, and intermediate precision/reproducibility32 (Figure 1).

Precision

In general, analytical performance attributes extracted from routine analysis may be used to express the precision of the reportable value, which may be directly compared with the ATP precision requirement (which includes all precision levels). However, this is only valid if a control sample is included in the monitoring program and analyzed with the same replication strategy as the test sample; the precision of the reportable value can then be obtained directly as a long-term average parameter and compared to the ATP precision requirement. Usually, analytical performance attributes address mainly the variability of individual steps of the analytical procedure, i.e., the precision levels rather than the overall procedure precision (e.g., RSD or ranges of injections/measurements of sample or reference standard, sample preparation, etc.), which makes it challenging to compare individual attributes directly with the ATP precision criteria. If only analytical performance attributes expressing precision levels below the reportable value (overall procedure variability) are available, a simple way to assess the ATP requirement for the reportable value is to combine the precision levels (e.g., system precision, repeatability, or intermediate precision/reproducibility). It is worth mentioning that, in the case of a reportable value from a single sample preparation, the intermediate precision/reproducibility corresponds to the precision of the reportable value.28,32 An acceptable repeatability or intermediate precision/reproducibility may even be larger than the ATP requirement because, when the reportable value is an average of replicates, the corresponding variability is reduced. If there are no shifts and trends in the precision parameters or in the average parameters, maintenance of the validated status is verified.
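The combination of precision levels described above can be sketched numerically. In this illustrative Python snippet (all variance components and replication factors are hypothetical, not taken from the article), the variance of the reportable value is built from between-run, preparation, and injection components, with only the averaged levels reduced by their replication factors:

```python
import math

def reportable_value_sd(s_between_run, s_prep, s_inj, k_preps=1, n_inj=1):
    """Standard deviation of a reportable value defined as the mean of
    k_preps sample preparations with n_inj injections each, within one run.
    Variance components add; replicated levels are divided by their
    replication factor."""
    var = (s_between_run ** 2
           + s_prep ** 2 / k_preps
           + s_inj ** 2 / (k_preps * n_inj))
    return math.sqrt(var)

# Hypothetical precision levels (e.g., in % of nominal):
single = reportable_value_sd(0.4, 0.6, 0.3)           # single preparation
avg3 = reportable_value_sd(0.4, 0.6, 0.3, k_preps=3)  # mean of 3 preparations
# Averaging preparations reduces the reportable-value variability,
# which is why an individual precision level may exceed the ATP requirement.
```

This mirrors the point made above: with a single preparation, the intermediate precision corresponds directly to the reportable-value precision, whereas averaging shrinks the contribution of the replicated steps.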
If the manufacturing variability is negligible with respect to the analytical variability (e.g., less than 1/4), the precision of the reportable value can be directly calculated from batch results (“virtual” control sample). This may be the case for the assay of many small molecule API, and for some well-established and controlled drug product manufacturing processes. Reliable precision can also be obtained from other sources,28 for example, by extraction from routine analysis such as duplicate injections or sample preparations,33 system suitability tests, or stability studies.34

Accuracy (Bias)

Accuracy can be addressed directly only if the control sample has been characterized independently. However, as the control sample results will reveal time-dependent shifts and trends, the relative accuracy can still be controlled: in the absence of shifts and trends, it may be assumed that the accuracy demonstrated in validation is maintained.

Legacy Procedures without ATP

In principle, the ATP should be verified continually along the entire procedure lifecycle. This verification is challenging for legacy procedures without a defined ATP containing previously established performance requirements. For such procedures, the ATP requirement may be established retrospectively from existing data. Approaches for setting limits and evaluating procedure performance include estimating tolerance intervals (TI) from the validation data35,36 and/or from current control data (for procedures with a long period of use and associated specifications) and comparing them to a defined range calculated using the measurement uncertainty (MU) from historical control data.23 The ATP criteria are considered met if the width of the TI is less than the range defined by the MU.

Statistical Analytical Procedure Performance Monitoring and Control (SAPPMC)

Trend plots of critical procedure performance indicators (e.g., RSD from system precision checks, results from routine testing, control or stability samples, or out-of-trend investigations) can be established. A state of statistical control is achieved when only random causes of variation are present,29 and departures from random variability should trigger a reaction plan with steps taken to bring the procedure back into a state of control. SAPPMC (a term coined in this article by analogy to statistical process control (SPC) for products and processes) is a tool for achieving this objective and can help to quickly detect shifts so that corrective and preventative actions can be taken. Control charting is the simplest and most common technique used to implement SAPPMC.

Control Charting

Control charts are powerful tools for monitoring an analytical procedure and generally consist of an upper control limit (UCL), a center line, and a lower control limit (LCL).37−39 These are defined based on data collected from the process while it is in a state of control and in alignment with the ATP. They are particularly powerful for signaling when something has changed or gone wrong, or when the procedure is behaving differently. To gain the most benefit, it is critical to plot data in real time and make decisions immediately regarding the next steps. When the process is in a state of statistical control, almost all of the data points will fall within the lower and upper control limits, varying randomly around the center line. When a point falls outside of the control limits, it can be assumed that the process is out of statistical control, and this may trigger an action. Actions may be minor or major and will depend on the severity and impact of the violation. Additional rules may also be added to trigger an action before a value falls outside the control limits. These rules allow early identification of systematic and nonrandom trends suggesting that the process may be shifting, even if no values outside the limits have yet been observed. Nonrandom patterns will almost always have a root cause that may be identified with careful investigation, and corrective actions can then be implemented to bring the process back into a state of statistical control.

To understand the implementation of control charts, one must have a fundamental understanding of hypothesis testing and sampling distributions. Every point plotted on a control chart can be thought of as a hypothesis test.40 If the point falls within the control limits, we fail to reject the null hypothesis that the process is in control. If the point falls outside the control limits, we reject the null hypothesis and conclude that the process is out of control, triggering our action plan. The control limits are defined based on the alpha level of the hypothesis test, thus controlling the risk of making an incorrect decision, such as triggering an investigation when the process is actually in a state of control. The risk of stopping the analytical process for an unnecessary investigation (type I error) must be weighed against the risk of not detecting a shift in the analytical process (type II error). This risk tolerance dictates how the control limits are defined. Several types of control charts are commonly used; the most common are listed in Table 4. The choice of control chart depends on the type of data and the design of the analytical procedure (replication). The reason for selecting a particular type of control chart and the data used to establish it should be documented.
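As a numerical illustration of the type I error rate implied by 3-sigma limits (a standard result, not specific to this article), the two-sided tail probability of a normal distribution beyond ±3σ and the corresponding average run length between false alarms can be computed directly:

```python
import math

# P(|Z| > 3) for a standard normal variable, using the error function:
alpha = 2 * (1 - 0.5 * (1 + math.erf(3 / math.sqrt(2))))

# Average run length with no shift (ARL0): expected number of in-control
# points plotted before one false alarm occurs.
arl0 = 1 / alpha
```

With 3-sigma limits, alpha is about 0.27%, i.e., roughly one false alarm every 370 points, which is the trade-off against detection speed described above.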

Table 4. Control Chart Comparisons.

chart type | monitored statistic | subgroup size (n) | advantages | disadvantages
X-bar | average | ≥2 | can easily detect large shifts in the mean; popular, with extensive knowledge on its use, including rules that allow early detection of smaller shifts and/or trends | not very sensitive to small shifts in the mean without additional rules
R | variability | ≥2 | can easily detect large shifts; easy to compute | ignores much of the information about variability when the sample size is large; not recommended for subgroups > 10
S | variability | ≥2 | can easily detect large shifts; uses all information about the variability; optimal estimate of variability for larger subgroup sizes | more difficult to compute and interpret than the range
individuals (X) | NA | 1 | can detect large shifts | slow to detect drifts in the mean; not ideal for detecting small shifts; requires normality assumption
cusum | average | ≥1 | can detect small shifts sooner | complicated; not as good as the X-bar chart for detecting large shifts
EWMA | average | ≥1 | can detect small shifts sooner | complicated; not as good as the X-bar chart for detecting large shifts; requires setting subjective parameters

Shewhart Control Charts for Continuous Variables

If the quality characteristic is measured on a numeric scale, the most commonly used control charts are the X-bar (X̄, mean) and range (R) or the X-bar and standard deviation (S) charts. Here, both the mean value and the variability of the characteristic are monitored. This approach for monitoring numeric data is appropriate for normally distributed data. It is also valid for non-normally distributed data due to the central limit theorem, which states that the distribution of sample means approximates a normal distribution as the sample size gets larger, regardless of the population's distribution.41 If the data are not normally distributed and small sample sizes are used, then other approaches to control charting should be explored. It should also be considered that analytical procedures usually comprise many steps, resulting in an "analytical" averaging.

The X-bar (X̄) chart limits are defined as follows (eqs 4−6):

UCL = μ + Z_(α/2)·σ/√n    (4)
Center line = μ    (5)
LCL = μ − Z_(α/2)·σ/√n    (6)

where μ is the mean, σ is the standard deviation, n is the sample size, and Z_(α/2) is the upper α/2 percentage point of the standard normal distribution. If 3-sigma limits are used, Z_(α/2) is simply replaced by 3. If a value falls outside of the UCL or LCL, we would suspect that the mean is no longer equal to μ, since, with 3-sigma limits, values are expected to fall within the control limits 99.73% of the time. Since μ and σ are unknown, they are estimated from preliminary samples taken while the process is in a state of statistical control, for example, at the completion of the analytical procedure validation. Assuming there are m samples available, each with n observations of the quality characteristic, let X̄_1, X̄_2, ..., X̄_m be the averages of the samples. Thus, the estimate of μ is the mean of the means (eq 7):

X̿ = (X̄_1 + X̄_2 + ... + X̄_m)/m    (7)

We must also estimate σ to construct the upper and lower control limits, using either the standard deviations or the ranges of the m samples with n observations. When using ranges, the range of each sample i and the mean range are calculated as eqs 8 and 9, respectively:

R_i = x_max − x_min    (8)
R̄ = (R_1 + R_2 + ... + R_m)/m    (9)

Using the above estimates, the control limits for the X-bar, R, S charts, and individual values charts are given in Table 3.

Table 3. Control Limits for Mean, Range, Standard Deviation, and Individual Control Charts, According to ISO 7870-2:2023 (40) (Adapted with permission from reference (40), ISO 7870-2:2023. Copyright 2023 ISO).

Control charts for mean and range or standard deviation:

statistic | center line (estimated) | UCL and LCL (estimated) | center line (prespecified^a) | UCL and LCL (prespecified^a)
X̄ | X̿ | X̿ ± A_2·R̄ and X̿ ± A_3·S̄ ^b | μ0 | μ0 ± A·σ0
R | R̄ | D_4·R̄, D_3·R̄ | d_2·σ0 | D_2·σ0, D_1·σ0
S | S̄ | B_4·S̄, B_3·S̄ | c_4·σ0 | B_6·σ0, B_5·σ0

Control charts for individuals:

statistic | center line (estimated) | UCL and LCL (estimated) | center line (prespecified) | UCL and LCL (prespecified)
individual, X | X̄ | X̄ ± 2.660·R̄_m | μ0 | μ0 ± 3·σ0
moving range, R_m^c | R̄_m | 3.267·R̄_m, 0 | 1.128·σ0 | 3.686·σ0, 0

a: μ0 and σ0 are given values or parameters.

b: Only applicable when the interseries variability is negligible, i.e., repeatability equals intermediate precision.

c: R_m, the moving range, is obtained as the difference between two consecutive individual values.

The different constants (e.g., A2, D3, D4, etc.) are available for various sample sizes and are provided in the Supporting Information.

When n is small, use of the range performs similarly to use of the standard deviation; however, as n increases, the range becomes less efficient, and the standard deviation should be used instead for monitoring the variability of a quality characteristic. Because the range is simple to calculate and samples are often small in practice, the X-bar and R charts are commonly used; however, when computer software is available, use of the standard deviation is recommended.
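A minimal sketch of the estimated X-bar and R limits, using the standard tabulated constants for small subgroup sizes (the control-sample results below are hypothetical):

```python
from statistics import mean

# Standard SPC constants for subgroup sizes n = 2..5:
A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577}
D3 = {2: 0.0, 3: 0.0, 4: 0.0, 5: 0.0}
D4 = {2: 3.267, 3: 2.574, 4: 2.282, 5: 2.114}

def xbar_r_limits(subgroups):
    """Estimated X-bar and R chart limits from m subgroups of size n,
    returned as (LCL, center line, UCL) tuples."""
    n = len(subgroups[0])
    xbars = [mean(g) for g in subgroups]
    ranges = [max(g) - min(g) for g in subgroups]
    xbarbar, rbar = mean(xbars), mean(ranges)
    return {
        "xbar": (xbarbar - A2[n] * rbar, xbarbar, xbarbar + A2[n] * rbar),
        "r": (D3[n] * rbar, rbar, D4[n] * rbar),
    }

# Hypothetical control-sample results, three replicates per run:
runs = [[99.8, 100.1, 100.3], [100.0, 99.7, 100.2], [100.4, 99.9, 100.1]]
limits = xbar_r_limits(runs)
```

In routine use, such limits would be frozen from in-control historical data rather than recalculated with every new point, to avoid overadjustment.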

According to the ISO 7870-2 standard,40 control charts with estimated limits are used to estimate whether the mean range or standard deviation values differ from their total mean “by an amount greater than that which can be attributed to chance causes only”, while control charts with prespecified limits add the requirement regarding the center and dispersion of measurements. The prespecified values may be based on experience obtained by using control charts with no prior information or information taken from previous data from procedure development or validation.

One should be aware that the estimated limits of X-bar control charts are based on the range or standard deviation obtained within each series of measurements, so the interseries contribution to the variation of the series means is not considered. When interseries effects are not negligible, the estimated limits may not reflect the real variation of the mean values, and prespecified limits or 3-standard-deviation (of the means) limits may be preferred. In this case, μ0 and σ0 can be taken as the mean and standard deviation of the mean values from previous data, from a control chart with estimated limits, or from a series of measurements obtained under intermediate precision conditions.

Cumulative-sum (cusum) Control Charts for Monitoring the Mean

The cusum chart is an alternative to a Shewhart control chart; it is particularly useful for detecting small shifts and may also be used when the sample size is n = 1. Since the cumulative sums include information from the current sample as well as all prior samples, these charts are more sensitive than Shewhart charts for detecting small shifts. However, cusum control charts are not as efficient as Shewhart control charts at detecting large shifts. Cusum charts are created by plotting

C_i = Σ_(j=1..i) (X̄_j − μ0)

for sample i, where X̄_j is the average of the jth sample and μ0 is the target for the mean. When the analytical procedure is in a state of statistical control, the cumulative sums vary randomly around 0. If the quality characteristic mean shifts to a value > μ0, the plot will show an upward drift, and conversely a downward drift if the mean shifts to a value < μ0.
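A minimal cusum sketch of the plotted statistic above (the target and sample means are hypothetical):

```python
def cusum(sample_means, mu0):
    """Cumulative sums C_i = sum over j <= i of (xbar_j - mu0).
    A sustained drift away from zero signals a shift of the mean
    away from the target mu0."""
    total, sums = 0.0, []
    for xbar in sample_means:
        total += xbar - mu0
        sums.append(total)
    return sums

# Hypothetical in-control values followed by a small upward shift:
means = [10.0, 9.9, 10.1, 10.0, 10.2, 10.3, 10.2]
c = cusum(means, mu0=10.0)
# The later cumulative sums drift upward, flagging the small shift
# earlier than a point-by-point Shewhart comparison would.
```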

Exponentially Weighted Moving-Average Control Charts

The exponentially weighted moving-average (EWMA) control chart is similar to the cusum chart in that it is sensitive to small shifts and may be used when the sample size is n = 1. The EWMA control chart is generally easier to set up than the cusum chart.

The EWMA is defined as z_i = λx_i + (1 − λ)z_(i−1), where λ is a constant between 0 and 1 and the starting value is the process target (i.e., z_0 = μ0). Often the average of preliminary data is used as the starting value of the EWMA. If the observations x_i are independent random variables with variance σ², the variance of the EWMA is

σ²_(z_i) = σ²·(λ/(2 − λ))·[1 − (1 − λ)^(2i)]

Thus, the EWMA control chart limits are as follows (eqs 10−12):

UCL = μ0 + L·σ·√{(λ/(2 − λ))·[1 − (1 − λ)^(2i)]}    (10)
Center line = μ0    (11)
LCL = μ0 − L·σ·√{(λ/(2 − λ))·[1 − (1 − λ)^(2i)]}    (12)

where the factor L determines the width of the control limits.

The choice of L and λ can be made to closely approximate the performance of a cusum chart for detecting small shifts. Montgomery39 found that values of λ in the interval 0.05 to 0.25 work well in practice, with λ = 0.05, λ = 0.10, and λ = 0.20 being popular choices. Smaller values of λ detect smaller shifts. In general, L = 3 represents the usual 3-sigma limits and works well. To match the approach of a Shewhart chart using the Western Electric rules, Stuart Hunter42 recommends a value of λ = 0.4.

In general, the EWMA is sensitive to small shifts but, similarly to the cusum, is not as sensitive as the Shewhart control chart for larger shifts. One solution to address these limitations is to use both a Shewhart chart and a cusum or EWMA chart to ensure that both small and large shifts are detected quickly.
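The EWMA recursion and its time-varying limits (eqs 10−12) can be sketched as follows; the data, target, and sigma below are hypothetical, chosen so that a small sustained shift is flagged:

```python
import math

def ewma_chart(xs, mu0, sigma, lam=0.2, L=3.0):
    """EWMA statistic z_i = lam*x_i + (1 - lam)*z_{i-1} with limits
    mu0 +/- L*sigma*sqrt(lam/(2 - lam)*(1 - (1 - lam)**(2*i))).
    Returns a list of (z, LCL, UCL) per point."""
    z = mu0  # starting value z_0 = process target
    points = []
    for i, x in enumerate(xs, start=1):
        z = lam * x + (1 - lam) * z
        half_width = L * sigma * math.sqrt(
            lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        points.append((z, mu0 - half_width, mu0 + half_width))
    return points

# Hypothetical data with a small sustained upward shift:
data = [10.0, 10.1, 9.9, 10.5, 10.6, 10.4, 10.5, 10.6]
pts = ewma_chart(data, mu0=10.0, sigma=0.2)
out_of_control = [i for i, (z, lcl, ucl) in enumerate(pts)
                  if not lcl <= z <= ucl]
```

Note that an individuals Shewhart chart with 3-sigma limits (10.0 ± 0.6) would flag none of these points, while the EWMA accumulates the small shift until its limits are crossed.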

When confronted with non-normal data, such as counts from enumeration plates, the charts discussed above that assume normality (particularly the individuals charts) will give false signals and lead to unnecessary investigations. In these cases, the above control charts may still be used; however, the data must first be normalized, for example, using a Box−Cox transformation for non-negative skewed data.
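A simple sketch of such a normalization (the fixed λ is chosen for illustration; in practice λ is usually estimated from the data):

```python
import math

def box_cox(x, lam):
    """Box-Cox transform for positive data:
    (x**lam - 1)/lam for lam != 0, and ln(x) for lam == 0."""
    if lam == 0:
        return [math.log(v) for v in x]
    return [(v ** lam - 1) / lam for v in x]

# Hypothetical right-skewed plate counts; the log transform (lam = 0)
# pulls in the long upper tail before control limits are applied:
counts = [2.0, 3.0, 2.5, 4.0, 18.0]
transformed = box_cox(counts, lam=0)
```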

In addition to the control charts described above, there are charts for other data types such as attributes data (conforming vs nonconforming) and count data that are less common in the context of analytical method process control. These are outside the scope of this review; however for a comprehensive review of these charts as well as those described above, refer to ref (39).

Table 4 provides a comparison between the most common control charts.

Establishment of Trending Rules

The main goal of control charts is to check that the process under surveillance is stable; that is, the plotted statistics (the standard deviation, S, or the range, R, for dispersion; the individual values, X, or the mean, X-bar, for location) should stay within a defined zone. Several rules have been proposed to interpret control chart information; they are based on limits defined by the process standard deviation or a target standard deviation. Three zones are defined, at plus or minus 1, 2, or 3 times the standard deviation, or based on the calculated control limits. There are two well-known sets of rules: the Westgard rules,43 commonly used with individuals control charts (X charts),44 and the Western Electric rules, also known as AT&T rules, commonly used with Shewhart control charts (X-bar, S, or R control charts).40 The latter rules are described in detail in the ISO 7870-2 standard.40 In the case of SAPPMC, it may be assumed that several measurements are available at each step; therefore, rules based on mean and dispersion values, such as the Western Electric - ISO 7870-240 rules, may be preferred. Table 5 displays examples of these rules for control limits set at ±3s. Note that an essential prerequisite for these rules is that the results are independent; for example, analysis of several batches in the same (calibration) run would violate this assumption. Regarding the range and standard deviation control charts, it should be noted that the distributions of the range and the standard deviation are asymmetric around their means. Nevertheless, for ease of construction, symmetric limits can be used, with a lower control limit of 0 established when the calculated lower limit is negative.

Table 5. Examples of Western Electric - ISO 7870-2 Rules for Control Chart Results Interpretation.


In SAPPMC, for better interpretation of the control chart, it is valuable to combine several rules and use multirule tools. An obvious example is ISO 7870-2 rule 1 (one measurement outside the +3s or −3s zone):

  • If ISO 7870-2 rules 1 and 3 are met at the same time, this could mean that the out-of-control measurement is a consequence of the observed trend in the measurements and an investigation for an assignable cause to this trend should be conducted.

  • If ISO 7870-2 rule 1 is met at the same time as rule 4, 5, or 8, this could mean that the out-of-control measurement is a consequence of the observed change in the analytical procedure measurements dispersion, and an investigation for an assignable cause for the increase in measurements dispersion should be conducted.

  • If no other rule is met at the same time, this could mean that an error has been made in the analytical procedure application, and an investigation for an assignable cause should be conducted for this result.

Therefore, an interpretation pattern may be proposed for SAPPMC control charts. If a dispersion control chart (either an S or R chart) is available together with a mean control chart, the dispersion chart should be examined first to identify unusual trends or patterns, as changes in measurement dispersion may well explain unexpected behavior of the mean values.
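Two of the signals discussed above can be automated in a few lines. This sketch is illustrative (the rule names and data are hypothetical, and the exact point counts used in trend rules vary between rule sets): it flags a point beyond the 3-sigma limits and a run of seven steadily rising or falling points:

```python
def check_rules(points, center, sigma):
    """Flag two common Western Electric / ISO 7870-2 style signals:
    - a single point beyond the center +/- 3*sigma limits;
    - seven consecutive points steadily rising or falling.
    Returns a list of (index, rule_name) alerts."""
    alerts = []
    for i, x in enumerate(points):
        if abs(x - center) > 3 * sigma:
            alerts.append((i, "beyond_3_sigma"))
        if i >= 6:
            window = points[i - 6: i + 1]
            diffs = [b - a for a, b in zip(window, window[1:])]
            if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
                alerts.append((i, "trend_of_7"))
    return alerts

# Hypothetical system-suitability RSD values drifting upward,
# ending with a gross outlier:
rsd = [0.5, 0.6, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 2.2]
alerts = check_rules(rsd, center=0.6, sigma=0.3)
```

In line with the interpretation pattern above, a trend alert preceding the out-of-limit point suggests that the outlier is a consequence of the drift rather than an isolated procedural error.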

An example template for control chart interpretation is given in Figure 3, in the case of Shewhart control charts applied to SAPPMC data (i.e., system suitability or quality control check sample data).

Figure 3. Control chart interpretation template, based on ISO 7870-2 (Western Electric) rules. "Yes" indicates that the considered rule is met.

Estimation of Long-Term Precision

Besides graphical presentation and monitoring of analytical performance attributes and parameters, these data can be used to calculate average parameters. In the usual case of replication parameters, the respective precision levels are addressed; see Table 6 for examples. Due to the large number of data that become available over time, these parameters are statistically very reliable and provide an excellent estimate of the true (population) parameters. However, results impacted by special cause variation (proven or made likely through an investigation) should not be included in the calculation. As long as no changes occur that impact the respective parameter, the calculation should continue on an ongoing basis, increasing its reliability. Often, the average parameters are also used to establish limits for monitoring or control charts. In these cases, "overadjustment" should be avoided, i.e., the limits should only be adjusted in the case of relevant changes.
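Pooling the within-series variability across many runs is one common way to obtain such a long-term average parameter. A minimal sketch (the standard deviations and replicate counts below are hypothetical):

```python
import math

def pooled_sd(sds, ns):
    """Pooled standard deviation across independent series:
    s_pooled = sqrt( sum((n_i - 1)*s_i**2) / sum(n_i - 1) ).
    Each series is weighted by its degrees of freedom."""
    num = sum((n - 1) * s ** 2 for s, n in zip(sds, ns))
    den = sum(n - 1 for n in ns)
    return math.sqrt(num / den)

# Hypothetical repeatability SDs from four runs of six injections each:
long_term = pooled_sd([0.35, 0.42, 0.30, 0.38], [6, 6, 6, 6])
```

As more runs accumulate, the degrees of freedom grow and the pooled estimate approaches the true (population) precision; runs with an assigned special cause would be excluded before pooling.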

An example of the estimation of long-term precision is provided in the Supporting Information.

Performance Assessment and Continuous Improvement of Analytical Procedures

A monitoring program will often include parameters or attributes that only have the potential to impact procedure performance (in addition to those parameters with a confirmed link to performance) and should clearly outline alert limits/rules and hard action limits (control limits). It is important to note that not all trend alerts require formal investigation, deviation, and corrective/preventative actions. Context is very important when deciding on the appropriate action to take in response to an alert. During procedure routine use, different types of variation may occur:45

• Special cause variation (e.g., new and/or unexpected variation): this can manifest as signal shifts, drift, or observations falling well above or below the performance requirement.

• Acceptable common cause variation: the procedure is considered fit for purpose when all observations are within the predefined upper and lower limits. Common cause variation does not necessarily trigger an action and is acceptable, provided no special cause variation exists.

• Unacceptable common cause variation: occurs when some observations fall outside the performance requirement, e.g., when a noise variable is found to impact routine performance that was not identified during the procedure development stage, reflecting a gap in risk management or insufficient investigation of the sources of variation affecting performance.

During the investigation, particular attention should be given to special cause variation (shifts, drift, and deviation) and unacceptable common cause variation (unacceptable noise). The violation of alert limits and rules may not necessarily require immediate action, and this should be justified in the monitoring plan. In the case of violation of "hard" limits/rules, an investigation must be initiated to identify the root cause. If a trend alert has no confirmed correlation with procedure performance, and there are no simultaneous alerts for critical procedure performance attributes or out-of-trend (OOT) batch data in the trending program, then the most pragmatic course of action may be to check for any worsening trends at the next scheduled routine monitoring check (e.g., if an OOT peak capacity is observed during routine monitoring of a chromatographic method but the resolution of the critical pair of peaks is on trend, there may be no immediate benefit in a formal investigation, but it would be advisable to monitor both peak capacity and critical pair resolution more closely at subsequent scheduled monitoring checks). In contrast, if there is a confirmed correlation with procedure performance or an observed impact on batch data, then a formal OOT investigation should be conducted, in line with GMP.

As a result of the overall performance assessment, suspect but noncritical behavior or shifts in the long-term parameters may be further investigated, and the monitoring plan and control strategy may be updated to promote continuous improvement of procedure performance. Small changes to control limits should be avoided ("overadjustment"). Procedure performance improvements may be relatively small, such as improvements to training plans or to the wording of method documentation to improve the clarity and consistency of procedure operation. They may relate to instrument or reference standard management or, sometimes, to more fundamental procedure changes that require regulatory postapproval change, e.g., changes to critical analytical procedure parameters or to the analytical technology. Opportunities for improvement can be flagged (Figure 4) during the periodic review (e.g., during a scheduled revisit of the analytical procedure risk assessment and PPC&ER) or in response to a procedure performance trend alert during routine performance monitoring (e.g., using SAPPMC).

Figure 4. Analytical procedure performance monitoring process and opportunities for improvement.

Below are some real examples of performance monitoring in action. In all cases, the benefits of proactive monitoring and improvement action have been shown to result in increased efficiency, reduced QC downtime, and ultimately more robust supply chains. Other examples were recently published.22

Example - HPLC Assay Procedure Performance Monitoring (Reference Standard Monitoring)

An out-of-trend (OOT) deviation event resulted in significant delays to the release of drug product batches and wasted QC time and resources. PPC&ER determined the root cause to be a problem with the new reference standard batch, which is out of trend compared with previous batches (Figure 5A). A calibration standard peak area response has now been added to the routine procedure performance monitoring plan for this assay to provide an early detection mechanism for any future challenges with this unusually complex reference standard. Control charts with preset control limits will allow early signals of drift to be detected, enabling reduced rates of initial OOT/OOS due to procedure error through proactive action.

Figure 5. Procedure performance monitoring: (A) HPLC assay and (B) UV assay.

Example - UV Assay Procedure Performance Monitoring

Multiple OOT results were generated when new analysts began performing the UV assay test. This resulted in delayed test results and wasted QC time on deviations and CAPAs that did not address the root cause. Proactive monitoring of the percent nominal response for a quality control sample now provides insight into occasion-to-occasion procedure variability. Trend rule violations (e.g., data points outside 3SD from the mean (red line)) signal that a pattern in the data may be unusual and should be examined more closely (Figure 5B). Filtering using metadata, e.g., color-coding by analyst, reagent batch, or instrument, can help identify potential root causes. Trending by analyst can identify a need for supplementary training (useful for methods, such as this one, with complex sample preparation steps).

Conclusion and Outlook

Ongoing procedure performance verification provides data and information about the procedure performance over time. The risk category of analytical procedures, usually established based on their complexity and intended purpose, can drive the selection of appropriate analytical procedure performance indicators and their assessment frequency. The extent of the routine monitoring required should be defined based on risk assessment, where the performance of a procedure performance cause and effect review (PPC&ER) provides a useful way to identify the most value adding procedure attributes and/or performance parameters. The application of traditional SAPPMC in the verification of procedure performance on a continuous basis can be very useful tools to help identify adverse trends and detect shifts that may affect analytical procedure performance so that corrective and preventative actions (action plan) can be implemented to ensure that the procedure continues to be fit for use. It is worth noting that not all trend alerts require formal investigation or deviation and corrective/preventative actions, the context is very important when deciding on the appropriate action to take in response to an alert. Among the SAPPMC tools available, control charts are important and powerful for ongoing monitoring of critical procedure performance attributes/parameters, which can also help highlight opportunities for proactive improvement of procedure performance over time. The ATP can be used as an overarching element which ties together all stages and activities that take place within the APLC. The ATP requirements relate to the reportable value and can therefore usually not be verified directly, being a challenge for ATP criteria verification during routine analysis. 
This paper shows that ATP performance criteria can nevertheless be verified directly (e.g., by trending the precision and accuracy of results for a control sample obtained using the same replication strategy as for the sample) or indirectly by ensuring that key analytical procedure attributes linked to the ATP remain within appropriate limits. Different document types and programs in the PQMS may also be leveraged to guide organizations on how to design, execute, and document analytical procedure lifecycle management, through policies, SOPs, job aids, and training programs. In summary, the knowledge acquired during stage 3 can be used to react to changes in procedure performance, to plan and implement changes, to improve the procedure, to communicate with regulatory authorities, and to help identify the need for the ACS review; i.e., procedure performance verification is part of the overall procedure risk management process.
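Direct verification of ATP-style accuracy and precision criteria from control-sample results, as described above, can be sketched as follows. The function, data, and acceptance limits are hypothetical illustrations; in practice the limits would be derived from the ATP or, where no ATP exists, from validation or system suitability criteria:

```python
from statistics import mean, stdev

def control_sample_trend(results, true_value, max_bias_pct=2.0, max_rsd_pct=2.0):
    """Check accuracy (relative bias, %) and precision (%RSD) of
    control-sample results against illustrative acceptance limits.
    Limits here are placeholders, not values from the original study."""
    avg = mean(results)
    bias_pct = 100 * (avg - true_value) / true_value   # accuracy indicator
    rsd_pct = 100 * stdev(results) / avg               # precision indicator
    return {
        "bias_pct": round(bias_pct, 2),
        "rsd_pct": round(rsd_pct, 2),
        "within_limits": abs(bias_pct) <= max_bias_pct and rsd_pct <= max_rsd_pct,
    }

# Hypothetical control-sample results (% label claim) vs a true value of 100%
summary = control_sample_trend([99.2, 100.4, 99.8, 100.6, 99.5], 100.0)
print(summary)
```

Plotting these per-occasion bias and %RSD values on a control chart over time would then support the trending of ATP performance criteria during routine use.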

Acknowledgments

This publication was supported by the US Pharmacopeia Measurement and Data Quality Expert Committee. The content is solely the responsibility of the authors and does not necessarily represent the official views of the United States Pharmacopeia.

Supporting Information Available

The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acs.analchem.3c03708.

  • Factors for constructing Shewhart control charts (X, X-bar, R, and S charts) for continuous variables (PDF)

  • Example of estimation of long-term precision for an HPLC-UV compendial assay procedure without initial ATP (PDF)

The authors declare no competing financial interest.

Supplementary Material

ac3c03708_si_001.pdf (228.4KB, pdf)
ac3c03708_si_002.pdf (255.5KB, pdf)



Articles from Analytical Chemistry are provided here courtesy of American Chemical Society
