Bioanalysis. 2024 Mar 1;16(9):365–367. doi: 10.4155/bio-2023-0248

Bioanalytical Quality Assurance: a Tesla with a trafficator?

Anthony B Jones 1,*
PMCID: PMC11216612  PMID: 38426341

This paper describes some opportunities to evolve further toward modern, risk-based bioanalytical quality assurance. My intent is not to cast aspersions on bioanalytical quality or bioanalysis; we can be proud of the advances in quality and technology over the past 50 years. However, the persistence of mental models formed during the 20th century imposes constraints on today's quality and compliance in the bioanalytical laboratory. One of these is the expectation that quality can be ‘audited-in’ by a Quality Assurance Unit (QAU). Among the root causes of this are the US FDA's Good Laboratory Practice (GLP) regulations, first established in the 1970s to address an urgent need for standards to assure the reliability of nonclinical data [1]. The challenges faced in drug development have since changed – as have technology, knowledge and processes – causing misalignment between GLP's view of quality assurance and today's reality. To illustrate this effect with a historical metaphor, consider the advent of the Ford Model T, a pivotal moment in the shift from horse to automobile. In the design of the Model T's directional indicator, the change in perspective (the ‘second half of change’ [2]) lagged behind the change in automotive technology itself. The mental model of a rider on horseback led to the design of a small indicator stick that protruded from the appropriate side of the vehicle, the ‘trafficator’, mimicking the horse rider's arm being held out to show their intent to move left or right. Of course, this was a near-useless means of signaling, especially in low light or other conditions of poor visibility. A shift in perspective only occurred several years later, when it was realized that automobiles, unlike horses, could be painlessly fitted with electric indicator lights on both flanks, front and rear, to provide a clear indication of directional intent.

So it is with bioanalytical quality assurance. Our quality management systems and information-handling technology are no longer the horses we trotted around on in the mid-70s. Viewed from today's quality management perspective, we are now driving the figurative equivalent of a Tesla, a sleek modern data-driven vehicle fitted with:

  • Quality-by-design [3] as a guiding principle.

  • Quality management systems (QMSs) working in an integrated fashion to assure quality [4].

  • Validated electronic workflows, eliminating manual entry and human intervention.

  • Quality control (QC), the inspectional component of the QMS that ensures product quality [5].

Unfortunately, there remains a trafficator on some of these bioanalytical ‘Teslas’: the notion of ‘QA'd’ data and reports that have been subjected to ‘audit’ by the QAU to ensure they reflect the raw data. Like the Model T's trafficator, this artifact of prior thinking is not effective in achieving its intended purpose. It also stems from the ‘second half of change’ not occurring fully, manifested in the slowness of GLP (and, accordingly, GCLP) to modernize its concept of quality assurance. The original GLP vision was to have the QAU as an independent internal entity, a ‘mini-FDA’ within each company, verifying the integrity and trustworthiness of study data. This view is still prevalent, with quality assurance in the bioanalytical laboratory being perceived as a group that ‘QA's’ datasets and reports to ensure they are error free. This is counter-productive, undermining bioanalytical quality and compliance in four principal ways:

  • Constraining Quality Assurance: Quality Assurance (QA) is the “part of quality management focused on providing confidence that quality requirements will be fulfilled” [5], i.e., confidence that processes are capable of delivering quality, rather than inspection of products to confirm that quality requirements were fulfilled. The latter responsibility, checking the details of every study, must be the remit of the groups performing the studies, both to ensure that quality problems are addressed during study performance and to establish a culture where operational teams lead for, and are accountable for, quality and compliance. This includes QC programs and their associated data, which can be used to refine and improve processes. QA should remain focused on identifying potential ‘errors that matter’: the bigger issues created by dynamic change or process deficiencies. Constraining QA to devote significant resources to repetitive review of details whose quality has already been assured by a QMS represents a huge opportunity cost. QA groups need the liberty to realize their full potential in assuring quality, including auditing systems and processes, evaluating new technology and process changes, contributing to effective CAPA programs and quality improvement, and fostering an organizational culture of quality.

  • Shifting the burden: QA audit of deliverables is an example of the systems thinking ‘shifting the burden’ archetype [6]. According to this archetype, shifting the burden for assuring quality from the operational groups to an external party (the QAU) will produce short-term improvements as the symptoms of quality problems are remediated. Nevertheless, as the pressure on the operational teams to improve is reduced, the root causes go unresolved, and quality will degrade over the longer term.

  • Insufficient focus on systemic issues: auditing individual studies makes it more difficult to recognize trends and improve processes, as this promotes a study-centric focus rather than a holistic view across processes and their interconnections.

  • Operational burden: maintaining such an audit program represents a considerable workload not only for QA but also for business partners, who must respond to a high volume of audit reports and findings. This is another significant opportunity cost: laboratory operations time that could instead be invested in process and quality improvement.

If the bioanalytical industry continues to cling to the idea of ‘QA'd’ data, it will remain in its state of cognitive dissonance, simultaneously believing the cliché that “you can't inspect quality in” while requiring that a QAU inspect quality in at the end of the bioanalytical process. One way forward could be an industry-wide reset of what the term ‘QA'd’ represents. If we collectively understand that this no longer means that a deliverable was audited by the QAU, but rather that its quality has been assured by a QMS, we would be more aligned with modern quality management principles. This definition would be consistent with the GCP view that quality assurance is: “all those planned and systematic actions that are established to ensure that the trial is performed, and the data are generated, documented (recorded), and reported in compliance…” [7]. This shift in understanding might be one small step for semantics but one potentially giant leap for quality and compliance (with apologies to Neil Armstrong…).

There are two other vestigial mindsets that remain prevalent in bioanalytical quality. One is the use of sampling for data verification, selecting a certain percentage of results for QC or QA review. This is suboptimal because the methodology includes no element of risk to direct the review, and it ignores the ability of today's electronic data handling systems to allow rapid visualization of datasets, and their metadata, in their entirety. There can still be a role for statistical sampling, but the mindset that promotes its use is that of the auditor faced with a thick paper report full of tables, who must select a certain number of the results to compare with raw data, diligently highlighting each number checked. However, the errors that matter are not discrete errors in particular data points; they are anomalous patterns and trends indicating deeper effects that cannot be seen with a report and a highlighter pen. The development of methods for visualizing data to detect such trends within and across studies [8,9], using automated data integrity flags where possible, should be encouraged in lieu of statistical sampling. These techniques can also be applied to metadata, such as audit trails, to look for patterns that may signal data integrity issues.
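As an illustration only, the minimal Python sketch below reviews a run-level QC dataset in its entirety and applies a simple automated integrity flag. The table layout and column names (study, run_id, qc_level, nominal, measured), the synthetic data and the robust 3-sigma threshold are all assumptions made for this sketch, not a standard format or a prescribed acceptance criterion.

    import numpy as np
    import pandas as pd

    # Build a hypothetical run-level QC table: one row per QC sample.
    # Column names are illustrative assumptions, not a standard format.
    rng = np.random.default_rng(7)
    rows = []
    for study in ("STUDY-A", "STUDY-B"):
        for run in range(1, 21):
            for level, nominal in (("low", 3.0), ("mid", 50.0), ("high", 400.0)):
                measured = nominal * (1 + rng.normal(0, 0.04))  # ~4% assay noise
                rows.append((study, f"{study}-R{run:02d}", level, nominal, measured))
    df = pd.DataFrame(rows, columns=["study", "run_id", "qc_level", "nominal", "measured"])
    df.loc[df["run_id"] == "STUDY-B-R13", "measured"] *= 1.25  # plant a drifted run

    # Review the whole dataset, not a sample: percent bias per QC sample,
    # summarized per run across both studies.
    df["bias_pct"] = 100 * (df["measured"] - df["nominal"]) / df["nominal"]
    run_bias = df.groupby(["study", "run_id"])["bias_pct"].mean()

    # Automated integrity flag: runs whose mean bias sits more than three
    # robust standard deviations from the cross-study median.
    center = run_bias.median()
    sigma = 1.4826 * (run_bias - center).abs().median()  # MAD-based estimate
    flags = run_bias[(run_bias - center).abs() > 3 * sigma]
    print("Runs flagged for anomalous bias:")
    print(flags)

The same pattern extends beyond results to metadata: replacing the bias values with, for example, counts of record modifications per user per run turns the identical logic into a simple audit trail screen.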

The second persistent paradigm, which like GLP has its origins in worthy and necessary aims, is that “labs shouldn't be looking at data”, meaning that bioanalytical laboratories should not make decisions based on resultant pharmacokinetic profiles. While it is true that acceptance or rejection of data should not be biased by looking at results, a current challenge is improving communication and mutual understanding between all stakeholders in drug development. The errors that really matter in bioanalysis, including the problems that have led to high-profile closures of bioanalytical laboratories, are often detected too late, by regulatory agency reviewers spotting inconsistencies during review of submissions [10]. The laboratory has a role to play in the early detection of such discrepancies, working in conjunction with pharmacokineticists and clinicians to ensure data quality and guide decision-making.
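In the same spirit, and drawing on the kind of manipulation screen described by Fuglsang [9], the toy Python sketch below flags implausibly similar concentration-time profiles across subjects. The data layout and the 1% similarity threshold are assumptions for illustration; any real screen would need to be calibrated against actual assay variability.

    import numpy as np

    # Hypothetical concentration-time matrix: rows = subjects, columns = timepoints.
    rng = np.random.default_rng(1)
    profiles = rng.lognormal(mean=1.0, sigma=0.3, size=(12, 8))
    profiles[7] = profiles[2] * 1.001  # plant a suspicious near-copy

    # Compare every pair of subjects; agreement far tighter than plausible
    # assay variability (here, an assumed 1% threshold) warrants review.
    n = len(profiles)
    for i in range(n):
        for j in range(i + 1, n):
            rel_diff = np.abs(profiles[i] - profiles[j]) / profiles[j]
            if rel_diff.max() < 0.01:
                print(f"Subjects {i} and {j}: near-identical profiles - review")

Such a check does not require the laboratory to interpret the pharmacokinetics; it merely surfaces patterns that pharmacokineticists and clinicians can then assess, without biasing the acceptance or rejection of any individual result.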

Collectively shifting our perspective to remove such trafficators from the splendid bioanalytical ‘Tesla’ might help unleash the true potential of QA professionals and yield disproportionate benefits for our industry and the patients it serves.

Financial disclosure

The author has no financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. This includes employment, consultancies, honoraria, stock ownership or options, expert testimony, grants or patents received or pending, or royalties.

Competing interests disclosure

The author has no competing interests or relevant affiliations with any organization or entity involved with the subject matter or materials discussed in the manuscript. This includes employment, consultancies, stock ownership or options and expert testimony.

Writing disclosure

No writing assistance was utilized in the production of this manuscript.

References

  • 1. Food and Drug Administration, Searle Investigation Task Force. Report of preclinical (animal) studies of G.D. Searle Company. Skokie, IL (1976).
  • 2. De Brabandere L. The forgotten half of change: achieving greater creativity through changes in perception. Diversion Books (2016).
  • 3. Clinical Trials Transformation Initiative. CTTI recommendations: Quality by Design (2015). tracs.unc.edu/docs/regulatory/CTTI_Recommendations-Quality_by_Design.pdf
  • 4. American Society for Quality. What is a quality management system (QMS)? https://asq.org/quality-resources/quality-management-system
  • 5. American Society for Quality. Quality glossary definition: quality assurance/quality control (QA/QC). https://asq.org/quality-resources/quality-assurance-vs-control
  • 6. Senge P. The fifth discipline: the art and practice of the learning organization (revised & updated edition). Doubleday, New York (2006).
  • 7. International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use. ICH harmonised guideline: integrated addendum to ICH E6(R1): guideline for Good Clinical Practice E6(R2). Fed. Reg. 82(143), 30829–30872 (2017).
  • 8. Kaza M, Rudzki PJ. Visualizing bioanalytical methods – a near or distant future? Bioanalysis 12(7) (2020).
  • 9. Fuglsang A. Detection of data manipulation in bioequivalence trials. Eur. J. Pharm. Sci. 156, 105595 (2021).
  • 10. US Food and Drug Administration, Department of Health and Human Services. Letter to Krathish Bopanna (2016). https://www.fda.gov/media/97424/download
