. Author manuscript; available in PMC: 2023 Oct 3.
Published in final edited form as: ACS Nano. 2022 Dec 27;17(1):4–11. doi: 10.1021/acsnano.2c09249

On the issue of reliability and repeatability of analytical measurement in industrial and academic nanomedicine

Shahriar Sharifi 1,*, Nigel F Reuel 2,3, Nathaniel E Kallmyer 3, Ethan Sun 4, Markita P Landry 4,5,6,7,*, Morteza Mahmoudi 1,*
PMCID: PMC10546893  NIHMSID: NIHMS1932841  PMID: 36573831

Abstract

The issue of reliability and repeatability of data in the nanomedicine literature is a growing concern among stakeholders. This comment discusses the key differences between academia and industry in the reproducibility of data acquisition and protocols in the field of nanomedicine. We also discuss what academic researchers can learn from systems implemented in industry to standardize data acquisition, and in which ways these can be efficiently adopted by the academic community.

Keywords: standards, nanomedicine, reproducibility, repeatability, medical devices, quality control, good manufacturing practice

Graphical Abstract



With the advancement of nanomedicine over the past decade, there have been major efforts across scientific publications to assess the reproducibility of experimental findings. However, studies assessing experimental reproducibility, especially in the field of nanomedicine, broadly defined as the use of nanoparticles for clinical purposes, have produced concerning results; the reliability/repeatability issue in nanomedicine is more evident in academic research papers than in industry. It is noteworthy that i) the reliability/repeatability issue is not limited to nanomedical studies, and ii) even papers published in very prestigious journals have been reported to have methodological reliability and reproducibility issues.1 However, the problem may seem more pronounced in the nanomedicine field due to its multidisciplinary nature. Nanomedicine is generally defined as the use of nanomaterials in the diagnosis, monitoring, control, prevention, and treatment of diseases2 and shares many characterization techniques and methodologies (e.g., advanced imaging and ‘omics) with other fields such as materials science, chemistry, biology, physics, and the pharmaceutical sciences. As such, it is challenging to establish broad standards across all areas of nanomedicine research.

The issues of reproducibility and inconsistency of nanomedicine results have multiple facets. The expectation of reproducible or replicable results in the nanomedicine literature can be misleading while basic definitions of these parameters remain inconsistent. First, reproducibility and repeatability are often used interchangeably; however, they are not the same. The result of an experiment is reproducible if it can be arrived at by a different team using the same experimental data and methods, the same type of instrumentation, and the same operating conditions. In other words, the result of an experiment is reproducible if the same outcome can be obtained in different labs.

Repeatability, on the other hand, means that the same results can also be obtained by different teams using different instrumentation (e.g., characterization tools) and under variable operating conditions, or by the same team on different days.3 Differences in the analytical techniques and instrument operating conditions used to assess the performance of nanoparticles (NPs) in nanomedicine formulations contribute to such repeatability and reproducibility issues. For example, each of the physicochemical properties of NPs (e.g., size, shape, and surface charge) can contribute to the performance of a nanomedicinal technology, yet the variability in assessing these physicochemical properties across research studies makes it difficult to understand the underlying phenomena that drive NP interactions with biological systems. Furthermore, it can be challenging to perform quality control (QC) at many points in an experiment or an instrument’s operation: unlike the macroscopic/visual QC validations that occur on mechanical assembly lines, nanoparticles require specialized instrumentation, and thus each QC step is costly and time-consuming.

There have been numerous academic efforts to improve the reliability/repeatability of the nanomedicine literature. Factors such as the lack of proper characterization of nanomaterials4,5 and the employment of improper analytical methodologies have been identified as contributing to this issue in general, with particularly poor consequences for translational clinical outcomes. However, other issues, such as the lack of a systematic reporting system for various parameters in nanomedicine studies, have also been held responsible. For example, the use of a reporting checklist, Minimum Information Reporting in Bio-Nano Experimental Literature (MIRIBEL), as a requirement for publication has recently been proposed as an effective way to address the reliability/repeatability issue.5 A standard operating procedure, i.e., a description of regularly recurring operations relevant to the quality of the investigation, has also been proposed as an effective tool for increasing reliability/repeatability in the nanomedicine literature. Studies have shown that the lack of standard/shared guidelines on sample preparation and instrument use, even for relatively monodisperse samples and basic particle size determination, causes significant variability across laboratories. One strategy that may improve consistency and reproducibility in nanomedicine is to use standard methodological and characterization techniques for evaluating nanomaterials and their interactions with biosystems. Details of these standards are available elsewhere.6

Why do these solutions remain ineffective or only narrowly implemented? Expert responses from the nanomedicine community, despite supporting the initial goals of the MIRIBEL checklist, demonstrated the need for a list covering a much broader range of aspects of nanomedicine, including controlling the quality of raw materials and implementing a detailed quality characterization protocol.7,8 Besides, the checklist only ensures the availability of minimum reporting information on material characterization, biological characterization, and experimental details, not the quality/robustness of the employed techniques/methodologies. Critical information, such as the type and methodology of instrumentation, the validation requirements of characterization methods or instrumentation, the source of raw materials, and many other critical details, needs to be reported to ensure comparability and can vary across different nanoparticle types.

Key differences in scientific rigor and reproducibility between academia and industry

One might assume that if unreliability or a lack of reliability/repeatability were a major barrier in nanomedicine, few products could pass the tough regulatory standards required to reach the market. However, market data reveal that many medical products already use nanotechnology. For example, according to the Food and Drug Administration (FDA) database of medical devices, about 2586 different ‘nano’ medical devices were sold in the US from 1980 to 2017.9 These mainly included in vitro diagnostic devices such as nanosilver or nanogold particle-based diagnostics, orthopedic and dental implants with nanostructured surfaces, wound dressings containing silver nanoparticles or nanofibers, nano-calcium-phosphate bone void fillers, stents and vascular grafts with nanomaterial coatings, and catheters coated with silver or other nanomaterials. Therefore, it is likely that industry standards regarding experimental standard operating procedures and quality control contribute to the successes of nanotechnology in industry in a manner that could be useful to academic research in this sphere. Figure 1 presents the key differences in the reliability/repeatability of nanomedicine procedures between academia and industry.

Figure 1.


The major differences between academic and industry processes that can become sources of reliability or repeatability issues in nanomedicine. Red text indicates that a process is poorly implemented, orange that it is partially implemented, and green that it is fully implemented.

Specifically, the quality control of raw materials plays a critical part in the quality and reliability of research. Raw nanomaterials for academic research are usually purchased from R&D vendors, and researchers are highly reliant on the specifications provided by the vendor; few routinely assess the analytical properties and physicochemical characteristics independently of the vendor specifications. As there are no robust processes for vendor verification/auditing, it is not possible to verify the analytical processes used to characterize raw nanomaterials beyond what the vendors provide, who often cite proprietary processes. Furthermore, due to the nature of academic research, in which lead researcher and source material turnover is common, frequent changes of vendors are commonplace and contribute to further discrepancies in research reproducibility. In aggregate, these factors lead to inconsistency in the initial raw materials used, ultimately affecting the reliability or repeatability of experiments.

The lack of equipment validation, i.e., the process of testing and analyzing operations to guarantee that the output they produce will consistently fulfill the end user’s needs, is another major source of reliability issues in nanomedicine. In addition, due to the specialized nature of nanomedicine research areas, labs may need to rely on custom-built or in-house equipment in lieu of commercially available equipment. Custom-made equipment is difficult to standardize across labs and can further contribute to variability between labs. Validation, especially of analytical methods and equipment (installation qualification, IQ; operational qualification, OQ; and performance qualification, PQ), is seldom conducted in nanomedicine experiments; when it is implemented, the process is often neither reported in the literature nor verifiable. Unlike the standards in academia, both source material and equipment validation are required in industry, yielding high reliability/repeatability of experiments.

One final factor contributing to the repeatability issue in academia is the lack of an established oversight process aside from peer review, which focuses predominantly on research impact, significance, and procedure rather than on validation of analytical methodology and QC. The academic environment does not emphasize the detection of methodological errors or the establishment of processes to prevent them, and it is not inherently incentivized to explore research impact beyond proof of concept. Moreover, QC of whole nano-assemblies can be challenging, and researchers may simply evaluate end products rather than vetting each production step. Experiments are carried out at the experimenter’s discretion, and QC is usually limited to data oversight and manuscript editing by co-contributors, collaborators, and co-authors. This level of QC can be rigorous and robust but can also be emphasized less stringently on a manuscript-by-manuscript or group-by-group basis. In contrast, industry has well-established systems to detect, correct, and prevent future errors in any process, including experimentation, with QC pipelines that apply broadly across the company or the whole industry, improving repeatability and allowing researchers to troubleshoot underperforming batches. The absence of such a system in academia can harm the reliability and repeatability of results.

Implementation of robust quality systems

Academic research largely lacks broad-reaching, multi-lab approaches for assessing data quality, analytical methodology validation, and overall research reliability and repeatability. This lack of universally established and implemented quality controls in the academic research setting may in part be what prompted the development of research standards and regulations that could be applied to academic research. In industry, R&D on nanomaterials for biomedical applications is conducted under quality control pipelines established by several international standards, such as ISO 17025 (General requirements for the competence of testing and calibration laboratories) and ISO 13485 (Medical devices — Quality management systems — Requirements for regulatory purposes), or by the quality system regulation (QSR), good manufacturing practice (GMP), or good laboratory practice (GLP) regulations. These standards allow an organization to establish technical competence in using analytical instruments or testing nanomaterials to generate reliable and reproducible results. As part of these systems, many other standards, such as those for nanomaterials characterization, toxicity evaluation, or the use of instrumentation for specific nanomaterials, may become applicable.6 Implementing these quality management system standards, as well as the specific standards, includes establishing a quality assurance team independent of the organization’s research arm to verify and document compliance; verifying the maintenance and calibration of analytical instruments; verifying the validation of analytical methods; documenting and publishing both positive and negative results; and developing standard operating procedures. Academic settings or laboratories, especially in small colleges or universities, seldom implement such standards or regulations.
Although financial or budgetary limits contribute to this, scientists and academic researchers often do not receive formal training on quality- or GLP-related subjects during their education.10,11 Even if formally establishing these regulatory practices is not achieved in academia, exposing trainees to their existence and to best practices for their implementation would help contextualize the importance of experimental rigor and reproducibility. While in industry such quality management system standards and similar regulations are ultimately established to protect patient health and safety, their implementation in the academic research setting is currently not mandatory, since the outputs of academic research usually do not directly affect people’s health. On the other hand, the lack of controls or robust quality systems in the academic setting can yield conflicting results, especially in the field of nanomedicine, which involves laboratories from several different disciplines.

It is worthwhile to note that even in industrial settings, most industry sectors do not follow strict GLP procedures, especially for R&D studies, because the infrastructure, timeline, and costs associated with GLP studies are high and largely unaffordable for preliminary R&D; such studies often rely on final product performance and assume that QC validation is unnecessary if performance is on par with expectations. Nevertheless, although controls in industry R&D projects are less streamlined when GLP systems are not employed, they benefit from a broader network of standardized operating procedures than is available to a single academic research group. For instance, industry R&D benefits from the company’s established internal infrastructure and prior experience across departments and projects, owing to those projects’ ties to similar overarching company goals. In such cases, information regarding measurement calibration, analytical validation, and established controls is shared and upheld among different projects and company sectors, which can enhance the quality and repeatability of R&D endeavors and outputs.

It is also important to understand that implementing current GLP or a similar oversight system in universities seems cost-impractical and too complex for a research infrastructure that lacks the goals naturally present in industry R&D. Quality management systems are often designed with customer satisfaction as the final goal and span industry sectors and departments in a manner that academic research groups lack. Such standards should not slow down innovation in academia but should still establish minimum controls for basic critical processes, such as characterization of raw materials, use of validated analytical methods, standardized reporting guidelines, and the possibility of auditing by third parties (possibly from federal research sponsors) to verify the implementation of these rules.

Development and use of validated methods in analytical experimentation

The implementation of GLP or any quality control system in academia may be viewed as unnecessary or resource-intensive, limiting scientific innovation. GLP and quality systems comprise rules and guides that prevent variable results and inconsistent product quality. At a minimum, however, the proper employment of analytical techniques to improve the repeatability and reliability of nanomaterials research can save money and time, considering the resources lost to building academic innovation on studies performed under poor experimental conditions and with poor repeatability. As such, though GLP and other quality systems have clear recommendations for many aspects of performing research, perhaps the most important and most easily implemented aspect of GLP oversight is the consolidation and use of validated analytical or testing methods to ensure that nanomaterial characterization is accurate and repeatable.

Surprisingly, the topic of analytical chemistry reproducibility and validity has not always received sufficient attention, especially in the field of nanomedicine. For example, as an important part of analytical process control, it is critical that analytical methods, or any process whose output cannot be directly verified, are properly validated through laboratory studies to ensure that their performance meets the requirements of the intended applications. According to the United States Pharmacopeia (USP), the analytical characteristics of a method, including accuracy, precision, specificity, detection limit, quantitation limit, linearity, range, and robustness, need to be identified and validated to ensure the acquisition of reliable data. However, the current nanomedicine literature focuses on the formulation aspect of drug nanocarriers and drug development, while the analytical methodology for detecting drug residues or characterizing carriers has received less attention or has been limited to only a few methods, such as high-performance liquid chromatography for pharmaceutical analysis and quantification of active pharmaceutical ingredients in the presence of nanocarriers. Standardization of validation requirements is not common practice for the majority of the other important analytical techniques used to characterize nanomaterials or carriers, such as dynamic light scattering (DLS), zeta potential measurement, scanning electron microscopy (SEM), or transmission electron microscopy (TEM). Additionally, there are often no predetermined sample sizes for measuring an effect, such that the number of AFM or SEM images taken depends on how many are needed to observe an effect, an approach susceptible to p-hacking.12 Hence, there is a great risk that non-standard or uncontrolled analytical processes may compromise methodological quality and, consequently, the reliability of published research, especially in the field of nanomedicine, as we discuss in more detail below. Fortunately, the nanomedicine community is making great progress in understanding and addressing this gap.13
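To make the validation characteristics above concrete, the following is a minimal sketch of estimating linearity, detection limit, and quantitation limit from a calibration curve, using the common ICH Q2(R1)-style formulas LOD = 3.3·σ/S and LOQ = 10·σ/S (S: slope; σ: residual standard deviation). The function names and calibration data are illustrative assumptions, not taken from any specific study.

```python
# Sketch: calibration-curve validation metrics (linearity, LOD, LOQ)
# using common ICH Q2(R1)-style formulas; data below are illustrative.

def least_squares(x, y):
    """Ordinary least-squares fit y = a + b*x; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

def validate_calibration(x, y):
    """Return slope, intercept, r^2 (linearity), LOD, and LOQ."""
    a, b = least_squares(x, y)
    n = len(x)
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    sigma = (sum(r * r for r in resid) / (n - 2)) ** 0.5  # residual std. dev.
    my = sum(y) / n
    r2 = 1.0 - sum(r * r for r in resid) / sum((yi - my) ** 2 for yi in y)
    return {"intercept": a, "slope": b, "r2": r2,
            "lod": 3.3 * sigma / b,    # detection limit
            "loq": 10.0 * sigma / b}   # quantitation limit

# Illustrative data: drug concentration (ug/mL) vs. detector response.
conc = [1.0, 2.0, 4.0, 6.0, 8.0, 10.0]
signal = [2.1, 4.0, 8.2, 11.9, 16.1, 19.9]
report = validate_calibration(conc, signal)
print(report)
```

In a validated method, r², LOD, and LOQ would then be checked against predefined acceptance criteria before the method is applied to real samples.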

Use of standards in analytical or experimental methods in nanomedicine

Standards are published documents that establish technical specifications and procedures designed to maximize the reliability of the materials, products, methods, and/or services people use every day. To expand the reproducibility and reliability of nanomedical and related characterization, different organizations have contributed significantly to the field (Table 1).

Table 1.

Some organizational efforts toward standardization of nanotechnology-based products, including medical and pharmaceutical products.

Standard/Guideline: Guidelines related to the use of nanotechnology in devices and/or drugs
Definition: Guidelines on nanotechnology products
Source: Nanotechnology Guidance Documents | FDA

Standard/Guideline: Protocols provided by the Nanotechnology Characterization Laboratory (NCL)
Definition: Protocols and guidelines for measuring nanomaterial properties and biological behavior
Source: Protocols - Nanotechnology Characterization Lab - NCI (cancer.gov)

Standard/Guideline: ISO/TC 229
Definition: Nanotechnology research standards developed or being developed for nanomaterial characterization
Source: ISO - ISO/TC 229 - Nanotechnologies

Standard/Guideline: ASTM E56
Definition: ASTM standards related to nanotechnology
Source: Committee E56 on Nanotechnology (astm.org)

Standard/Guideline: Additional standards
Definition: Various goals, including promotion of commercialization of nanotechnology R&D
Source: Standards for Nanotechnology | National Nanotechnology Initiative

One set of available and reliable standards for nanomedicine characterization is provided by the International Organization for Standardization (ISO; www.iso.org). Table 1 shows some of the guidelines that support the reproducibility of nanomedical characterization results. Notably, nanomaterials characterization inherently requires more orthogonal levels of characterization and validation than that of bulk materials, owing to the polydispersity and batch variability inherent in the generation of nanomaterial samples. For this reason, testing conditions need to be reported more clearly to make interlaboratory comparisons meaningful. Many such standards are already established and used in materials science, especially for bulk materials characterization. For example, for a simple tensile test of a thin plastic sheet, several test parameters, such as sample shape and size, crosshead speed, and conditioning and testing temperature and humidity, should be set and reported according to the specific guidelines in ASTM D882 or its ISO equivalent (ISO 527–3, Plastics — Determination of tensile properties — Part 3: Test conditions for films and sheets) to guarantee the repeatability and accuracy of the results.

Similar efforts to standardize nanomaterials used in nanomedical tests have also been ongoing. For example, the minimum reporting requirements for DLS in ISO 22412:2017 (Particle size analysis — Dynamic light scattering (DLS)) include: particle concentration (mass- or volume-based), dispersion medium composition, refractive index values for the particles and the dispersion medium, viscosity of the medium, measurement temperature, filtration or other procedure used to remove extraneous particulates/dust prior to analysis (including pore size and filter type), cuvette type and size (pathlength), instrument make and model, scattering angle(s), and laser wavelength. Very few researchers report all of this information in the literature, which negatively influences the reliability and comparability of results.14 Ongoing efforts to standardize nanomaterial characterization and toxicity assessment are being undertaken by national and international organizations such as ISO and ASTM. For example, ISO/TC 229 – Nanotechnologies has already published 100 ISO standards related to nanotechnology. The details and scope of these standards are beyond the scope of this manuscript and are reported elsewhere.6
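A reporting checklist such as the ISO 22412 list above can be enforced mechanically, e.g., as a completeness check run before submission. The sketch below checks a measurement record against the minimum reporting items; the field names and the example record are our own hypothetical encoding, not part of the standard.

```python
# Sketch: completeness check of a DLS measurement record against the
# ISO 22412:2017 minimum reporting items listed in the text. Field names
# are a hypothetical encoding chosen for this example.

ISO_22412_MIN_FIELDS = {
    "particle_concentration", "dispersion_medium_composition",
    "particle_refractive_index", "medium_refractive_index",
    "medium_viscosity", "measurement_temperature", "filtration_procedure",
    "cuvette_type_and_pathlength", "instrument_make_model",
    "scattering_angles", "laser_wavelength",
}

def missing_dls_fields(record):
    """Return the minimum-reporting items absent or empty in a record."""
    return {f for f in ISO_22412_MIN_FIELDS if record.get(f) in (None, "")}

# Hypothetical record reconstructed from a typical methods section:
record = {
    "instrument_make_model": "Acme DLS-1000",   # made-up instrument
    "measurement_temperature": "25 C",
    "laser_wavelength": "633 nm",
    "dispersion_medium_composition": "PBS, pH 7.4",
}
missing = missing_dls_fields(record)
print(sorted(missing))  # seven of the eleven items are unreported
```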

Although these standards and guidelines are helpful, it is worth noting that nanomedicine is a very broad field. Each class of nanomaterials and each category of drug product (proteins, nucleic acids, small-molecule drugs) delivered by nanoparticles, as well as the drug vs. device fields, have their own distinctions and requirements. More established materials that are already in the clinic have FDA guidelines and ASTM/ISO standards, whereas newer materials, such as many metal or polymeric nanoparticles, are not in the clinic and do not yet have FDA, ASTM, or ISO recommendations. In these cases, the regulation of nanomaterials, especially nanomaterials used in medical devices or as nano-pharmaceuticals, remains a challenge; there are some promising advancements in this regard, including the establishment of toxicity standards such as ISO/TR 10993–22:2017 (Biological evaluation of medical devices — Part 22: Guidance on nanomaterials).15 In general, it is important that researchers gain awareness of these standards and guidelines, ideally during their training, and adopt them into their existing research endeavors.

Review process

Another issue contributing to the reliability/repeatability problem is the review process. Academic review is literature-based, and the focus is on tools, techniques, findings, and impact rather than on the details of the methodology used and the factors affecting measurement precision. Academic peer review may also come from scientists who are not experts in the manuscript’s field and may not know to recommend the validation tools needed to support the authors’ claims. In academia, third-party review is rare, and where it is implemented, such as in multi-principal-investigator settings or multi-institute government-funded centers, the review is seldom onsite and is limited in scope, with criteria that rarely focus on experimental methodology. In contrast, review in industry is carried out by independent or governmental organizations with trained teams that review processes in much more detail, both onsite and offsite, against known standards and regulations applied similarly across independent industry research organizations.

There are significant differences in the review process between industry and academia. All processes in industry, ranging from incoming inspection, method validation, manufacturing steps, training and qualification of personnel, calibration, cleaning, and documentation to the purchasing of raw materials and active pharmaceutical ingredients (APIs), are subject to frequent, rigorous review, both internally by company experts and externally by regulatory agencies. The review process includes physical inspection of the site and its documents as well as offsite document review by expert teams from regulatory agencies. For each subject, review and auditing criteria are clearly established as international standards or regulations. In reviewing any one product, several teams and a variety of experts are involved, and the process can take from several days to several months. This process ensures maximum repeatability. In comparison, the review process for an academic paper is much simpler. Most journals do not require any evidence regarding the experimental conditions, method validation, the analytical instruments used and their calibration, or the training and qualifications of the people involved. The review is limited to a few referees who assess the paper with minimal standardized criteria and often without seeing the experimental lab space, the raw data, or the analysis methods or code. One may also acknowledge the challenges academic settings face in securing a fair and unbiased peer review, especially for fields that are inherently interdisciplinary.16 Reviewing experimental conditions, method validation, and analytical instrumentation is left to the discretion of reviewers and editors, who do not usually request additional information on the experimental conditions, methodologies, and/or instrumentation.

Recommendations to improve the academic nanoscience literature

The examples presented above, along with many other efforts reported in the literature to improve reliability and repeatability in life science research and especially in nanomedicine, offer only a partial solution to the reproducibility of the nanotechnology literature.17–19 Frequent recommendations include the use of authenticated raw materials, thorough description of methods, sharing of raw data, training in the design of experiments and statistical analyses, publication of all data including negative results, and proper documentation of study procedures and results accompanied by publication of electronic notebooks in open notebook formats. Other recommendations include efficient training of lab members alongside experts in specific characterization techniques and standardization of methodological approaches to ensure the robustness of the outcomes.7,8,20

Implementation of quality management systems or GLP adapted to the requirements of academic settings is an essential step toward improving scientific repeatability in nanomedicine. These systems are based on the PDCA (plan–do–check–act) cycle, an iterative design and management method used to control and continuously improve processes and products. Without proper implementation of a quality system or GLP, PDCA is compromised, and the vicious cycle of flawed reliability/repeatability continues. Moreover, the devil is in the details of proper implementation of these rules. Training programs for researchers, especially in the interdisciplinary field of nanomedicine, usually do not cover GLP or quality control systems, and there are no guidelines or mandates for establishing these rules in the lab or for the publication of academic research.

Although the reliability/repeatability issue in nanomedicine could be addressed to a great extent by implementing quality systems or GLP in the lab, doing so can be expensive and, in many cases, not feasible. However, the part of GLP or a quality system that requires validation of analytical methods and other non-verifiable processes is still applicable and should be fully implemented in academic settings. The implementation of these processes should be verified and audited by third-party specialists and accredited organizations, or by sponsors of academic research, akin to the standardized guidelines now well established for animal use protocols. Journals could also ask for certification of validation as well as evidence of GLP implementation, especially for critical processes such as analytical characterization or method validation in nanoscience. Many journals have begun standardizing data collection, analysis, and reporting methodologies in the biological sciences to increase rigor and reproducibility. Support from academia is required for these efforts to take hold, and some academic institutes have built nanomaterials-specific characterization centers with full-time staff who are experts in nanoparticle characterization techniques, thereby minimizing user-based errors.

To establish standards and validate analytical techniques in academia and industry, a diverse range of stakeholders (including researchers, universities, companies, funding agencies, and journals) should continue the conversation on best practices in nanomedicine.20 Collaboration between academic and industry researchers can also ensure that the standards and GLP practices that maximize repeatability in industry are incorporated into academia. Successful startups in the nanomedicine space could provide hints for the successful translation of tools from industry to academia, as these companies have performed R&D to an industry standard without cost-prohibitive obstacles or reliance on well-established infrastructure and know-how.

Academia can also identify the need for new standards that could play a profound role in improving the safety and diagnostic/therapeutic efficacy of nanomedicine technologies. For example, a very recent evaluation of the protein corona, the layer of proteins that forms on the surface of nanomedicines upon their interaction with biological fluids, revealed that variations in mass spectrometry protocols and instruments can cause unexpected variations in protein corona results.21 More specifically, when identical nanoparticle protein corona samples were sent to 17 different mass spectrometry centers and the results analyzed, only 1.8% of the identified unique proteins were shared across the centers.21 The outcome of this study suggests an urgent need to develop new standard protocols for protein corona analysis.
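The degree of cross-center agreement in such a study can be expressed as the fraction of all identified unique proteins that every center reported. Below is a minimal sketch of that metric with made-up protein identifiers, not the actual data of the cited study.

```python
# Sketch: fraction of unique proteins identified by every center, the
# agreement metric described in the text. Protein IDs are made up.

def shared_fraction(center_results):
    """Proteins found by all centers / all unique proteins found anywhere."""
    all_proteins = set().union(*center_results)
    shared = set.intersection(*(set(c) for c in center_results))
    return len(shared) / len(all_proteins)

centers = [
    {"ALB", "APOA1", "FGA", "TF"},   # center 1
    {"ALB", "APOA1", "C3", "HP"},    # center 2
    {"ALB", "APOE", "FGA", "C3"},    # center 3
]
frac = shared_fraction(centers)
print(f"{100 * frac:.1f}% of unique proteins shared")
```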

Because rapid personnel turnover is another prominent issue in academic settings, groups with the appropriate resources can hire staff with industry backgrounds to consult during material acquisition and experimental design, and recruit experts in analytical techniques to train new lab members. Repeatability may also be compromised by the pressure to publish frequently and to report only data that fit a status-quo narrative. Universities can support reproducibility by teaching basic GLP and experimental design standards to students in nanomedicine-related fields and by explicitly incorporating data integrity into career advancement, promotion, and tenure processes. Finally, funding agencies and journals can, and in some cases already have begun to, require information on reproducibility, sample size calculations, statistical analyses, transparent reporting, and repeatability, and should continue to reward transparency in the research and publication process22.

Currently, several quality systems are in use for the production of medicinal products or medical devices for human use (Table 2). For example, the FDA evaluates the implementation of QSR or GMP via on-site as well as off-site assessments. Having identified several challenges and gaps related to the physicochemical characterization, biocompatibility, and toxicity evaluation of nanoparticles incorporated into medical devices, the FDA launched the Nanotechnology Program in its Center for Devices and Radiological Health (CDRH). The program focuses on regulatory research into evaluating the physicochemical properties and toxicity of nanomaterials used in medical devices, and the impact of manufacturing processes on these properties23. It is worth noting that the toxicity of nanomaterials is a contentious area, with differing opinions even contributing to disparate regulatory outcomes, e.g., the divergent regulation of TiO2 in foods and cosmetics in the USA versus the EU24. Similarly, EU Regulation 2017/745 specifically recommends highly stringent conformity assessment procedures for medical devices containing nanomaterials, especially when high internal exposure is expected. Physical assessment and implementation of GMP or relevant quality standards are always prerequisites to any product certification.

Table 2.

Examples of quality management systems established in industry to control the quality, reproducibility, and safety of medical products used in humans

| Program | Definition | Examples | Standard in academic setting | Ref |
| --- | --- | --- | --- | --- |
| GLP | A quality system to control the reliability, uniformity, reproducibility, quality, and integrity of processes related to products in development for human or animal health | US: 21 CFR 58; EU: Directive 2004/10/EC; EU: Directive 2004/9/EC | No | 26 |
| GMP | Quality assurance and testing guidelines for the manufacture of products related to human health and safety, ensuring batch-to-batch consistency, quality, and safety | National regulatory agencies | No | 27 |
| ISO 17025 | The main ISO standard used by testing and calibration laboratories to ensure valid results | Several multidisciplinary accreditation bodies, depending on the country | No | 28 |
| QSR or ISO 13485 | The quality system mandated for all medical device manufacturers in the USA or EU to ensure safety and performance | US: 21 CFR Part 820; EU: ISO 13485 | No | 29,30 |

Many lessons can be learned from these standards and regulations, which aim to control the reliability, uniformity, reproducibility, quality, integrity, and validity of test results. As discussed above, a set of standards tailored to academic research endeavors, scale, and needs would be a great advance for the field. Supporting industry-academia research collaborations in nanomedicine could similarly help spread best practices informally. In summary, the key lessons learned from improving the reproducibility and repeatability of nanomedicine products in industrial settings could be transferred to the academic environment to maximize the repeatability of experimental findings, but doing so hinges on buy-in from academic researchers. Though the current system of nanoparticle QC and validation applied in industry might not directly suit the needs and scale of academic milieus, the standards built into industry QC pipelines provide a good template for building similar validation streams into academia. It is worth mentioning that there have been efforts to establish communication among regulatory bodies, academics, industry, and other organizations to address the above-mentioned points. For example, the Joint Research Centre (JRC) of the European Commission hosted a workshop to connect various stakeholders in the nanomedicine community with the aim of addressing methodological gaps. Such efforts may provide fertile ground for the development of innovative and optimized methods that increase reliability and reproducibility in nanomedicine25. Doing so will yield more broadly applicable and universally validated datasets in nanoscience, enabling more fruitful interdisciplinary endeavors in the use of nanotechnology in clinical practice.

Acknowledgements

MM gratefully acknowledges financial support from the U.S. National Institute of Diabetes and Digestive and Kidney Diseases (grant DK131417). We acknowledge support of a Burroughs Wellcome Fund Career Award at the Scientific Interface (CASI) (MPL), a Dreyfus Foundation award (MPL), the Philomathia Foundation (MPL), an NIH MIRA award R35GM128922 (MPL), an NIH R21 NIDA award 1R03DA052810 (MPL), an NSF CAREER award 2046159 (MPL), an NSF CBET award 1733575 (to MPL), a CZI imaging award (MPL), a Sloan Foundation Award (MPL), a USDA BBT EAGER award (MPL), a Moore Foundation Award (MPL), and a DOE Office of Science grant DE-SC0020366 (MPL). MPL is a Chan Zuckerberg Biohub investigator, a Helen Wills Neuroscience Institute Investigator, and an IGI Investigator.

Competing interests

Morteza Mahmoudi discloses that (i) he is a co-founder and director of the Academic Parity Movement (www.paritymovement.org), a non-profit organization dedicated to addressing academic discrimination, violence and incivility; (ii) he has a conflict of interest with Partners in Global Wound Care (PGWC), Targets’ Tip Inc., and NanoServ Inc.; and (iii) he receives royalties/honoraria for his published books, plenary lectures, and licensed patent.

References

1. Brembs B, Prestigious Science Journals Struggle to Reach Even Average Reliability. Frontiers in Human Neuroscience 2018, 12 (37).
2. Tinkle S; McNeil SE; Mühlebach S; Bawa R; Borchard G; Barenholz YC; Tamarkin L; Desai N, Nanomedicines: addressing the scientific and regulatory gap. Annals of the New York Academy of Sciences 2014, 1313, 35–56.
3. Plesser HE, Reproducibility vs. Replicability: A Brief History of a Confused Terminology. Frontiers in Neuroinformatics 2017, 11, 76.
4. Richardson JJ; Caruso F, Nanomedicine toward 2040. ACS Publications: 2020; Vol. 20, pp 1481–1482.
5. Faria M; Björnmalm M; Thurecht KJ; Kent SJ; Parton RG; Kavallaris M; Johnston APR; Gooding JJ; Corrie SR; Boyd BJ; Thordarson P; Whittaker AK; Stevens MM; Prestidge CA; Porter CJH; Parak WJ; Davis TP; Crampin EJ; Caruso F, Minimum information reporting in bio-nano experimental literature. Nature Nanotechnology 2018, 13 (9), 777–785.
6. Sharifi S; Mahmoud NN; Voke E; Landry MP; Mahmoudi M, Importance of Standardizing Analytical Characterization Methodology for Improved Reliability of the Nanomedicine Literature. Nano-Micro Letters 2022, 14 (1), 172.
7. Leong HS; Butler KS; Brinker CJ; Azzawi M; Conlan S; Dufés C; Owen A; Rannard S; Scott C; Chen C; Dobrovolskaia MA; Kozlov SV; Prina-Mello A; Schmid R; Wick P; Caputo F; Boisseau P; Crist RM; McNeil SE; Fadeel B; Tran L; Hansen SF; Hartmann NB; Clausen LPW; Skjolding LM; Baun A; Ågerstrand M; Gu Z; Lamprou DA; Hoskins C; Huang L; Song W; Cao H; Liu X; Jandt KD; Jiang W; Kim BYS; Wheeler KE; Chetwynd AJ; Lynch I; Moghimi SM; Nel A; Xia T; Weiss PS; Sarmento B; das Neves J; Santos HA; Santos L; Mitragotri S; Little S; Peer D; Amiji MM; Alonso MJ; Petri-Fink A; Balog S; Lee A; Drasler B; Rothen-Rutishauser B; Wilhelm S; Acar H; Harrison RG; Mao C; Mukherjee P; Ramesh R; McNally LR; Busatto S; Wolfram J; Bergese P; Ferrari M; Fang RH; Zhang L; Zheng J; Peng C; Du B; Yu M; Charron DM; Zheng G; Pastore C, On the issue of transparency and reproducibility in nanomedicine. Nature Nanotechnology 2019, 14 (7), 629–635.
8. Florindo HF; Madi A; Satchi-Fainaro R, Challenges in the implementation of MIRIBEL criteria on nanobiomed manuscripts. Nature Nanotechnology 2019, 14 (7), 627–628.
9. Jones AD 3rd; Mi G; Webster TJ, A Status Report on FDA Approval of Medical Devices Containing Nanostructured Materials. Trends in Biotechnology 2019, 37 (2), 117–120.
10. Stevens AM; Smith AC; Marbach-Ad G; Balcom SA; Buchner J; Daniel SL; DeStefano JJ; El-Sayed NM; Frauwirth K; Lee VT; McIver KS; Melville SB; Mosser DM; Popham DL; Scharf BE; Schubot FD; Seyler RW Jr.; Shields PA; Song W; Stein DC; Stewart RC; Thompson KV; Yang Z; Yarwood SA, Using a Concept Inventory to Reveal Student Thinking Associated with Common Misconceptions about Antibiotic Resistance. J Microbiol Biol Educ 2017, 18 (1), 18.1.10.
11. Krull IS; Swartz M, Analytical Method Development and Validation for the Academic Researcher. Analytical Letters 1999, 32 (6), 1067–1080.
12. Head ML; Holman L; Lanfear R; Kahn AT; Jennions MD, The extent and consequences of p-hacking in science. PLoS Biology 2015, 13 (3), e1002106.
13. Halamoda-Kenzaoui B; Vandebriel RJ; Howarth A; Siccardi M; David CAW; Liptrott NJ; Santin M; Borgos SE; Bremer-Hoffmann S; Caputo F, Methodological needs in the quality and safety characterisation of nanotechnology-based health products: Priorities for method development and standardisation. Journal of Controlled Release 2021, 336, 192–206.
14. Hackley V; Clogston J, NIST - NCL Joint Assay Protocol, PCC-1. 2020.
15. Foulkes R; Man E; Thind J; Yeung S; Joy A; Hoskins C, The regulation of nanomaterials and nanomedicines for clinical application: current and future perspectives. Biomaterials Science 2020, 8 (17), 4653–4664.
16. Mahmoudi M, A Healthier Peer Review Process Would Improve Diversity. ACS Applied Materials & Interfaces 2020, 12 (37), 40987–40989.
17. Bhattacharjee S, Nanomedicine literature: the vicious cycle of reproducing the irreproducible. International Journal of Pharmacokinetics 2017, 2 (1), 15–19.
18. Luxenhofer R, Polymers and nanomedicine: considerations on variability and reproducibility when combining complex systems. Nanomedicine 2015, 10 (20), 3109–3119.
19. McClain SM; Ojoawo AM; Lin W; Rienstra CM; Murphy CJ, Interaction of alpha-synuclein and its mutants with rigid lipid vesicle mimics of varying surface curvature. ACS Nano 2020, 14 (8), 10153–10167.
20. Mahmoudi M, The need for robust characterization of nanomaterials for nanomedicine applications. Nature Communications 2021, 12, 5246.
21. Ashkarran AA; Gharibi H; Voke E; Landry MP; Saei AA; Mahmoudi M, Measurements of heterogeneity in proteomics analysis of the nanoparticle protein corona across core facilities. Nature Communications 2022, 13 (1), 6610.
22. Jasny BR; Wigginton N; McNutt M; Bubela T; Buck S; Cook-Deegan R; Gardner T; Hanson B; Hustad C; Kiermer V; Lazer D; Lupia A; Manrai A; McConnell L; Noonan K; Phimister E; Simon B; Strandburg K; Summers Z; Watts D, Fostering reproducibility in industry-academia research. Science 2017, 357 (6353), 759–761.
23. Paradise J, Regulating Nanomedicine at the Food and Drug Administration. AMA Journal of Ethics 2019, 21 (4), E347–355.
24. Boutillier S; Fourmentin S; Laperche B, History of titanium dioxide regulation as a food additive: a review. Environmental Chemistry Letters 2022, 20 (2), 1017–1033.
25. Halamoda-Kenzaoui B; Baconnier S; Bastogne T; Bazile D; Boisseau P; Borchard G; Borgos SE; Calzolai L; Cederbrant K; Di Felice G; Di Francesco T; Dobrovolskaia MA; Gaspar R; Gracia B; Hackley VA; Leyens L; Liptrott N; Park M; Patri A; Roebben G; Roesslein M; Thürmer R; Urbán P; Zuang V; Bremer-Hoffmann S, Bridging communities in the field of nanomedicine. Regulatory Toxicology and Pharmacology 2019, 106, 187–196.
26. Jena GB; Chavan S, Implementation of Good Laboratory Practices (GLP) in basic scientific research: Translating the concept beyond regulatory compliance. Regulatory Toxicology and Pharmacology 2017, 89, 20–25.
27. Gouveia BG; Rijo P; Gonçalo TS; Reis CP, Good manufacturing practices for medicinal products for human use. Journal of Pharmacy & Bioallied Sciences 2015, 7 (2), 87–96.
28. Honsa JD; McIntyre DA, ISO 17025: Practical Benefits of Implementing a Quality System. Journal of AOAC International 2019, 86 (5), 1038–1044.
29. Abuhav I, ISO 13485:2016: A Complete Guide to Quality Management in the Medical Device Industry, 2nd ed.; CRC Press: Boca Raton, 2018.
30. Lincoln JE, Overview of the US FDA GMPs: Good Manufacturing Practice (GMP)/Quality System (QS) Regulation (21 CFR Part 820). Journal of Validation Technology 2012, 18 (3), 17–22.
