Abstract
Validation of analytical methods to assess figures of merit and other key performance parameters is a fundamental requirement within the fitness-for-purpose concept. By combining generative AI and subject matter review, this perspective article provides insights into analytical trends, technological advancements, and the current state of analytical reporting with respect to validation of published pesticide residue methods involving mass spectrometry in agricultural applications. Reporting trends of analytical parameters and technological advancements were evaluated across a data set of 391 studies published in the Journal of Agricultural and Food Chemistry from 1970 to 2024. This feasibility study demonstrated that with properly optimized prompts and performance verification, AI can efficiently and accurately evaluate scientific literature.
Keywords: pesticide residue, analytical method validation, artificial intelligence, scientific literature evaluation, AI in analytical chemistry, AI assisted data extraction, AI-driven literature review

Introduction
Validation of an analytical method provides evidence that, when correctly applied, it produces results that are fit for purpose, thereby demonstrating the effectiveness of the method with an acceptable degree of certainty. Method validation is important for pesticide residue analysis as it supports the reliability and accuracy of results, allowing for the effective monitoring of pesticide residues. This validation process helps to safeguard public health and supports compliance with safety standards, thereby promoting consumer confidence in food safety.
The importance of a well-designed and properly executed analytical method validation is highlighted by the availability of numerous practical guides in the peer-reviewed literature. Additionally, guidelines from international nongovernmental organizations, including AOAC International and IUPAC, provide valuable information to support pesticide residue analysis. A common theme in essentially all method validation guidelines is that the key criteria for evaluating the performance of analytical methods include acceptable selectivity, trueness, precision, linearity, range, and limit of detection/quantification (LOD/LOQ).
Many pesticide residue methods are validated in a regulated environment, such as under good laboratory practice guidelines, to support risk assessments and post-approval control or monitoring studies. Beyond the best practices outlined in the scientific literature or by nongovernmental organizations, regulatory guidelines provide scientists with a clear framework for demonstrating analytical method performance. Overall, these guidelines align with the best practices suggested in the literature but are more prescriptive regarding additional analytical performance parameters. These include evaluation of matrix effects, method robustness, interlaboratory testing, and storage stability. Specific procedures, such as quality control (QC) requirements, the number of replicates, criteria for acceptable results, and representative analytes and matrices (or matrix groups), are also described. Methods validated for these purposes typically comply with one or more guidelines from regulatory agencies or policy organizations, including the Environmental Protection Agency, the European Commission, and the Organisation for Economic Co-operation and Development.
Although the significance of proper method validation to demonstrate the quality of the results is widely acknowledged, authors of peer-reviewed scientific publications are not obligated to adhere to established method validation procedures or to demonstrate the validity of their results unless explicitly requested by reviewers or editors. However, validated methods in analytical research are central for ensuring the quality and reliability of results, as proper method validation demonstrates that findings are accurate, reproducible, and trustworthy. This knowledge facilitates comparisons between studies, allowing researchers to assess the consistency of results across different investigations, which is vital for building a cohesive body of scientific knowledge. As technological advancements and regulatory requirements evolve, a solid understanding of current validation standards enables researchers to adapt their approaches and improve the quality of their work. Additionally, being well-versed in validated methods allows researchers to streamline their processes, saving time and resources while avoiding unnecessary pitfalls. Ultimately, this understanding empowers researchers to critically evaluate published reports, ensuring they make informed decisions based on the most reliable and relevant findings, thereby contributing to the integrity and advancement of the scientific field.
The goal of this work was to create an optimized artificial intelligence (AI)-driven data extraction workflow, validated by subject matter experts, to uncover trends in analytical method validation for pesticide residue studies utilizing mass spectrometry (MS). Specifically, the reporting of analytical performance parameters and the evolution of technology trends were assessed from manuscripts published in the Journal of Agricultural and Food Chemistry from 1970 to 2024. This systematic evaluation of publications aims to contribute to improved analytical reporting practices in the pesticide residue literature and to demonstrate the strengths and limitations of AI tools in this application.
AI in Analytical Chemistry
As highlighted in recent literature, AI can assist in data analysis in analytical chemistry by aiding in interpreting complex data sets, optimizing analytical methods, enhancing compound identification and quantification, and predicting properties to improve discovery. The current use of AI in analytical chemistry has been facilitated by a range of tools, including machine learning (ML), deep learning (DL), and artificial neural networks (ANNs). By integrating AI algorithms, researchers can effectively analyze large data sets generated from MS techniques, along with gas and liquid chromatography (GC-MS and LC-MS). This integration enables the identification and quantification of pesticide residues for various applications. Sample preparation procedures have been modernized using ML for simplified optimization of solid-phase extraction and ANNs for QuEChERS extraction. DL has also been used to predict retention times in chromatography for method optimization. Similarly, LC quantitative structure–retention relationship models, based on molecular descriptors, have used deep learning to correlate molecular structures with retention times, aiding in method development and compound identification. Looking to the future, data extraction is another important function that may be added to the expanding AI toolbox.
Database Creation and Use of AI
A reference database of analytical performance metrics from the scientific literature was created by conducting a literature search, refining AI prompts, extracting relevant data using AI tools, and ensuring the integrity of the output through a QC evaluation. The Supporting Information provides a more detailed description of the study process. In brief, a topic-based Web of Science (WoS) search of J. Agric. Food Chem. was used to identify papers involving MS for pesticide residue analysis. This search identified papers in the target group of MS-based pesticide residue analysis, but it also included papers outside the target group, on topics such as veterinary drugs and metabolism. Therefore, the data set was curated through a combination of AI categorization and review by the authors to ensure accurate classification of the papers. This resulted in a final data set of 391 relevant studies that were defined as “Pesticide Residue by MS” (details of the search and curation appear in the Supporting Information). The AI-assisted curation process estimated the rate of false positives (i.e., off-topic papers included in the WoS search) but did not allow for assessment of the false negative rate (i.e., relevant papers that were not identified by WoS).
Data extraction from the scientific literature was conducted using AI tools based on the GPT-4 architecture. The GPT-4 model was deployed through a proprietary application hosted on the Amazon Web Services cloud computing platform (Seattle, WA). Accurate AI results were achieved through optimized prompts. As shown in Table S1, a total of 20 prompts were created for data extraction. These prompts were tested and refined using a subset of 25 papers, which involved fact-checking and iterative inquiry through sequential questioning.
In the current work, 19 iterative rounds of prompt refinement were performed. The first five rounds entailed a human learning and scoping exercise in which prompts were generated and refined based on the output of the AI tool. This phase included a set of prompts used as internal controls to identify obvious false positives and negatives, such as who won an actual or nonexistent Nobel Prize. Subsequent rounds involved further refinement of prompts and a QC step consisting of manual assessment to determine the accuracy of the AI output. Results were compared to previous prompt performance and accuracy targets. The last phase optimized the order of the prompts. This systematic approach highlights the strategic effort and time investment needed to achieve high AI performance.
AI Performance Verification
The accuracy of the AI was statistically estimated at a 95% confidence level with a ±3.5% margin of error by manually reviewing 96 papers from the precurated data set, as described in the Supporting Information. Discussions among the authors were held in advance to set shared standards, and then each paper was manually reviewed by two authors who responded to the same prompts posed to the AI. The reviewer responses were checked for concordance; if they agreed, their results were compared to the AI’s outputs. In cases of disagreement, the third author conducted an independent review followed by a group discussion to reach a consensus opinion. The human-derived answers were then compared to the AI outputs to assess the AI tool’s accuracy (Table 1). For the AI, 19 out of 20 prompts exceeded 90% accuracy, ranging from 92 to 100%; only the prompt on LOQ fell below this target, with an accuracy of 79% due to false positives. In the initial testing of this prompt on 25 papers, the AI met the target accuracy, but the authors believe that a larger training data set would not ultimately have improved the assessment of LOQ reporting, given the complex nature of this figure of merit and the lack of common standards in the analytical field.
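As a rough illustration of this sampling design, the margin of error for a proportion estimated from 96 of 391 papers can be sketched with the standard finite population correction. The assumed underlying accuracy (p = 0.95) is our choice for illustration, not a value stated in the text; with it, the margin comes out near the reported ±3.5%:

```python
import math

def margin_of_error(n, N, p=0.95, z=1.96):
    """Margin of error for a proportion estimated from a simple random
    sample of n papers drawn from a finite population of N papers,
    applying the finite population correction (FPC)."""
    se = math.sqrt(p * (1 - p) / n)      # standard error of the proportion
    fpc = math.sqrt((N - n) / (N - 1))   # finite population correction
    return z * se * fpc

# 96 papers manually reviewed out of the 391-paper data set
moe = margin_of_error(n=96, N=391, p=0.95)
print(f"margin of error: ±{moe:.1%}")
```

The exact assumed accuracy used in the study's power analysis is not stated here, so the computed value is only indicative of the order of magnitude.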
Table 1. Evaluation of AI Tool Accuracy through Quality Control and Expert Review.
Accuracy assessment of AI outputs compared to human error rates for each prompt, based on a quality control procedure involving manual review of 96 papers by two authors to achieve a 95% confidence level with a 3.5% margin of error. The reviewers’ concordant responses were compared to the AI outputs, with a third author resolving disagreements. 19 out of 20 prompts exceeded the targeted 90% accuracy, with one prompt at 79% due to false positives. The results indicate that AI performance is comparable to that of human reviewers, with human inaccuracies primarily due to typographical errors, missing text, and differing scientific opinions. The findings suggest that prompt content influences performance variability for both author reviews and AI classification.
Human accuracy was calculated based on the concordance rate between the two authors and was compared to the accuracy achieved by the AI for each prompt. For 12 prompts, human accuracy matched that of the AI, both falling within the 3.5% margin of error, indicating equivalent performance. For the remaining 8 prompts, the results were evenly split: humans outperformed the AI on four prompts, and the AI outperformed humans on the other four. This nearly human-level accuracy was accomplished far more efficiently: each reviewing author required about 20 min per paper to search for the desired information, whereas the AI result was generated nearly instantly, saving the equivalent of ≈130 h of uninterrupted human effort in this test case of 391 publications.
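The efficiency figure above is simple arithmetic: one reviewing author at roughly 20 min per paper across the full 391-paper data set.

```python
# Manual review effort that AI extraction replaces, per reviewing author
papers = 391             # publications in the curated data set
minutes_per_paper = 20   # approximate manual search time per paper

hours_saved = papers * minutes_per_paper / 60
print(f"~{hours_saved:.0f} h of uninterrupted review effort per reviewer")
```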
The reasons for human inaccuracies fell into three main categories: (1) data entry typographical errors, (2) overlooking data in manuscripts, and (3) differing scientific opinions. For example, in scenario 1, the authors, working in an Excel spreadsheet, either inadvertently recorded the wrong response in a given cell or recorded their response in an incorrect cell. In scenario 2, an author may have missed data in a manuscript that was contained in a figure caption or other reduced-visibility area, whereas their counterpart located this information. In scenario 3, misalignment between authors in the interpretation of a prompt's scope occurred due to prompt ambiguity. For example, does the prompt on stability refer only to the laboratory (e.g., matrix or solution storage stability), or does it also include field studies (e.g., soil dissipation)? Ultimately, the team concluded that only laboratory stability fell within scope. In comparison, the AI does not make errors in scenario 1, but in scenario 2 it can certainly overlook or misinterpret information, largely due to a lack of clarity by the publication authors. Experts can understand assumptions or implied information in experiments, whereas the AI had no experience or knowledge in analytical chemistry to help make decisions. The AI is also susceptible to scenario 3 if the intent of the prompt is not clear.
Interestingly, a trend emerged in which prompts with lower author accuracy correlated with decreased AI accuracy, suggesting that prompt content influences performance and interpretation variability for both reviewing authors and AI classification. Based on this experience, the authors consider that while AI clearly has the potential to significantly enhance data extraction efficiency, careful consideration of prompt design, along with evaluation of outputs by subject matter experts, is essential to mitigate potential pitfalls and maximize effectiveness. As AI becomes more prominent in reviewing the literature, future developers may not devise and validate prompts as carefully as was done in this study. Publication authors, reviewers, and editors should seek to improve clarity for better interpretation of their work by both AI and humans.
Historical Trends in Pesticide Residue Analysis
The analysis of pesticide residues has significantly evolved over time due to key innovations in MS technologies. In 2018, Lehotay and Chen conducted an extensive literature review and reported trends in many applications and technologies involved in food safety analysis. That review showed that from 1994 to 2017, the number of publications involving MS technologies grew from fewer than 100 to greater than 1000 per year within the food analysis search parameters. The Supporting Information provides an update of that critical review pertaining to the MS and validation topics of this perspective article. In summary, reports of food contaminant analysis have grown exponentially since 1990 to as many as 6000 per year, of which 10 and 31% involved validation and MS technologies, respectively, with up to 22% of the latter also entailing validation (see Figure S2).
In this work, historical trends were examined from 1990 to 2024 (the 1970s and 1980s were excluded due to insufficient studies within the time period), encompassing 374 MS-based pesticide residue studies in J. Agric. Food Chem. A closer look at the chromatography and ionization methods reported reveals a significant technological shift over the decades (see Figure 1 and details in Figure S3A,B). In the 1990s, the predominant techniques were the sole use of GC (56%) and electron ionization/chemical ionization (EI/CI) (73%), which declined to 7% for GC and 7% for EI/CI in the 2020–2024 period. Conversely, in the 1990s there was modest use of standalone (ultra)high-performance liquid chromatography (HPLC/UHPLC) (23%) and electrospray ionization/atmospheric-pressure chemical ionization (ESI/APCI) (15%), but their use has expanded to 89% for HPLC/UHPLC and 74% for ESI/APCI in recent years. Part of this expansion can be accounted for by the 2004 introduction of UHPLC, whose rapid adoption led to 22% of studies utilizing it by the 2020–2024 period (see Figure S3A). The combined use of both GC and HPLC/UHPLC in the same study has held relatively steady over time, representing 20–30% of publications, but previous work demonstrated that MS-based detection increased greatly as a percentage of chromatographic methods.
Figure 1. Historical reporting trends in GC and its companion ionization techniques EI/CI (blue lines), HPLC/UHPLC and its companion ionization techniques ESI/APCI (green lines), and the combined use of GC and HPLC/UHPLC with their companion ionization techniques EI/CI and ESI/APCI (orange lines). Chromatography methods are shown as solid lines and ionization methods as dashed lines. A steady decline in the standalone use of GC and EI/CI (blue lines) is offset by a growing predominance of the sole use of HPLC/UHPLC and ESI/APCI (green lines); however, the combined use of GC and HPLC/UHPLC in the same study (orange lines) has held steady over time.
This contrasts with the adoption of supercritical fluid chromatography (SFC)-MS. Although modern SFC systems were introduced in 2012, providing nearly solventless systems as environmentally friendly alternatives to LC with superior separation of chiral compounds, SFC usage has remained relatively low at ≤7% (see Figure S3A). The rapid adoption of UHPLC may be attributed to its versatility, similarity in operation to HPLC, and broad applicability across various types of analyses.
The early 2000s witnessed the launch of benchtop high-resolution mass spectrometry (HRMS) instruments, which broadened access to this advanced technology. The use of HRMS has steadily increased, culminating in a peak when 35% of publications from 2015 to 2019 utilized this technology (see Figure S3C).
Method Validation Trends in Pesticide Residue Analysis
Analytical methods play a pivotal role in pesticide residue analysis, serving as the backbone for data generation in both application-based and method validation focused studies. While method validation focused papers are typically expected to provide detailed analytical performance parameters, application-centered studies, such as those involving field, greenhouse, or market basket analyses, may not consistently report analytical parameters. However, such studies often reference prior work to support the credibility of their findings. Papers focused on validation consistently list key analytical performance parameters, such as selectivity, accuracy, precision, linearity, range, and LODs/LOQs, more frequently than nonvalidation focused studies. Despite a similar need for analytical support in studies applying analytical methods, this application of AI found that many authors do not report performance metrics. In contrast, nonvalidation focused articles typically use analytical methods as a supporting component, leading to less emphasis on the critical evaluation of analytical methods. This trend underscores the need for greater diligence and standardized practices in reporting analytical parameters across the pesticide residue literature to enhance the reliability and reproducibility of findings.
The collection of 391 papers was analyzed for trends in overall reporting rates of 13 analytical parameters (see Supporting Information), as well as differences in reporting trends between method validation focused and nonvalidation focused papers, as classified by the AI output (see Table S2). Overall, 10 of the 13 analytical performance parameters were reported more often in validation focused papers than in nonvalidation focused papers, based on a chi-squared 2 × 2 test (p < 0.05). The three parameters that did not show a statistical difference were stability, false positives, and false negatives. In addition, the results show that the parameters that are broadly recognized as essential to method validation (e.g., accuracy, precision, LOQ, LOD, and linear range) are more frequently reported than the remaining parameters (e.g., stability, matrix effects).
Further investigation into the reporting trends of analytical performance parameters was conducted by examining the prevalence of studies that evaluated matrix effects and strategies to manage them (i.e., matrix-matched standards or internal/surrogate standards) in both validation focused and nonvalidation focused papers (see Table S2). Method validation papers reported evaluation of matrix effects (52 vs 20%, p < 0.05) and use of matrix-matched standards (62 vs 36%, p < 0.05) significantly more often than nonvalidation papers. In contrast, the use of internal or surrogate standards was reported similarly in both categories (44 vs 45%). The increased use of matrix-matched standards in method validation studies, particularly those that formally assess matrix effects, is consistent with their importance in the process of evaluating matrix effects. Manual review revealed that many papers incorporated matrix-matched standards during method development, though their application in the final validation was not always necessary because matrix effects were successfully compensated for through characterization or isotopically labeled internal standards.
The contrast in the reporting of analytical performance parameters between validation focused and nonvalidation focused papers was further explored through the reporting rate of a core set of analytical parameters for sensitivity (either LOD or LOQ), selectivity (either by diagnostic ion or HRMS), calibration (linearity and linear range), and recovery (accuracy and precision), as detailed in Figure 2 and Table S3. In validation focused papers, 71% reported all 6 criteria, while 85% reported five or more parameters, and none failed to report any of the parameters. In contrast, among nonvalidation focused studies, 30% of papers reported all 6 parameters, 46% reported five or more, and 3% reported none of the parameters. The increased reporting rate in validation focused papers was statistically significant at all levels tested by the chi-squared test (3, 4, 5, or 6 criteria; p < 0.05).
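The chi-squared 2 × 2 comparison used above can be sketched as follows. The cell counts are reconstructed from the reported percentages (71% of 278 validation papers and 30% of 113 nonvalidation papers reporting all six parameters) and are therefore approximate, for illustration only:

```python
def chi_squared_2x2(a, b, c, d):
    """Pearson chi-squared statistic (no Yates correction) for the
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Papers reporting all 6 core parameters, reconstructed from percentages
reported_v, total_v = 197, 278   # validation focused (~71%)
reported_n, total_n = 34, 113    # nonvalidation focused (~30%)

chi2 = chi_squared_2x2(reported_v, total_v - reported_v,
                       reported_n, total_n - reported_n)
# Compare against the critical value for p = 0.05 with 1 degree of freedom
print(f"chi-squared = {chi2:.1f} (critical value 3.84)")
```

With these reconstructed counts the statistic is far above the 3.84 critical value, consistent with the significance reported in the text.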
Figure 2. Core set of analytical performance metrics, typically used to assess an analytical method’s sensitivity, calibration, selectivity, and recovery, examined for the frequency with which they are reported together within a single manuscript. The six analytical figures of merit include: (1) sensitivity by either LOD or LOQ, (2) selectivity using either HRMS or a diagnostic ion, (3) linearity of the calibration curve, (4) range of the calibration curve, (5) accuracy, and (6) precision. The figure details how often these six parameters appear together in the same study, highlighting differences in reporting between validation focused (N = 278) and nonvalidation focused papers (N = 113).
A total of 57 papers (39 validation and 18 nonvalidation) reported 5 out of the 6 analytical performance parameters. An evaluation was undertaken to determine the frequency of the missing parameter. Among these, 53% of papers failed to report the linearity of the calibration curve, 25% did not demonstrate typically needed levels of selectivity by use of either HRMS or a diagnostic ion, 16% omitted the sensitivity information on either the LOD or the LOQ, 5% neglected to report the range of the calibration curve, and 2% lacked details on recovery precision. Since linearity of the calibration curve was the most frequently omitted parameter, the reporting of this parameter was assessed for historical trends from 1990 to 2024 (see Figure S3D). Reporting rates for linearity increased from 55 to 90% for validation focused papers and from 37 to 70% for nonvalidation focused studies.
Reporting trends for the core analytical parameters were further evaluated by examining them in pairs of assessments that are typically used in conjunction to support an analytical objective (see Table 2A–D). These objectives focus on method sensitivity (LOD and LOQ), calibration curve performance (linearity and dynamic range), matrix recovery performance (precision and accuracy), and selectivity (diagnostic ion and HRMS), respectively. Among validation focused papers, the majority reported key analytical parameters: 92% reported at least one sensitivity parameter (72% reported both), 88% reported at least one calibration curve parameter (77% reported both), and 97% reported at least one recovery parameter (95% reported both). For selectivity, 93% used a diagnostic ion, and 14% used both a diagnostic ion and HRMS. In contrast, nonvalidation focused papers reported these parameters at lower rates: 76% for at least one sensitivity parameter (59% reported both), 63% for at least one calibration curve parameter (45% reported both), and 72% for at least one recovery parameter (59% reported both). For selectivity, 80% reported a diagnostic ion, and 19% reported both a diagnostic ion and HRMS. Overall, validation focused papers consistently showed higher reporting rates, 16–24% higher, for all key analytical parameters compared to nonvalidation focused studies.
Table 2. Core Method Validation Assessments Grouped in Pairs by the Validation Performance Metrics That the Assessments Are Typically Used to Support.
This table summarizes the reporting frequency of: (A) sensitivity assessments (LOD and LOQ), (B) calibration curve assessments (linearity and dynamic range), (C) recovery assessments (precision and accuracy), and (D) selectivity assessments (diagnostic ion and HRMS). The data, including studies from 1970 to 2024, highlight the differences in reporting rates between validation focused (N = 278) and nonvalidation focused papers (N = 113), demonstrating higher reporting frequencies in validation focused studies.
An additional layer of analysis was performed to assess the root causes for manuscripts that reported neither parameter within a single category (sensitivity, calibration curve performance, or matrix recovery performance) through manual review of the papers (see Table 2A–C). Three main root causes were identified: (1) parameters were not applicable to the study; (2) parameters were unclearly or indirectly reported; or (3) parameters were applicable to the study but were not reported. For the 76 papers that did not report either calibration curve performance parameter, 51% fell into category 1 (outside of study design), while 12% were in category 2 (indirect reporting), and 41% were in category 3 (not reported). While some omissions of analytical parameters are scientifically justified, the gaps in reporting identified in categories 2 and 3 could lead to misinterpretation of results or challenges in reproducing results. This underscores the importance of thorough analytical parameter reporting in both method validation focused and nonvalidation focused papers.
Remarks and Future Prospects
Analytical parameters including sensitivity, selectivity, accuracy, precision, linearity, and LODs/LOQs are reported more frequently in papers focused on method validation than in those with other purposes. More in-depth analysis of pesticide residue publications revealed significant disparities in the reporting of key analytical performance parameters, with validation focused manuscripts consistently providing more detailed information about method performance. This literature review of MS technology in J. Agric. Food Chem. demonstrated that historical trends generally shifted from traditional GC to UHPLC and, most recently, HRMS.
The application of AI facilitated the data extraction and enhanced the overall analysis of reporting trends in the pesticide residue literature. This application served as a test case to demonstrate the process and capability of generative AI tools in analytical chemistry, ultimately leading to accuracies comparable to expert review. However, like any other analytical tool, AI should be evaluated and validated to mitigate potential pitfalls and maximize effectiveness prior to implementation. The use of AI involves several key considerations. First, just as human experts can miss details, AI can also make mistakes due to unclear underlying data, difficult decision-making, or programming limitations. This emphasizes the importance of careful review of AI-generated data. Furthermore, while AI does not possess motives or inherent biases, it can reflect the biases within the training data set. Therefore, it is important to exercise caution and critically evaluate AI outputs rather than accepting them unconditionally.
AI represents a potential transformative technology in the field of analytical chemistry. AI models already add efficiency to many aspects of analytical chemistry, including chromatography optimization and spectral deconvolution. Looking to the future, the integration of AI may further enhance analytical workflows by streamlining data extraction and optimizing method development. AI tools can assist in interpreting complex data sets and improving accuracy, potentially lowering barriers to adopting advanced methodologies. Overall, the combination of improved reporting practices and AI integration promises to drive innovation and enhance the reliability of pesticide residue analysis, addressing both safety concerns and the need for transparency in the agricultural sector.
Supplementary Material
Acknowledgments
The authors would like to express their sincere gratitude to Adnan Mustafic and Ismael M. Rodea Palomares for their invaluable support in artificial intelligence applications. We also extend our appreciation to Huizhe (Jessie) Jin for her expert statistical consultation, including guidance on power analysis. Additionally, we thank Yan Li, Peter Fischer and Robyn Moten for their assistance with the Web of Science searches and related tasks, which were essential to the success of this research.
The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acs.jafc.5c04574.
Methods utilized in the WoS search, database curation, AI deployment, AI performance verification, and historical trend analysis, as well as data tables and figures that provide additional information supporting the discussion in the manuscript (PDF)
Mention of brand or firm name does not constitute an endorsement by the U.S. Department of Agriculture above others of a similar nature not mentioned.
The authors declare the following competing financial interest(s): LSR and JS are employees of Bayer CropScience Company a developer of crop protection technologies and crop protection products.
References
- Lehotay S. J., Chen Y. Hits and misses in research trends to monitor contaminants in foods. Anal. Bioanal. Chem. 2018;410:5331–5351. doi: 10.1007/s00216-018-1195-3.
- Vicini J. L., Jensen P. K., Young B. M., Swarthout J. T. Residues of glyphosate in food and dietary exposure. Compr. Rev. Food Sci. Food Saf. 2021;20(5):5226–5257. doi: 10.1111/1541-4337.12822.
- Reeves W. R., McGuire M. K., Stokes M., Vicini J. L. Assessing the safety of pesticides in food: How current regulations protect human health. Adv. Nutr. 2019;10(1):80–88. doi: 10.1093/advances/nmy061.
- Winter C. K., Jara E. A. Pesticide food safety standards as companions to tolerances and maximum residue limits. J. Integr. Agric. 2015;14(11):2358–2364. doi: 10.1016/S2095-3119(15)61117-0.
- Ruiz-Angel M. J., García-Alvarez-Coque M. C., Berthod A., Carda-Broch S. Are analysts doing method validation in liquid chromatography? J. Chromatogr. A. 2014;1353:2–9. doi: 10.1016/j.chroma.2014.05.052.
- Gustavo González A., Herrador M. Á. A practical guide to analytical method validation, including measurement uncertainty and accuracy profiles. TrAC, Trends Anal. Chem. 2007;26(3):227–238. doi: 10.1016/j.trac.2007.01.009.
- Green J. M. Peer reviewed: a practical guide to analytical method validation. Anal. Chem. 1996;68(9):305A–309A. doi: 10.1021/ac961912f.
- AOAC Guidelines for Single Laboratory Validation of Chemical Methods for Dietary Supplements and Botanicals; AOAC International, 2002.
- Guidelines for Standard Method Performance Requirements; AOAC International: Rockville, MD, 2016.
- Thompson M., Ellison S. L., Wood R. Harmonized guidelines for single-laboratory validation of methods of analysis (IUPAC Technical Report). Pure Appl. Chem. 2002;74(5):835–855. doi: 10.1351/pac200274050835.
- Raposo F., Ibelli-Bianco C. Performance parameters for analytical method validation: Controversies and discrepancies among numerous guidelines. TrAC, Trends Anal. Chem. 2020;129:115913. doi: 10.1016/j.trac.2020.115913.
- Environmental Protection Agency, Office of the Secretary. 40 CFR Part 160, 1989.
- Fajgelj A., Ambrus Á. Principles and Practices of Method Validation; Royal Society of Chemistry, 2000.
- U.S. Environmental Protection Agency. Residue Analytical Method, OPPTS 860.1340; EPA 712–C–96–174, 1996. https://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=P100G6S9.TXT.
- European Commission. Guidance Document on Pesticide Analytical Methods for Risk Assessment and Post-Approval Control and Monitoring Purposes, 2023.
- European Commission. Analytical Quality Control and Method Validation Procedures for Pesticide Residues Analysis in Food and Feed, 2024. https://food.ec.europa.eu/system/files/2023-11/pesticides_mrl_guidelines_wrkdoc_2021-11312.pdf.
- Guidance Document on Pesticide Residue Analytical Methods, Series on Pesticides and Biocides; OECD: Paris, 2017. doi: 10.1787/5900bd.
- Ayres L. B., Gomez F. J., Linton J. R., Silva M. F., Garcia C. D. Taking the leap between analytical chemistry and artificial intelligence: A tutorial review. Anal. Chim. Acta. 2021;1161:338403. doi: 10.1016/j.aca.2021.338403.
- Rial R. C. AI in analytical chemistry: Advancements, challenges, and future directions. Talanta. 2024;274:125949. doi: 10.1016/j.talanta.2024.125949.
- Yi L., Wang W., Diao Y., Yi S., Shang Y., Ren D., Ge K., Gu Y. Recent advances of artificial intelligence in quantitative analysis of food quality and safety indicators: A review. TrAC, Trends Anal. Chem. 2024;180:117944. doi: 10.1016/j.trac.2024.117944.
- Salamat Q., Gumus Z. P., Soylak M. Recent developments and applications of artificial intelligence in solid/liquid extraction studies. TrAC, Trends Anal. Chem. 2025;182:118057. doi: 10.1016/j.trac.2024.118057.
- Wei Q., Lv M., Wang B., Sun J., Wang D. A comparative study of optimized conditions of QuEChERS to determine the pesticide multiresidues in Lycium barbarum using response surface methodology and genetic algorithm-artificial neural network. J. Food Compos. Anal. 2023;120:105356. doi: 10.1016/j.jfca.2023.105356.
- Belaid W. F., Dekhira A., Lesot P., Ferroukhi O. Development of deep learning software to improve HPLC and GC predictions using a new crown-ether based mesogenic stationary phase and beyond. J. Chromatogr. A. 2025;1739:465476. doi: 10.1016/j.chroma.2024.465476.
- Héberger K. Quantitative structure–(chromatographic) retention relationships. J. Chromatogr. A. 2007;1158(1–2):273–305. doi: 10.1016/j.chroma.2007.03.108.
- Brown T., Mann B., Ryder N., Subbiah M., Kaplan J. D., Dhariwal P., Neelakantan A., Shyam P., Sastry G., Askell A. Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems; NeurIPS, 2020; pp 1877–1901.
- Bozkurt A. Tell Me Your Prompts and I Will Make Them True: The Alchemy of Prompt Engineering and Generative AI; International Council for Open and Distance Education: Oslo, Norway, 2024; Vol. 16, pp 111–118.
- Heller D. N., Lehotay S. J., Martos P. A., Hammack W., Fernández-Alba A. R. Issues in mass spectrometry between bench chemists and regulatory laboratory managers: Summary of the roundtable on mass spectrometry held at the 123rd AOAC International Annual Meeting. J. AOAC Int. 2010;93(5):1625–1632. doi: 10.1093/jaoac/93.5.1625.
- Fekete S., Schappler J., Veuthey J.-L., Guillarme D. Current and future trends in UHPLC. TrAC, Trends Anal. Chem. 2014;63:2–13. doi: 10.1016/j.trac.2014.08.007.
- Losacco G. L., Veuthey J.-L., Guillarme D. Supercritical fluid chromatography–mass spectrometry: Recent evolution and current trends. TrAC, Trends Anal. Chem. 2019;118:731–738. doi: 10.1016/j.trac.2019.07.005.
- Eliuk S., Makarov A. Evolution of Orbitrap mass spectrometry instrumentation. Annu. Rev. Anal. Chem. 2015;8(1):61–80. doi: 10.1146/annurev-anchem-071114-040325.