ACS Omega. 2026 Mar 30;11(14):22210–22219. doi: 10.1021/acsomega.5c13352

Design and Research of an Intelligent Digestion System Based on Machine Vision and Full-Process Control

Zhihui Jiang 1, Tiewei Shang 1,*, Linge Ma 1, Shuai Zhao 1, Rui Wang 1, Zixin Wang 1, Liwei Yao 2, Shangjie Yao 2
PMCID: PMC13084451  PMID: 42004391

Abstract

This research proposes an intelligent digestion system that integrates full-process automation with machine vision technology, aiming to overcome the limitations of operator-dependent procedures and subjective end point assessment in traditional digestion processes. The system employs collaborative robotic arms, electric grippers, and high-precision peristaltic pumps to enable fully automated acid addition, heating, and volume calibration operations, with integrated safety features including an acid mist absorption unit and real-time liquid level monitoring. The end point determination module combines a machine vision model with an optical turbidity detection unit to achieve dual-mode verification of digestion completion, where the turbidity measurement subsystem attains a precision of ±0.5 NTU (nephelometric turbidity units). This enables end point validation through analysis of liquid transmittance characteristics combined with visual feature recognition from real-time imaging. The system supports simultaneous batch processing of up to 24 samples and is equipped with an integrated acid vapor condensation recovery unit; upon completion of the digestion process, it automatically switches to standby mode. Comparative evaluation against conventional manual operation demonstrates that the automated system achieves equivalent completeness of digestion while significantly reducing processing time and reagent consumption. Quantitative analysis of characteristic elements (e.g., Al, Ca, Cu, Fe, Li, and Na) in the resulting digestates showed excellent agreement with the manual method, further validating the technical reliability and analytical consistency of the proposed system. In conclusion, the system demonstrates robust adaptability to end point detection requirements for complex matrices such as catalysts, providing a reliable smart automation solution for sample pretreatment in environmental monitoring and pharmaceutical applications.





0. Introduction

Sample pretreatment is a critical step in the analytical process within environmental testing and pharmaceutical fields, directly influencing detection efficiency and data reliability. Traditional digestion methods heavily rely on the operator’s manual experience, such as visually estimating the volume of acid added. Whether digestion is complete is entirely judged by the operator through visual observation, which leads to a high misjudgment rate and easily results in incomplete digestion or overoxidation. During the digestion process, acid vapors are often directly released without recovery devices, causing environmental pollution in the laboratory. Currently, there are already mature digestion instruments on the market, such as graphite digestion systems and microwave digestion systems. Although progress has been made in multichannel processing, programmed temperature control, and partial automation, the focus of improvements has primarily been on automated process control and optimization of thermal or microwave field uniformity. From the perspective of the core objectives of sample pretreatment, achieving complete digestion of the sample is crucial. However, whether for graphite digestion systems or microwave digestion systems, the determination of digestion end points still relies on operator experience, lacking intelligent and standardized analytical techniques based on liquid transmittance characteristics and image features. Furthermore, in terms of safety, most instruments have not yet integrated acid vapor condensation and recovery modules, leaving room for improvement in their environmental performance.

A computer vision system mainly consists of an optical imaging system, an image capture system, an image acquisition and digitization module, an intelligent processing and decision-making module, and a control execution module. In simple terms, the imaging, capture, and digitization modules acquire image features and digitize them, while the intelligent processing and decision-making module generates judgment results. Computer vision is now widely applied in industrial and agricultural fields, for example in deep learning-based image recognition and processing algorithms for power grid facilities and in deep learning-based pest monitoring automation systems for grain storage warehouses. Its application in analytical chemistry remains relatively limited. Capitán-Vallvey et al. reviewed the methods and systems of computer vision in analytical chemistry from 2005 to 2015, highlighting its potential in chemical applications. This paper presents a fully automated intelligent digestion system that builds on complete process automation and combines a computer vision model with optical detection results to establish a dual-mode end point determination mechanism. By incorporating image recognition and processing algorithms together with an optical turbidity detection module, the system achieves dual validation through analysis of liquid transmittance changes and image feature matching, enabling high-precision determination of the digestion end point. The system features a redesigned automation architecture based on a high-precision peristaltic pump and a temperature control module, enabling fully automatic acid addition, heating, volume fixation, and sealing. An environmentally friendly design is also incorporated, featuring an integrated acid vapor absorption device; combined with real-time liquid level monitoring, the system can automatically switch to standby mode to reduce energy consumption.

The intelligent digestion system operates according to the principle of wet digestion. Although it is inferior to microwave digestion in processing extreme samples, it possesses irreplaceable advantages in safety, process controllability, and adaptability to specific scenarios. Compared to microwave-assisted digestion systems, wet digestion has advantages in processing temperature-sensitive samples at moderate to low temperatures and in the pretreatment of volatile elements, especially in industries such as food and feed.

1. System Design

1.1. Overall Framework Design

The overall design of the intelligent digestion system is illustrated in Figure 1, and its overall operational logic is shown in Figure 2. It achieves full-process automated digestion through hardware modular integration and software model fusion. The execution layer employs a collaborative robotic arm, electric gripper, high-precision peristaltic pump, and graphite digestion heater to complete automatic acid addition, constant volume adjustment, and temperature-programmed heating–digestion–cooling cycle control, accompanied by an acid mist absorption device. The perception layer is equipped with an image acquisition module and an optical turbidity detection module. The decision layer establishes a neural network-based visual algorithm to analyze liquid color and bubble features for end point identification and integrates dual verification of turbidity and visual matching degree to address the limitations of single-parameter judgment, realizing fully closed-loop automated digestion from precise control to intelligent decision-making.

Figure 1. Schematic diagram of system design.

Figure 2. Smart digestion system workflow.

1.2. Key Module Design

As shown in Figure 3, the overall apparatus can be divided into 11 modules: acid gas absorption module, reagent module, visual judgment module, constant volume module, barcode reading module, heating module, optical judgment module, sealing module, sample temporary storage module, transfer module, and safety module. The functions and key components of each module are presented in Table 1.

Table 1. System Module Composition and Corresponding Module Functions.

number | key modules | critical components | implemented functions
1 | acid gas absorption module | alkali storage tank, corrosion-resistant suction pump | captures and absorbs highly corrosive acid mist
2 | reagent module | high-precision peristaltic pump, liquid storage bottle, liquid level sensor, corrosion-resistant liquid transfer line | precision storage, quantitative measurement, and controlled transfer of digestion reagents
3 | visual judgment module | industrial camera, light source, optical lens | real-time capture of sample morphological changes
4 | constant volume module | industrial camera, light source, liquid circuit switch valve | precision dispensing of solvents into digestion tubes
5 | barcode reading module | barcode/QR code scanner | automated identification of barcode/QR code tags on sample tubes or sample racks
6 | heating module | graphite block heater, thermocouple, temperature controller | provides a controllable and uniform thermal source for driving digestion reactions
7 | optical judgment module | customized optical turbidity sensor for characterization of suspended particulates | detection of optical characteristics of digestion solutions (transmittance at specific wavelengths)
8 | sealing module | three-finger gripper | clamps and seals the sample tube
9 | sample temporary storage module | customized sample holder | designated storage zones for sample tubes in pending, in-process, and completed digestion status
10 | transfer module | robotic arm, electric gripper | automated precision transport of sample tubes between functional modules within the equipment
11 | safety module | multi-sensor system (temperature, liquid leak, door magnet, motor overload, etc.), warning light, emergency stop button | monitors equipment operational status and critical parameters, triggering protective measures upon abnormalities

The transfer module, as shown in Figure 3a, transfers sample tubes using a collaborative robotic arm equipped with an electric gripper. The collaborative robotic arm is a four-axis lightweight collaborative robot, and the electric gripper is a high-performance automated end-effector that integrates both rotational and gripping functions, supporting continuous 360° rotation and relative rotation modes. The reagent module, as shown in Figure 3b, is equipped with a high-precision peristaltic pump and reagent bottles made of various materials. The pump body is constructed from corrosion-resistant stainless steel and PTFE (Teflon), suitable for handling complex media such as acids, bases, and organic solvents. It supports multichannel independent acid addition with automatic switching among HCl, HNO3, H2SO4, and HF and integrates automatic liquid level sensing and fault warning functions. The visual judgment module, as shown in Figure 3c, is equipped with two industrial cameras for identifying solid contents within digestion tubes and liquid levels in the tubes. The system uses Hikvision industrial cameras with global shutter CCD sensors, enabling simultaneous identification of digestion progress and real-time liquid level monitoring. The vision module achieves a resolution of 1280 × 960 pixels, a frame rate of 30 fps, a dynamic range of 60 dB, and a signal-to-noise ratio (SNR) of 37 dB, and can capture high-detail images under both strong and weak lighting conditions. The optical judgment module is implemented through customized optical detection, namely a turbidity sensor. As shown in Figure 3d, the turbidity sensor and its supporting components form an optical module consisting of an 850 nm infrared LED light source and a silicon photodiode. The module has a nominal full-range dynamic detection range of 0–1000 NTU with a resolution of 0.1 NTU and a nominal accuracy of ±3% FS; for the actual narrow measurement range of 0–20 NTU used in this study's wet digestion experiments, the system achieved an absolute measurement accuracy of ±0.5 NTU after multipoint linear calibration, environmental error compensation, and parallel verification.
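The paper does not publish its calibration routine, but the multipoint linear calibration with environmental compensation described above can be sketched as follows. The standard concentrations, raw readings, and temperature coefficient here are illustrative assumptions, not the system's actual values:

```python
import numpy as np

# Hypothetical calibration: raw photodiode readings taken against
# turbidity standards spanning the 0-20 NTU working range (values assumed).
standards_ntu = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0])
raw_readings = np.array([0.02, 0.13, 0.24, 0.58, 1.15, 2.31])  # sensor volts

# Least-squares linear fit: NTU = a * raw + b
a, b = np.polyfit(raw_readings, standards_ntu, 1)

def to_ntu(raw, temp_c=25.0, ref_temp=25.0, temp_coeff=0.002):
    """Convert a raw reading to NTU with a simple ambient-temperature
    compensation term (the coefficient is illustrative, not from the paper)."""
    ntu = a * raw + b
    return ntu * (1.0 - temp_coeff * (temp_c - ref_temp))

residuals = to_ntu(raw_readings) - standards_ntu
print(f"max calibration residual: {np.abs(residuals).max():.2f} NTU")
```

In practice the residuals over the 0–20 NTU range would be checked against the ±0.5 NTU target claimed above, with parallel measurements used for verification.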

Figure 3. Key modules: (a) transfer module; (b) reagent module; (c) visual judgment module; (d) optical judgment module.

2. Integrated Models in the System

2.1. Liquid Volume Quantification Detection Model

The intelligent digestion system achieves dynamic and precise volume control, with liquid volume quantification realized via a two-stage target localization algorithm based on an improved YOLO11s architecture. This algorithm adopts a cascaded detection network: the first stage localizes the test tube body through high-precision bounding box regression, and the second stage incorporates a pixel-level liquid surface edge segmentation module combined with a morphological gradient optimization algorithm to achieve subpixel localization of the liquid–air interface. A geometric spatial mapping model is subsequently established, which defines the nonlinear mapping relationship between the image coordinate system and physical space based on a preset calibration matrix. Perspective projection correction is applied to eliminate optical distortion, yielding a conversion equation between pixel height H_pixel and actual physical height H_real (eq 1):

H_real = α × H_pixel + β × ln(1 + H_pixel/γ)   (1)

The parameters α, β, and γ in eq 1 are obtained through the Levenberg–Marquardt algorithm by fitting calibration data.

All test tubes used in the experiment are cylindrical. Given the radius r of the test tube, the actual volume can be calculated using eq 2:

V = π r² × H_real   (2)
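As a concrete illustration of eqs 1 and 2, the sketch below fits the calibration parameters with SciPy's Levenberg–Marquardt optimizer (method="lm" in curve_fit, matching the algorithm named above) and converts a pixel height to a volume. The calibration pairs, initial guesses, and tube radius are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def pixel_to_height(h_pixel, alpha, beta, gamma):
    """Eq 1: H_real = alpha*H_pixel + beta*ln(1 + H_pixel/gamma)."""
    return alpha * h_pixel + beta * np.log(1.0 + h_pixel / gamma)

# Hypothetical calibration pairs: detected liquid-surface pixel heights
# vs. true liquid heights (mm) measured on reference fills.
h_pix = np.array([50.0, 120.0, 200.0, 310.0, 420.0, 530.0])
h_mm = np.array([8.5, 19.8, 32.5, 49.7, 66.7, 83.6])

# method="lm" selects the Levenberg-Marquardt algorithm named in the paper
(alpha, beta, gamma), _ = curve_fit(pixel_to_height, h_pix, h_mm,
                                    p0=(0.1, 1.0, 50.0), method="lm")

def tube_volume_ml(h_pixel, radius_mm):
    """Eq 2: V = pi * r^2 * H_real, converted from mm^3 to mL."""
    h_real = pixel_to_height(h_pixel, alpha, beta, gamma)
    return np.pi * radius_mm ** 2 * h_real / 1000.0

print(f"estimated volume: {tube_volume_ml(365.0, radius_mm=12.5):.1f} mL")
```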

2.2. Deep Learning-Based Digestion Status Assessment Model

2.2.1. Overall Model Design

The intelligent digestion system integrates a machine vision-based digestion status assessment model to achieve high-precision, real-time, automated discrimination of sample digestion status. YOLO11s is selected as the backbone network, and the model is further optimized through dynamic learning rate scheduling, adaptive regularization, an early stopping mechanism, and integration of Focaler-IoU series loss functions. These task-specific optimizations of YOLO11s and the model's input/output tensor details are illustrated in Figure 4. Specifically, the model input tensor is 640 × 640 × 3 (three-channel color images of the wet digestion process), and the output tensor is N × 6 (N is the number of detected targets; the six values comprise four bounding box coordinates, one confidence score, and one classification probability for digestion status-related features).

Figure 4. Model framework flowchart. Note: the model input is 640 × 640 × 3 (three-channel color images of the wet digestion process), and the output tensor is N × 6 (N is the number of detected targets; the six values comprise four bounding box coordinates, one confidence score, and one classification probability for digestion status-related features). For the digestion status assessment task, the optimizations of YOLO11s include dynamic learning rate scheduling, adaptive regularization, an early stopping mechanism, and integration of Focaler-IoU series loss functions.

Under laboratory conditions, images of the wet digestion process are collected, covering various states such as undigested, partially digested, and fully digested samples. These images are annotated using LabelImg/Roboflow annotation tools, focusing on marking features like precipitates, suspended particles, and bubbles. To improve the model’s generalization ability, original images are subjected to augmentation processes, including cropping, rotation, brightness adjustment, color temperature adjustment, blurring, and noise addition.
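The augmentation pipeline is not specified beyond this list of operations, but a minimal OpenCV sketch of those operations might look like the following; the parameter ranges and input file name are assumptions, and the color-temperature shift is approximated by rescaling the blue/red channels:

```python
import cv2
import numpy as np

def augment(img, rng):
    """Apply the augmentations listed above (crop, rotation, brightness,
    color temperature, blur, noise) to one BGR image."""
    h, w = img.shape[:2]
    # random crop to 90% of the frame
    y0, x0 = rng.integers(0, h // 10), rng.integers(0, w // 10)
    img = img[y0:y0 + int(h * 0.9), x0:x0 + int(w * 0.9)]
    # small random rotation about the image center
    m = cv2.getRotationMatrix2D((img.shape[1] / 2, img.shape[0] / 2),
                                rng.uniform(-10, 10), 1.0)
    img = cv2.warpAffine(img, m, (img.shape[1], img.shape[0]))
    # brightness/contrast adjustment
    img = cv2.convertScaleAbs(img, alpha=rng.uniform(0.8, 1.2),
                              beta=rng.uniform(-20, 20))
    # crude color-temperature shift: scale blue and red channels
    img = img.astype(np.float32)
    img[..., 0] *= rng.uniform(0.95, 1.05)
    img[..., 2] *= rng.uniform(0.95, 1.05)
    # mild blur and Gaussian noise
    img = cv2.GaussianBlur(img, (3, 3), 0)
    img += rng.normal(0, 3.0, img.shape)
    return np.clip(img, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
frame = cv2.imread("tube_0001.jpg")  # hypothetical captured frame
augmented = [augment(frame, rng) for _ in range(8)]
```

Note that geometric augmentations (crop, rotation) must also transform the bounding-box annotations; annotation tools such as Roboflow handle this bookkeeping automatically.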

2.2.2. Key Modules of the Model

The model primarily consists of four key modules: image acquisition, deep learning processing, digestion degree assessment, and result output. The main components and their functions are summarized in Table 2. The image acquisition module captures real-time images and transmits them, after preprocessing, to the deep learning processing module. The deep learning processing module analyzes the images and outputs detection results, including bounding box coordinates, confidence scores, and class labels. The digestion degree assessment module determines the current digestion status based on the detection results and incorporates historical data for temporal analysis; a minimal sketch of this decision flow follows Table 2. The result output module visualizes the assessment results and stores the associated data.

Table 2. Modules of the Deep Learning-Based Digestion State Judgment Model.

1 image acquisition module | industrial HD camera | 1080p resolution industrial camera, 30 fps frame rate, supports night vision mode
  | light source control system | equipped with multiple LED light sources to ensure clear visibility of test tube contents
  | image preprocessing | noise removal, illumination equalization, geometric correction
  | data buffer unit | temporarily stores acquired image sequences, supports retrospective analysis
  | multiangle acquisition | robotic arm rotates the test tube to capture images from multiple angles
2 deep learning processing module | model loading module | loads pretrained YOLOv11 model weights
  | feature extraction network | enhanced C3K2 backbone network extracts multilevel features from images
  | feature fusion network | SPFF module specializing in multiscale feature fusion
  | detection head network | C2PSA module generating final detection outputs
  | inference acceleration unit | supports TensorRT/ONNX optimization
3 decision-making module | result parsing unit | decodes model outputs to extract bounding boxes, confidence scores, and class labels
  | state classification unit | resolves digestion states based on detection results
  | timing analysis unit | analyzes consecutive multiframe detection results to enhance judgment stability
  | adaptive threshold unit | dynamically adjusts judgment thresholds based on historical data
4 output module | visualization display unit | real-time display of detection results and digestion status
  | data record unit | records key data and status changes during the digestion process
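A minimal sketch of how the decision-making module's result parsing and timing-analysis units could work together, given the N × 6 output tensor described in Section 2.2.1. The class index, confidence threshold, window length, and consistency ratio are illustrative assumptions:

```python
import numpy as np
from collections import deque

class DigestionStateJudge:
    """Parse the N x 6 detection tensor (x1, y1, x2, y2, confidence, class)
    and require a run of consistent frames before declaring a state,
    mirroring the timing-analysis unit. Thresholds are illustrative."""

    def __init__(self, conf_thresh=0.5, window=10, required_ratio=0.8):
        self.conf_thresh = conf_thresh
        self.history = deque(maxlen=window)
        self.required_ratio = required_ratio

    def update(self, detections):
        """detections: ndarray of shape (N, 6) from the vision model."""
        det = np.asarray(detections, dtype=float).reshape(-1, 6)
        # sediment present if any confident detection of the sediment class (0)
        sediment = bool(np.any((det[:, 4] >= self.conf_thresh)
                               & (det[:, 5] == 0)))
        self.history.append(sediment)
        # declare "digested" only when most frames in the window are clear
        if len(self.history) == self.history.maxlen:
            clear_ratio = 1.0 - sum(self.history) / len(self.history)
            if clear_ratio >= self.required_ratio:
                return "digested"
        return "in_progress"
```

Requiring agreement across a sliding window of frames is one way to realize the stability-enhancing temporal analysis the table describes; the adaptive threshold unit could further adjust `conf_thresh` from historical data.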

2.2.3. Training Environment

The system hardware configuration and other training environment parameters are summarized in Table 3.

Table 3. Model Training Parameters.
hardware environment | GPU: NVIDIA 4080TI (16 GB) | CPU: Intel i7-13700KF | memory: 64 GB
training parameters | batch size: 16 | iterations: 200 epochs | input resolution: 640 × 640
evaluation metrics | mAP50, mAP50–95 | recall | precision
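For reference, a training run with these parameters could be expressed through the Ultralytics API as below. The dataset manifest name and patience value are assumptions, and the paper's Focaler-IoU loss integration would require a custom loss function not shown here:

```python
from ultralytics import YOLO

# Load the YOLO11s backbone named in the paper; "digestion.yaml" is a
# placeholder name for a YOLO-format dataset manifest.
model = YOLO("yolo11s.pt")

results = model.train(
    data="digestion.yaml",   # paths to train/val splits and class names
    epochs=200,              # Table 3: 200 epochs
    imgsz=640,               # Table 3: 640 x 640 input resolution
    batch=16,                # Table 3: batch size 16
    patience=30,             # early stopping, as described in Sec. 2.2.1
    cos_lr=True,             # a dynamic (cosine) learning-rate schedule
)

metrics = model.val()        # reports precision, recall, mAP50, mAP50-95
```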

2.2.4. Model Training

Five rounds of optimized training were conducted, and the results are summarized in Table 4. As the sample data set expanded, the model incorporated various annotation strategies during training, with the holistic annotation approach ultimately demonstrating optimal performance and highest efficiency. The iterative annotation optimization for precipitate detection in test tubes revealed a strong dependence of performance on annotation strategy. Initial attempts (session 1) employing localized bottom-area annotations (Figure 5a) yielded suboptimal metrics across all evaluation criteria, particularly mAP50–95 (43.1%). Transitioning to polygonal contour annotations (session 2) improved precision, recall, and mAP50 by 9.9, 14.4, and 14.6 percentage points, respectively, though mAP50–95 remained constrained. Session 3 introduced data set expansion through white/yellow-black precipitate specimens and bubble-only negative controls, though the added complexity caused performance fluctuations (mAP50 dropped from 81.2% to 73.2%) that necessitated methodological refinement. The pivotal breakthrough came in session 4 with the adoption of holistic annotation: uniform bounding rectangles encompassing entire bottom regions (Figure 5c), eliminating size/morphology biases while retaining precipitate presence as the definitive positive criterion.

Table 4. Digestion Model Training.
no. | training set | validation set | test set | annotation method | precision/% | recall/% | mAP50/% | mAP50–95/%
1 | 102 | 9 | 5 | test tube bottom region | 72.1 | 60.0 | 66.6 | 43.1
2 | 129 | 13 | 6 | particle boundaries | 82.0 | 74.4 | 81.2 | 43.6
3 | 660 | 65 | 31 | particles and bubbles | 76.3 | 68.0 | 73.2 | 43.2
4 | 984 | 96 | 46 | holistic annotation | 97.5 | 90.4 | 94.8 | 86.8
5 | 4446 | 425 | 211 | holistic annotation | 99.1 | 98.4 | 99.4 | 99.2
Figure 5. (a–c) Different annotation methods and (d, e) confusion matrix of the machine vision model.

Session 5 further optimized this framework through systematic data set augmentation (5082 images) capturing diverse dissolution states and substances. The augmented data set was randomly split into a training set (4446 images), validation set (425 images), and test set (211 images) with no sample overlap to avoid data leakage, and regularization and early stopping strategies were adopted to prevent overfitting. This optimization achieved unprecedented performance: precision of 99.1% ± 0.3, recall of 98.4% ± 0.5, mAP50 of 99.4% ± 0.2, and mAP50–95 of 99.2% ± 0.4. The confusion matrix of the model (Figure 5d,e) further quantifies this performance, showing that the model correctly identifies 99% of sediment samples with minimal misclassification between the sediment and background categories, providing direct evidence of its robustness. This trajectory validates the superiority of the holistic annotation paradigm in enhancing generalization capacity while maintaining detection accuracy across experimental variables, establishing a new benchmark for computer vision applications in chemical analysis.

2.2.5. Model Comparative Analysis

A comparative analysis with other models was conducted, and the results are presented in Table 5. YOLOv11s achieves an excellent balance between model size and performance. While models such as YOLO-NAS, Roboflow 3.0, and RF-DETR reach comparable levels of accuracy and recall, their model sizes are substantially larger, at 152.83, 100.5, and 325.64 MB, respectively. In contrast, the YOLOv11s model is only 18.2 MB, making it significantly more lightweight. At the same time, it achieves a precision of 99.1%, a recall of 98.4%, and an mAP@0.5 of 99.4%, demonstrating exceptional detection accuracy and robustness. YOLOv11s therefore maintains a compact architecture while achieving high precision and recall, with overall performance significantly superior to the other mainstream models.

Table 5. Model Comparison Results.
model | model size/MB | P/% | R/% | mAP@0.5/%
YOLOv8 | 5.95 | 90.5 | 91.8 | 76.4
YOLOv12 | 5.21 | 97.3 | 90.7 | 83.2
YOLO-NAS | 152.83 | 97.5 | 91.9 | 94.0
Roboflow 3.0 | 100.5 | 97.6 | 93.0 | 95.8
RF-DETR | 325.64 | – | – | 92.6
YOLOv11s | 18.2 | 99.1 | 98.4 | 99.4

2.3. Dual-Mode Fusion Mechanism

The dual-mode fusion framework enables algorithmic determination of digestion end points by synergizing optical turbidity sensing and visual analysis. The system integrates two complementary data streams: an optical turbidity module measures transmittance at 850 nm within the digestate, while an industrial camera captures high-resolution images of the reaction solution. A dedicated vision model extracts morphological features (transparency metrics and particulate distribution patterns) from the visual data, while the turbidity sensor provides quantitative absorption characteristics. A transmittance–visual weighted confidence mapping architecture dynamically allocates fusion weights based on modality-specific relevance, establishing a context-aware decision threshold. The mathematical formulations governing this fusion are defined in eqs 3 and 4, enabling robust end point detection through multimodal feature synergy.

C_f = α × f(C_v) + (1 − α) × (1 − |T(t) − T_std| / ΔT_max)   (3)

f(C_v) = { C_v,                      C_v ≤ 0.95
         { 0.95 + (C_v − 0.95) × β,  C_v > 0.95   (4)

C_f: fusion confidence (ranging from 0 to 1);

α: weight of the visual model;

C_v: confidence output by the visual model (ranging from 0 to 1);

T(t): real-time turbidity measurement;

T_std: reference turbidity value at complete digestion;

ΔT_max: maximum allowable deviation in turbidity.

The dual-mode fusion mechanism effectively reduces the overconfidence of end point determination systems that rely solely on visual or turbidity measurements. In this intelligent digestion framework, the machine vision model offers stronger discriminative ability through key morphological features, such as particle dissolution kinetics and chromatic homogeneity, while the optical turbidity sensor provides complementary quantitative absorption data. To improve decision robustness, the framework uses a weighted fusion architecture, with 70% of the confidence assigned to vision-based predictions (α = 0.7) and 30% to optical metrics. This allocation reflects the dominant role of image features in end point judgment, while the 30% weight on optical detection compensates for potential interference in visual detection, balancing perceptual accuracy against environmental interference. This adaptive weighting strategy offsets possible visual occlusions or lighting artifacts while maintaining strict end point criteria based on precipitate characteristics, forming a rigorous framework for automated digestion monitoring.

end point triggering: C_f ≥ γ × C_f^max

C_f^max: ideal-scenario maximum (when C_v = 1 and T(t) = T_std, C_f = α · 1 + (1 − α) · 1 = 1);

γ: confidence coefficient (suggested γ ∈ [0.95, 0.98]).
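A direct transcription of eqs 3 and 4 plus the triggering rule follows. β (the compression slope above C_v = 0.95) is not given a value in the paper, so 0.5 here is an assumed placeholder, and the zero clamp on the turbidity term is an implementation guard rather than part of eq 3:

```python
def fusion_confidence(c_v, t_now, t_std, dt_max, alpha=0.7, beta=0.5):
    """Eqs 3-4: fuse visual confidence with turbidity agreement."""
    # Eq 4: compress visual confidence above 0.95 to curb overconfidence
    f_cv = c_v if c_v <= 0.95 else 0.95 + (c_v - 0.95) * beta
    # Eq 3: turbidity term scores agreement with the reference value;
    # clamped at zero as a guard (an implementation choice, not in eq 3)
    turb_term = max(0.0, 1.0 - abs(t_now - t_std) / dt_max)
    return alpha * f_cv + (1.0 - alpha) * turb_term

def endpoint_reached(c_f, gamma=0.97, c_f_max=1.0):
    """Trigger rule: C_f >= gamma * C_f_max, with gamma in [0.95, 0.98]."""
    return c_f >= gamma * c_f_max

# Example: strong visual confidence but turbidity still slightly high
c_f = fusion_confidence(c_v=0.98, t_now=0.9, t_std=0.8, dt_max=2.0)
print(c_f, endpoint_reached(c_f))  # ~0.96, False: digestion continues
```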

3. Experimental Verification

3.1. Repeatability Positioning Experiment

Repeatability positioning accuracy (RPA), which characterizes a machine axis's capability to return to a previously attained position, was evaluated for the robotic arm at five mission-critical operational locations within the system: a specific digestion well in the heating module (P1), a position in the dilution module (P2), the vision judgment module position (P3), the optical detection module position (P4), and the sealing module position (P5). For each position, five independent repeatability tests were conducted along the X, Y, and Z axes, with the repeatability positioning accuracy of these key points calculated in accordance with the GB/T 12642-2013 standard, "Industrial Robot Performance Specification and Test Methods". The data variation for the five movements along the X, Y, and Z axes is illustrated in Figure 6. The RPA(X), RPA(Y), RPA(Z), and combined RPA(X,Y,Z) at the five positions were computed using eqs 5–8; taking the three-dimensional coordinates (X, Y, Z) as an example, the calculation proceeds as follows: first, the centroid coordinates (x̄, ȳ, z̄) are derived using eq 5; next, the spatial deviation L_i for each trial is calculated via eq 6; then, the sample standard deviation S is determined using eq 7; and finally, the comprehensive RPA(X,Y,Z) is computed using eq 8.

(x̄, ȳ, z̄) = Σ_{i=1}^{n} (x_i, y_i, z_i) / n   (5)

L_i = √((x_i − x̄)² + (y_i − ȳ)² + (z_i − z̄)²)   (6)

S = √(Σ_{i=1}^{n} (L_i − L̄)² / (n − 1))   (7)

RPA(X,Y,Z) = S × 3   (8)
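Eqs 5–8 reduce to a few lines of NumPy; the five repeated measurements below are hypothetical values used purely to exercise the function:

```python
import numpy as np

def rpa(points):
    """Eqs 5-8: repeatability positioning accuracy from n repeated
    (x, y, z) measurements at one taught position (GB/T 12642-2013)."""
    pts = np.asarray(points, dtype=float)       # shape (n, 3)
    centroid = pts.mean(axis=0)                 # eq 5
    l = np.linalg.norm(pts - centroid, axis=1)  # eq 6: spatial deviations
    s = np.sqrt(((l - l.mean()) ** 2).sum() / (len(l) - 1))  # eq 7
    return 3.0 * s                              # eq 8

# Hypothetical five repeated measurements (mm) at one position
p1 = [(0.001, -0.002, 0.000), (0.003, 0.001, -0.001),
      (-0.002, 0.002, 0.001), (0.000, -0.001, 0.002),
      (0.001, 0.000, -0.002)]
print(f"RPA(X,Y,Z) = {rpa(p1):.4f} mm")
```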

Figure 6. Repeatability accuracy experiment.

The calculated RPA values presented in Table 6 and Figure 6 demonstrate consistent performance across all spatial dimensions for the five critical measurement positions (P1–P5). Positional deviations along the individual X/Y/Z axes remained ≤0.022 mm, with composite RPA(X,Y,Z) values exhibiting system-wide precision of ≤0.0257 mm even at positions P4 and P5, the locations showing minor Y-axis fluctuations (±0.018 mm). This performance is within the 0.03 mm threshold required for high-precision industrial automation, confirming the system's capability to maintain submillimeter accuracy during multiaxis operations. The observed stability aligns with GB/T 12642-2013 specifications for robotic positioning systems, establishing robust operational reliability under continuous workflow conditions.

Table 6. Calculation Results of Repeated Positioning Accuracy (mm).

robot arm positioning points | RPA(X) | RPA(Y) | RPA(Z) | RPA(X,Y,Z)
P1 | 0.0161 | 0.0212 | 0.0080 | 0.0058
P2 | 0.0178 | 0.0098 | 0.0287 | 0.0120
P3 | 0.0203 | 0.0251 | 0.0332 | 0.0135
P4 | 0.0432 | 0.0294 | 0.0224 | 0.0159
P5 | 0.0227 | 0.0551 | 0.0433 | 0.0257

3.2. Digestion Experiment Validation

3.2.1. Experimental Design

The experimental design used six types of test samples with different matrix characteristics (Table 7), with the initial acid volumes and one-time addition method for each sample also detailed in Table 7. Each sample type included duplicate samples prepared under the same conditions. After precise gravimetric quantification, samples were randomly divided into two groups: an automated digestion system group (Automated Group) and a manual operation control group (Manual Group). Both groups used the same batch of high-purity reagents and identical initial conditions, with the initial acid added in a single step as detailed in Table 7. The Automated Group operated with preset parameters, using the combined neural network vision model and optical detection to determine digestion completeness in real time. The vision model identified the visual morphology of the digestion system, while optical detection captured its physicochemical characteristics; the two complementary techniques eliminated the limitations of single-method judgment and improved the accuracy of end point determination and supplementary acid addition. In contrast, the Manual Group followed standardized digestion protocols, with the operator judging both the end point and supplementary acid addition by visual observation alone. Comparing results between the two groups quantified the effectiveness of automation in standardizing digestion; differences in final acid volumes arise from the different criteria for deciding supplementary acid addition.

Table 7. Sample Information and Methods.
number | sample type | HCl/mL | HNO3/mL | HF/mL | temperature/°C | volume/mL
S1 | soluble metal salts | 0 | 0 | 0 | 0 | 50
S2 | insoluble molten sheets | 3.0 | 1.0 | 1.0 | 110 | 50
S3 | Fischer–Tropsch synthesis organic water | 0 | 3.0 | 0 | 105 | 20
S4 | methanol catalysts | 1.5 | 0.5 | 0 | 110 | 50
S5 | iron-based catalysts | 1.0 | 3.0 | 0.5 | 110 | 50
S6 | cobalt-based catalysts | 3.0 | 1.0 | 0 | 110 | 50

3.2.2. Digestion Data Comparison

As shown in Table 8, the intelligent digestion system demonstrated significant comprehensive performance advantages over manual operation. In terms of total sample preparation time, except for the S1 soluble metal salts, where the manual group had a slight advantage because no end point judgment was needed, the processing time of the intelligent group for samples S2–S6 was significantly shortened, attributable to the efficient synergy among the execution, perception, and decision layers. The manual group relied on empirically driven iterative acid replenishment, which not only prolonged the processing cycle but also led to systematically higher acid reagent consumption; as shown in Table 8, the average acid reagent consumption of the manual group for samples S2–S6 was 18.2–33.3% higher than that of the intelligent group. Critically, no statistically significant difference was observed in the turbidity values of the digestates between the two groups (P = 0.339 > 0.05), indicating that the intelligent system, while ensuring digestion completeness, achieved a dual improvement in processing efficiency and resource utilization. This advantage was more prominent for complex matrix samples such as catalysts, where the intelligent system shortened digestion time by up to 68.8% (from 240 to 75 min) without affecting analytical accuracy.

Table 8. Comparison of Digestion Data.
number | time, intelligent/min | time, manual/min | time reduction | acid, intelligent/mL | acid, manual/mL | acid reduction | turbidity, intelligent/NTU | turbidity, manual/NTU | P-value (turbidity, all samples)
S1 | 5 | 2 | –150% | 0 | 0 | 0 | 0.2 | 0.4 | 0.339
S2 | 75 | 150 | 50.0% | 5.0 | 7.0 | 28.6% | 0.8 | 1.4 |
S3 | 150 | 180 | 16.7% | 3.0 | 4.5 | 33.3% | 0.5 | 0.7 |
S4 | 40 | 65 | 38.5% | 4.0 | 5.5 | 27.3% | 1.5 | 1.6 |
S5 | 75 | 240 | 68.8% | 4.5 | 5.5 | 18.2% | 1.4 | 2.0 |
S6 | 75 | 210 | 64.3% | 8.0 | 12.0 | 33.3% | 0.9 | 1.1 |
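The reported P-value is consistent with an independent two-sample t-test on the turbidity column of Table 8 (the paper does not name the test used); a quick check with SciPy:

```python
from scipy import stats

# Turbidity values (NTU) for S1-S6 from Table 8
intelligent = [0.2, 0.8, 0.5, 1.5, 1.4, 0.9]
manual = [0.4, 1.4, 0.7, 1.6, 2.0, 1.1]

# Two-sample t-test on the group means; with equal variances assumed this
# yields p of about 0.34, close to the reported 0.339 (p > 0.05: no
# statistically significant difference in digestion completeness)
t_stat, p_value = stats.ttest_ind(intelligent, manual)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```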

3.2.3. Digestion Outcome Comparison

As illustrated in Figure 7, comparative quantitative analysis of characteristic elements (Al, Ca, Cu, Fe, Li, and Na) in the digestion solutions via ICP-OES (inductively coupled plasma optical emission spectrometry) revealed no statistically significant differences between the intelligent and manual groups. Taking the representative sample S2 (insoluble fused disc) as an example, the sodium concentration measured in the intelligent group was 110.40 mg/L, while the manual group yielded 114.62 mg/L, a relative deviation of 3.8%, well below the 5% threshold specified in GB/T 27417-2017 ("Conformity Assessment: Guidance on Validation and Verification of Chemical Analytical Methods"). This high level of consistency validates the technical reliability of the automated system.

Figure 7. Elemental analysis results of samples S1–S6 by ICP-OES.

4. Conclusions

This paper develops a fully automated intelligent digestion system suitable for routine laboratories in environmental monitoring, food testing, and other applications. The main conclusions are as follows:

1) The intelligent digestion system achieves full-process unmanned operation through an execution–perception–decision three-layer closed-loop architecture, integrating optical turbidity detection with machine vision models to raise the accuracy of digestion end point determination to 99.1%; high-precision peristaltic pumps combined with acid vapor absorption devices comply with green laboratory standards.

2) To address the inherent defects of traditional digestion equipment, the dual-mode fusion mechanism dynamically adjusts the weights of the optical and visual components, effectively handling extreme scenarios such as high-chroma digestion solutions and bubble-containing samples; real-time intermittent recording of time-series temperature, turbidity, and image data supports backtracking analysis of digestion anomalies.

3) The intelligent digestion system has strong applicability in food and pharmaceutical manufacturing and environmental monitoring; ICP-OES detection results show no statistical difference from those of manual digestion, with good data reproducibility.

The authors declare no competing financial interest.

References

  1. Ge C., Lao F., Li W., Li Y., Chen C., Qiu Y., Mao X., Li B., Chai Z., Zhao Y. Quantitative Analysis of Metal Impurities in Carbon Nanotubes: Efficacy of Different Pretreatment Protocols for ICPMS Spectroscopy. Anal. Chem. 2008;80(24):9426–9434. doi: 10.1021/ac801469b.
  2. Simoes F. R. F., Batra N. M., Warsama B. H., Canlas C. G., Patole S., Yapici T. F., Costa P. M. F. J. Elemental Quantification and Residues Characterization of Wet Digested Certified and Commercial Carbon Materials. Anal. Chem. 2016;88(23):11783–11790. doi: 10.1021/acs.analchem.6b03407.
  3. Ben Ghorbal Y., Lyczko N., Arlabosse P., Corbin A., Chaucherie X., Gosset T., Nzihou A. Comparing Borate Fusion and Microwave Digestion in the Challenging Dissolution and Quantification of Inorganic Elements from Bottom Ash. ACS Omega. 2025;10(39):45334–45341. doi: 10.1021/acsomega.5c04937.
  4. John M. K. Automated Digestion System for Safe Use of Perchloric Acid. Anal. Chem. 1972;44(2):429–430. doi: 10.1021/ac60310a060.
  5. Madika B., Saha A., Kang C., Buyantogtokh B., Agar J., Wolverton C. M., Voorhees P., Littlewood P., Kalinin S., Hong S. Artificial Intelligence for Materials Discovery, Development, and Optimization. ACS Nano. 2025;19(30):27116–27158. doi: 10.1021/acsnano.5c04200.
  6. Wang L., Chen F., Wang Y., Liu Y., Luo C. Image Recognition and Processing Algorithm of Power Grid Facilities Map Based on Deep Learning. In 2024 Asia-Pacific Conference on Software Engineering, Social Network Analysis and Intelligent Computing (SSAIC); IEEE: New Delhi, India, 2024; pp 17–21.
  7. Jamali P. V., Nambi E., Loganathan M., Saravanan S., Chandrasekar V. Rice-YOLO: An Automated Insect Monitoring in Rice Storage Warehouses with the Deep Learning Model. ACS Agric. Sci. Technol. 2025;5(2):206–221. doi: 10.1021/acsagscitech.4c00633.
  8. Capitán-Vallvey L. F., López-Ruiz N., Martínez-Olmos A., Erenas M. M., Palma A. J. Recent Developments in Computer Vision-Based Analytical Chemistry: A Tutorial Review. Anal. Chim. Acta 2015;899:23–56. doi: 10.1016/j.aca.2015.10.009.
