ACS Omega. 2023 Sep 27;8(40):37482–37489. doi: 10.1021/acsomega.3c05780

Library-Based Raman Spectral Identification Using Multi-Input Hybrid ResNet

Tiejun Chen 1, Sung-June Baek 1,*
PMCID: PMC10568588  PMID: 37841175

Abstract


Raman spectroscopy is widely used for its exceptional identification capabilities in various fields. Traditional methods for target identification using Raman spectroscopy rely on signal correlation with moving windows, requiring data preprocessing that can significantly impact identification performance. In recent years, deep-learning approaches have been proposed to leverage data augmentation techniques, such as baseline and additive noise addition, in order to overcome data scarcity. However, these deep-learning methods are limited to the spectra encountered during training and struggle to handle unseen spectra. To address these limitations, we propose a multi-input hybrid deep-learning model trained with simulated spectral data. By employing simulated spectra, our method tackles the challenges of data scarcity and the handling of unseen spectra encountered in traditional and deep-learning methods. Experimental results demonstrate that our proposed method achieves outstanding identification performance and effectively handles spectra obtained from different Raman spectroscopy systems.

Introduction

Raman spectroscopy is an excellent analytical technique widely used in various technical fields, especially for substance identification. Traditionally, target substance detection and identification methods using Raman spectroscopy rely on library-based algorithms. With the continuous improvement of this technology, miniaturized Raman spectrometers are gaining momentum in fields such as explosives and poison detection, as well as pharmaceutical analysis.1–4

In the analysis of Raman spectra, most traditional substance identification methods employ spectral matching algorithms based on similarity criteria computed between unknown spectra and reference spectra in a spectral data library.5,6 The information carried by Raman spectral signals, including the intensity, position, and width of spectral peaks, is crucial. However, Raman spectral signals are easily affected by factors such as substance density, external light sources, and environmental noise, resulting in baseline and additive noise in the measured spectra.

The influence of these factors necessitates cumbersome preprocessing steps in traditional target substance identification methods to obtain more accurate identification results. Typically, preprocessing involves a moving average filter7 or a Savitzky–Golay filter8 to remove additive noise, as well as methods such as the asymmetrically reweighted penalized least-squares method,9 the adaptive smoothness parameter penalized least-squares method,10 or deep-learning methods11 to eliminate the baseline.

Each preprocessing step involves various methods, and the selection of appropriate methods is crucial to achieve accurate identification results. Additionally, many preprocessing methods require parameter adjustments to achieve desired results. Moreover, preprocessing can inadvertently eliminate important information buried in the noise and affect the identification results.

In addition to these factors, different Raman spectrometers yield slightly different spectra, requiring corresponding spectral processing techniques. The segmental hit-quality index method5 and the adaptive hit-quality index method12 have recently been proposed to address this issue.

Among traditional classifiers, the support vector machine (SVM) is advantageous for small-sample pattern recognition. Combined with principal component analysis (PCA) or other dimension-reduction algorithms, SVM can perform well even on high-dimensional spectral data, and it has been shown to compare favorably with various machine learning classification models.13 However, for both general matching methods and traditional machine learning methods, various preprocessing steps are required to improve identification performance.

With the rapid development of deep-learning technology, convolutional neural networks (CNNs), a branch of deep-learning networks, have been applied in various fields. CNNs can extract essential features from low-level data and exhibit better classification ability than conventional machine learning methods. The structure of a CNN generally consists of two parts: feature extraction and classification. The feature extraction part extracts useful features from the original input data, which are then utilized for the classification task in the classification part. Compared with traditional classification methods, CNN-based methods do not require feature engineering and can be trained end-to-end without manual tuning. As a result, researchers have proposed using deep learning for target substance detection and identification in Raman spectra.14,15

However, deep-learning models are data hungry and heavily rely on a substantial amount of labeled training data for accurate predictions. Acquiring a significant amount of actual Raman spectral data to adequately train the deep-learning model poses a challenge. Consequently, the application of substance identification based on deep-learning technology using Raman spectroscopy encounters a dilemma. To tackle the shortage of actual Raman spectral data, several studies have suggested the utilization of data augmentation methods.16,17

Data augmentation involves introducing additive noise and baselines into the actual Raman spectral signals, as well as randomly shifting the spectral signal a few wavenumbers to the left or right.18,19 While data augmentation can generate a substantial number of augmented Raman spectra, this approach can lead to overfitting6 since the augmented data are derived from the given real spectra. It is therefore important to recognize that such a model can identify only the specific spectral categories on which it was trained. In addition, as the number of spectral categories to be identified increases, the identification accuracy of the model may be adversely affected.

In this study, a novel identification algorithm based on a Raman spectral library using a multi-input hybrid deep-learning model is proposed. This method effectively overcomes the disadvantage of general deep-learning models, which can identify only specific spectra, and improves identification accuracy on multi-category problems by reducing the multiclass classification problem to binary classification. To address the limited availability of actual spectral data, the proposed model uses a simulation method to generate a substantial amount of training data instead of relying on complex augmentation of a restricted number of actual spectra.

Proposed Method

Data Preparation

To overcome the lack of measured spectra, we generated simulated spectral data by randomly combining baseline, peak, and additive noise components. Specific details of the generation procedure are not repeated here; the reader is referred to Chen et al.11
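The idea of the generation procedure, a clean peak spectrum plus a random smooth baseline and additive noise, can be sketched as follows. The peak shapes (Gaussians), the polynomial baseline order, and the noise level here are illustrative assumptions, not the parameters used by the authors:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_peaks(x, n_peaks):
    """Clean spectrum: a random sum of Gaussian-shaped Raman peaks."""
    y = np.zeros_like(x)
    for _ in range(n_peaks):
        pos = rng.uniform(x.min(), x.max())     # peak position (cm^-1)
        width = rng.uniform(5, 30)              # peak width (illustrative)
        height = rng.uniform(0.1, 1.0)          # peak intensity
        y += height * np.exp(-0.5 * ((x - pos) / width) ** 2)
    return y

def simulate_spectrum(length=512):
    """Simulated measurement = clean peaks + smooth baseline + additive noise."""
    x = np.linspace(201, 3500, length)          # spectral axis used in the paper
    peaks = gaussian_peaks(x, rng.integers(3, 10))
    # Smooth baseline from a low-order polynomial with random coefficients
    t = np.linspace(0, 1, length)
    baseline = np.polyval(rng.uniform(-0.5, 0.5, 4), t)
    noise = rng.normal(0, 0.01, length)
    return peaks, peaks + baseline + noise
```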

The primary objective of this study is to develop a method capable of accurately identifying materials not only within the same Raman spectroscopy system but also across different Raman spectroscopy systems. To achieve this, a training data set is created using simulation data that incorporate various background noises. This training data set forms the foundation for training the proposed method and enhancing its performance in identifying Raman spectra obtained from different spectroscopy systems.

The spectra obtained from different Raman spectroscopy systems can vary due to system-specific factors. Figure 1 illustrates this difference by showing the Raman spectra of the same 2-nitrotoluene substance measured using different Raman spectroscopy systems. These spectra exhibit variations in baseline, additive noise, and peak intensities, while their peak positions remain consistent. Based on this characteristic, we generate the training data set to encompass the variability in baselines, additive noise, and peak intensities, while ensuring that the peak positions remain the same. By incorporating these variations into the training data set, we aim to enable our proposed method to accurately identify Raman spectra acquired from different Raman spectroscopy systems.

Figure 1. Examples of 2-nitrotoluene generated by different Raman spectroscopy systems.

The data set generation process involves several steps. First, two clean spectra (positive and negative samples) without baseline and additive noise are generated to simulate library spectral data. Then, the peak intensities of the positive sample are randomly adjusted to between 0.8 and 1.2 times the original intensities, and baselines and additive noise are added to simulate test spectral data, which exhibit the variations in baseline and noise levels of different Raman spectroscopy systems. Next, the simulated test and library spectra are randomly combined to form spectral data pairs. If a spectral data pair represents the same spectrum, the target label for that pair is set to 1; otherwise, it is set to 0. Figure 2 provides a visual representation of the procedure for the simulated spectral data pair.
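The pairing logic above can be sketched as follows; the baseline order and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_pair(clean_pos, clean_neg, same):
    """Build one (test, library) training pair with target label 1 or 0.

    clean_pos / clean_neg: noise-free simulated library spectra.
    The test spectrum is derived from the positive sample with randomly
    scaled peak intensities (0.8-1.2x), a random baseline, and additive
    noise; the library spectrum stays clean."""
    library = clean_pos if same else clean_neg
    n = len(clean_pos)
    scale = rng.uniform(0.8, 1.2)               # peak-intensity variation
    t = np.linspace(0, 1, n)
    baseline = np.polyval(rng.uniform(-0.3, 0.3, 4), t)
    noise = rng.normal(0, 0.01, n)
    test = scale * clean_pos + baseline + noise
    label = 1.0 if same else 0.0
    return test, library, label
```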

Figure 2. Procedure for the simulated spectral data pair.

A total of 500,000 pairs of simulated spectral data were generated in the experiment, comprising 400,000 pairs for training and 100,000 pairs for validation. Although this number could easily be increased, it was limited to this amount in consideration of the time required for deep-learning training.

Deep-Learning Model

The convolutional layer offers several advantages, including the ability to retain spatial information and having fewer parameters compared to fully connected layers. Due to these advantages, convolutional layers are commonly used in constructing deep-learning networks. As the depth of the network increases, the deep-learning model can learn and extract more representative features from the training data, leading to improved identification results.

However, as the network becomes deeper, a common issue known as gradient vanishing20 may arise: gradients become extremely small and fail to propagate back through the layers during training. This significantly hampers the learning capability of the model, diminishing its ability to learn effectively and make accurate predictions.

ResNet (residual network) is one of the most renowned convolutional neural network (CNN) architectures; it addresses gradient vanishing through shortcut connections in the residual learning framework.21 This architecture has found wide application across various research fields and has consistently demonstrated superior prediction results compared with traditional CNNs. We therefore adopt convolutional layers with the ResNet architecture to construct our deep-learning network.

Traditional deep-learning networks are constructed by connecting adjacent layers, which limits the integration of mixed-type information. In this study, to enable deep learning to identify spectra by comparing unknown spectra with known spectra in the spectral data library, we constructed a spectral identification network based on hybrid deep neural networks (HDNNs).22

The fundamental concept of the HDNNs is to independently process different inputs using separate branch networks. This approach enables each branch network to learn relevant features specific to its input. The learned features from each branch network are then combined into an ensemble feature that captures valuable information from the different inputs. This ensemble feature is subsequently fed into the neural network for target learning. The HDNNs structure, with its aggregation of multiple branch networks, provides exceptional flexibility and wide applicability across various applications.23,24

In our proposed network structure, we integrate the test spectral branch network and the library spectral branch network to extract features and facilitate spectral identification. The identification process involves learning and comparing the differences among the extracted features from the two branches. Through the analysis of these differences, the network can effectively discriminate and identify spectra based on their unique characteristics.

There are several differences between our network and HDNNs. First, we employ one-dimensional CNNs specifically designed for one-dimensional spectra, addressing the requirements of our application. In contrast to the HDNNs, where the learned features are combined, we subtract the extracted features at the junction of the two branches. This subtraction operation enables us to capture the differences or discrepancies between the features.

Another distinction lies in the use of convolutional layers. Both the backbone and the branch parts of our network incorporate convolutional layers. To provide flexibility in adjusting the channel size of each layer, we utilize one-dimensional convolutional layers (Conv1d) for downsampling instead of using Maxpooling. Moreover, we employ one-dimensional transposed convolutional layers (ConvTranspose1d) for upsampling in the backbone part to ensure consistency with the downsampling process.

In the final output, we did not directly incorporate a fully connected layer. Instead, we utilize a convolutional layer for downsampling before the final output layer, which helps to reduce the computational cost. Figure 3 illustrates the network structure, where “T” represents the test spectrum and “L” represents the library spectrum. Table 1 details the components of the proposed deep-learning network.

Figure 3. Structure of the proposed model.

Table 1. Architecture of the Proposed Deep-Learning Network.

layer                          repeat   output     parameters
Conv1d                         3        32 × 512   10,496
block ×2
  Conv1d                       4        48 × 256   125,760
  Conv1d (shortcut)            1        72 × 128
block ×2
  ConvTranspose1d              1        48 × 256   73,856
  Conv1d                       3        32 × 512
  ConvTranspose1d (shortcut)   1
Conv1d                         1        1 × 512    33
Flatten                        1        512        0
Linear                         1        1          513
total number of parameters                         210,658
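As a rough illustration of this structure, the following PyTorch sketch builds two Conv1d branches whose features are subtracted, passes the difference through a downsample/upsample pair with a shortcut connection, and reduces the result to a single logit. The layer counts, kernel sizes, and channel widths here are simplified assumptions and do not reproduce the exact architecture or parameter counts of Table 1:

```python
import torch
import torch.nn as nn

class BranchNet(nn.Module):
    """One branch: a 1-D convolutional feature extractor (illustrative sizes)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=9, padding=4), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class MultiInputHybridNet(nn.Module):
    """Two branches whose features are subtracted, then a Conv1d downsample /
    ConvTranspose1d upsample pair with a residual shortcut, and a single-logit
    output (a Conv1d channel reduction replaces a large fully connected layer)."""
    def __init__(self, length=512):
        super().__init__()
        self.test_branch = BranchNet()
        self.lib_branch = BranchNet()
        self.down = nn.Conv1d(32, 48, kernel_size=3, stride=2, padding=1)
        self.up = nn.ConvTranspose1d(48, 32, kernel_size=4, stride=2, padding=1)
        self.head = nn.Sequential(
            nn.Conv1d(32, 1, kernel_size=1),   # cheap channel reduction
            nn.Flatten(),
            nn.Linear(length, 1),              # one logit; pair with BCEWithLogitsLoss
        )
    def forward(self, t, l):
        d = self.test_branch(t) - self.lib_branch(l)   # feature difference
        d = d + self.up(self.down(d))                  # residual shortcut
        return self.head(d)

model = MultiInputHybridNet()
logit = model(torch.randn(2, 1, 512), torch.randn(2, 1, 512))
```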

Experimental Section

In this study, the proposed deep-learning model was trained on a computer equipped with a GeForce RTX 3080 GPU. The deep-learning framework used for training was PyTorch version 1.7.0.

Raman Database

In order to demonstrate the superior identification performance of our proposed method and address the issue of spectral intensity variation between measurement instruments, we conducted experiments using spectra from 20 different substances and spectra from 10 different substances measured with different Raman spectroscopy systems.

The Raman spectral library used in the experiment contains a total of 14,033 spectra. These library spectra were measured using a Fourier transform Raman (FT-Raman) spectrometer from Thermo Fisher Scientific, which is equipped with a laser emitting at a wavelength of 1064.0 nm.

Three different Raman spectroscopy systems were used in the experiment. Instrument 1 is a Renishaw 2000 Raman microscope system (Renishaw, New Mills, U.K.) equipped with a 514.5 nm argon ion laser. Instrument 2 and Instrument 3 are both in via Inspector portable Raman systems (Delta Nu LLC, Laramie, WY) with different excitation sources: Instrument 2 is equipped with a 632.8 nm He–Ne laser, while Instrument 3 is equipped with a 785.0 nm laser. Table 2 provides the measurement parameters and detailed specifications of these Raman spectroscopy systems.

Table 2. Mechanical Specifications of the Four Different Instruments.

instrument   spectrometer            laser power (mW)   excitation wavelength (nm)   spectral range (cm–1)   resolution (cm–1)
master       FT-Raman spectrometer   400–600            1064                         201–3500                1.93
I-1          Renishaw 2000           1.0                514.5                        201–3500                4
I-2          in via                  1.0                632.8                        201–3500                1
I-3          in via                  1.0                785.0                        201–3500                1

Training Strategy

The loss function and optimization method used for training the deep-learning model in this study were determined through preliminary experiments. The chosen loss function is BCEWithLogitsLoss, which combines a sigmoid layer and binary cross-entropy. This function provides numerical stability and is more effective than using sigmoid and binary cross-entropy separately. The optimization method employed is the Adam algorithm, which dynamically adjusts the learning rate during training.

The learning rate for the main parameters of the deep-learning model is set to 5 × 10–4. A batch size of 500 samples is used, which determines the number of training samples processed in each iteration. The maximum number of training epochs is set to 50, i.e., the maximum number of passes of the entire training data set through the model. Throughout the training process, the model with the minimal validation loss was selected as the best model.
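These settings can be condensed into a minimal PyTorch training-loop sketch. The model here is a trivial stand-in (the real network takes a spectral pair as two inputs), and only a toy batch of random data is used:

```python
import torch
import torch.nn as nn

# Illustrative stand-in model: any network emitting one logit per pair works here.
model = nn.Sequential(nn.Flatten(), nn.Linear(2 * 512, 1))

criterion = nn.BCEWithLogitsLoss()            # sigmoid + binary cross-entropy, numerically stable
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)

# Toy batch standing in for the 500-sample batches of simulated pairs.
pairs = torch.randn(500, 2, 512)              # (test spectrum, library spectrum)
labels = torch.randint(0, 2, (500, 1)).float()

best_val = float("inf")
for epoch in range(3):                        # the paper trains for up to 50 epochs
    optimizer.zero_grad()
    loss = criterion(model(pairs), labels)
    loss.backward()
    optimizer.step()
    # In the real setup, validation loss over the 100,000 held-out pairs
    # selects the best checkpoint; here the training loss stands in.
    if loss.item() < best_val:
        best_val = loss.item()
```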

Preprocessing and Identification Procedure

Before applying our proposed method for spectral identification, a preprocessing step was performed on the Raman spectral library. First, the spectral range of all Raman spectra in the library was adjusted to 201–3500 cm–1 by resampling. Next, a Savitzky–Golay filter was applied to remove additive noise; in our experiment, the window length of this filter was set to 9 and the polynomial degree to 2 (ref 25).
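With SciPy, this smoothing step can be reproduced in a few lines; the synthetic single-peak spectrum below is only for illustration:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(2)
x = np.linspace(201, 3500, 512)
clean = np.exp(-0.5 * ((x - 1000) / 50) ** 2)     # one synthetic Raman-like peak
noisy = clean + rng.normal(0, 0.02, x.size)       # additive measurement noise

# Window length 9 and polynomial degree 2, as used in the paper.
smoothed = savgol_filter(noisy, window_length=9, polyorder=2)
```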

Among various baseline correction methods, we have selected the asymmetrically reweighted penalized least-squares method (arPLS) due to its ability to yield good and robust results.9 However, it is important to note that there exist alternative methods available, and the appropriate additive noise removal and baseline correction methods should be selected based on the spectral data and specific application requirements.
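A compact sketch of arPLS following Baek et al.9 is shown below: a smooth curve is fit by penalized least squares, and points lying above it (likely peaks) are iteratively downweighted via a logistic weight. Parameter values such as the smoothness weight `lam` are illustrative defaults, not the paper's settings:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def arpls(y, lam=1e5, ratio=1e-6, max_iter=50):
    """Asymmetrically reweighted penalized least-squares baseline estimate."""
    n = len(y)
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
    H = lam * (D.T @ D)                       # second-difference smoothness penalty
    w = np.ones(n)
    for _ in range(max_iter):
        W = sparse.diags(w)
        z = spsolve((W + H).tocsc(), w * y)   # weighted penalized LS fit
        d = y - z
        dn = d[d < 0]                         # residuals below the fit
        if dn.size == 0 or dn.std() == 0:
            break
        m, s = dn.mean(), dn.std()
        # Logistic reweighting: points far above the fit get weight ~0.
        w_new = 1.0 / (1.0 + np.exp(np.clip(2 * (d - (2 * s - m)) / s, -50, 50)))
        if np.linalg.norm(w - w_new) / np.linalg.norm(w) < ratio:
            break
        w = w_new
    return z
```

Subtracting the returned curve `z` from the spectrum yields the baseline-corrected signal.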

Moreover, in order to incorporate these baseline-corrected spectra into the deep-learning model, it was necessary to adjust the length of both the spectra to be identified and the spectra in the library to 512 through resampling. Additionally, the relative intensity was normalized to a range of 0 to 1 using the min–max normalization method. These preprocessing steps ensured the compatibility of the spectra with the deep-learning model.
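These two operations amount to a few lines; linear interpolation is an assumption here, since the paper does not state the resampling scheme:

```python
import numpy as np

def prepare(spectrum, length=512):
    """Resample a spectrum to a fixed length, then min-max normalize to [0, 1]."""
    x_old = np.linspace(0.0, 1.0, len(spectrum))
    x_new = np.linspace(0.0, 1.0, length)
    resampled = np.interp(x_new, x_old, spectrum)   # linear resampling (assumed)
    lo, hi = resampled.min(), resampled.max()
    return (resampled - lo) / (hi - lo)             # min-max normalization
```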

Figure 4 presents a flowchart of the spectral identification process using the deep-learning model. The shaded arrows indicate the real-time process, while the unshaded arrows indicate the offline process. In other words, the additive noise removal and baseline correction preprocessing of the library spectra are carried out in advance of the identification process.

Figure 4. Flowchart of spectra identification using the deep-learning model.

On the other hand, the test spectra do not require these preprocessing steps. Instead, only the minimal computation of adjusting the spectral length by resampling and normalizing the relative intensity through min–max normalization is performed in real time when necessary, as indicated by the shaded arrows. This ensures that the test spectra are appropriately adjusted in length and relative intensity for compatibility with the deep-learning model.
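The identification step itself pairs the test spectrum with each library entry, scores every pair, and keeps the rank-1 (highest-scoring) match. The sketch below uses a correlation score as a hypothetical stand-in for the trained model's sigmoid output:

```python
import numpy as np

def identify(test_spec, library, score_fn):
    """Score the test spectrum against every library spectrum and return the
    index and score of the rank-1 match. score_fn stands in for the trained
    model's output on a (test, library) pair."""
    scores = np.array([score_fn(test_spec, lib) for lib in library])
    order = np.argsort(scores)[::-1]          # descending: rank 1 first
    return order[0], scores[order[0]]

# Toy demo: Pearson correlation as a stand-in similarity score.
rng = np.random.default_rng(3)
library = rng.normal(size=(5, 512))           # tiny library (the paper's has 14,033)
test = library[2] + rng.normal(0, 0.05, 512)  # noisy copy of library entry 2
corr = lambda a, b: float(np.corrcoef(a, b)[0, 1])
best, score = identify(test, library, corr)
```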

Results and Discussion

To demonstrate the effectiveness and superiority of our proposed method compared with other methods, we conducted the first experiment (experiment 1) using Raman spectra of 20 different substances (1,3-DNB, 2,6-DNT, 2-A-DNT, 3,4-DNT, TMETN, 4-ADNT, ADN, AN, AP, DMDNB, HMTD, HMX, HNIW, NQ, NTO, PA, PETN, RDX, Tetryl, and TNT). Each substance has a set of 50 spectra with different baselines and additive noise. Figure 5 depicts an example of a set of 50 Tetryl spectra. From the figure, it is apparent that the intensities of the spectral peaks slightly differ among the spectra within the set.

Figure 5. Example of a set of 50 Tetryl spectra.

In this experiment, we selected one spectrum from each set of 50 spectra for 20 different substances. These selected spectra were preprocessed and used as the library spectra for identification. We then conducted identification experiments on a total of 1000 spectra, representing 50 spectra for each of the 20 substances, based on the generated library spectra.

The experimental results, depicted in experiment 1 of Table 3, demonstrate the successful identification of the 1000 spectra using our proposed method. The results indicate that our method effectively distinguishes between the different substances based on their spectral library.

Table 3. Results of Our Method and Other Deep-Learning Methods.

Experiment 1:

model      # of classes   data preprocessing                      data augmentation                   accuracy (%)
1D CNN6    5              raw                                     shift & scaling & noise             100
1D CNN18   512            raw                                     shift & noise & combining spectra   93.3
proposed   20             raw                                     data simulation                     100
1D CNN26   4              raw                                     noise                               95.22
1D CNN27   3              raw                                     n/a                                 92.5
CNN19      20             raw                                     various baselines                   100
1D CNN28   72             baseline correction                     shift & noise                       98.1
1D CNN29   6              baseline correction & noise reduction   n/a                                 100
DNN30      72             baseline correction                     shift & noise                       96.4

Experiment 2 (proposed method):

instrument   accuracy (%)
I-1          100
I-2          100
I-3          100

Table 3 presents a comparison of our proposed method to other methods based on different databases described in the literature. The results demonstrate that our proposed method achieves superior or comparable performance. It is worth noting that some of the compared methods involve preprocessing, followed by deep-learning-based identification operations. During the preprocessing stage, selecting a robust method based on the spectral data and performing parameter adjustments are necessary to ensure excellent identification results.

An important aspect of our proposed method is that it employs data simulation, rather than augmentation of real spectral data, to build the training set. The results validate the effectiveness of using simulated data to generate training data sets for deep-learning models.

Furthermore, we verified the applicability of our proposed method to spectra measured by different Raman spectroscopy systems through a second experiment (experiment 2). Figure 1 illustrates significant differences in intensity among spectra measured under different systems, with Instrument 3 exhibiting particularly weak peak intensities in the range of 400 to 500 cm–1. Many identification methods struggle to accurately identify such spectra due to these variations in intensity.

Experiment 2 of Table 3 displays the identification results for a total of 30 spectra obtained from different Raman spectroscopy systems for 10 substances (acetone, acetonitrile, benzene, cyclohexane, ethyl alcohol, ethylene glycol, hexane, nitromethane, 2-nitrotoluene, and toluene). The experiment employs a deep-learning model to compare the unknown spectra to the library spectra for identification. The identified result corresponds to the spectrum with a rank of 1 among the 14,033 members in the library. The results of the second experiment demonstrate that our proposed method accurately identifies all substances across the three different Raman spectroscopy systems.

Based on the above two experiments, we conclude that our method accurately identifies Raman spectra of different substances and is applicable to different Raman spectroscopy systems. Moreover, our method overcomes the limitations of other methods that can identify only spectra trained by the deep-learning model.

Conclusions

Our study proposed a multi-input hybrid deep-learning model trained using simulated spectral data for the identification of Raman spectra. The model operates by determining whether a test spectrum and a spectrum from the spectral data library belong to the same category. To address the challenge of identifying spectra measured from different Raman spectroscopy systems, we simulated spectra from various systems by introducing diverse baseline and additive noise as well as random modifications to peak intensities.

By generating simulated spectral data, we successfully addressed the issue of insufficient training data. Additionally, we simplified the classification task by employing binary classification, resulting in improved identification performance. Our method eliminates the complex preprocessing steps typically found in traditional target identification methods and demonstrates comparable or superior performance relative to existing deep-learning-based methods for spectral identification. Importantly, our method can identify spectra that the model has not encountered during training, and it extends to the identification of spectral data obtained from different Raman spectroscopy systems.

Acknowledgments

This research was partly supported by the MSIT (Ministry of Science and ICT), Korea, under the Innovative Human Resource Development for Local Intellectualization support program (IITP-2023-RS-2022-00156287) and the ICAN (ICT Challenge and Advanced Network of HRD) program (IITP-2023-RS-2022-00156385) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).

The authors declare no competing financial interest.

References

  1. Izake E. L. Forensic and homeland security applications of modern portable Raman spectroscopy. Forensic Sci. Int. 2010, 202, 1–8. doi: 10.1016/j.forsciint.2010.03.020.
  2. Hwang J.; Choi N.; Park A.; Park J.-Q.; Chung J. H.; Baek S.; Cho S. G.; Baek S.-J.; Choo J. Fast and sensitive recognition of various explosive compounds using Raman spectroscopy and principal component analysis. J. Mol. Struct. 2013, 1039, 130–136. doi: 10.1016/j.molstruc.2013.01.079.
  3. Lawson L. S.; Rodriguez J. D. Raman barcode for counterfeit drug product detection. Anal. Chem. 2016, 88, 4706–4713. doi: 10.1021/acs.analchem.5b04636.
  4. Galeev R. R.; Semanov D. A.; Galeeva E. V.; Falaleeva T. S.; Aryslanov I. R.; Saveliev A. A.; Davletshin R. R. Peak window correlation method for drug screening using Raman spectroscopy. J. Pharm. Biomed. Anal. 2019, 163, 9–16. doi: 10.1016/j.jpba.2018.09.041.
  5. Park J.-K.; Park A.; Yang S. K.; Baek S.-J.; Hwang J.; Choo J. Raman spectrum identification based on the correlation score using the weighted segmental hit quality index. Analyst 2017, 142, 380–388. doi: 10.1039/C6AN02315K.
  6. Mozaffari M. H.; Tay L.-L. Overfitting one-dimensional convolutional neural networks for Raman spectra identification. Spectrochim. Acta, Part A 2022, 272, 120961. doi: 10.1016/j.saa.2022.120961.
  7. Wold H. A Study in the Analysis of Stationary Time Series. J. R. Statist. Soc. 1939, 102, 295. doi: 10.2307/2980009.
  8. Savitzky A.; Golay M. J. Smoothing and differentiation of data by simplified least squares procedures. Anal. Chem. 1964, 36, 1627–1639. doi: 10.1021/ac60214a047.
  9. Baek S.-J.; Park A.; Ahn Y.-J.; Choo J. Baseline correction using asymmetrically reweighted penalized least squares smoothing. Analyst 2015, 140, 250–257. doi: 10.1039/C4AN01061B.
  10. Zhang F.; Tang X.; Tong A.; Wang B.; Wang J.; Lv Y.; Tang C.; Wang J. Baseline correction for infrared spectra using adaptive smoothness parameter penalized least squares method. Spectrosc. Lett. 2020, 53, 222–233. doi: 10.1080/00387010.2020.1730908.
  11. Chen T.; Son Y.; Park A.; Baek S.-J. Baseline correction using a deep-learning model combining ResNet and UNet. Analyst 2022, 147, 4285–4292. doi: 10.1039/D2AN00868H.
  12. Park J.-K.; Lee S.; Park A.; Baek S.-J. Adaptive hit-quality index for Raman spectrum identification. Anal. Chem. 2020, 92, 10291–10299. doi: 10.1021/acs.analchem.0c00209.
  13. Liu Y.; Wang Z.; Zhou Z.; Xiong T. Analysis and comparison of machine learning methods for blood identification using single-cell laser tweezer Raman spectroscopy. Spectrochim. Acta, Part A 2022, 277, 121274. doi: 10.1016/j.saa.2022.121274.
  14. Maruthamuthu M. K.; Raffiee A. H.; De Oliveira D. M.; Ardekani A. M.; Verma M. S. Raman spectra-based deep learning: A tool to identify microbial contamination. MicrobiologyOpen 2020, 9, e1122. doi: 10.1002/mbo3.1122.
  15. Hu J.; Zou Y.; Sun B.; Yu X.; Shang Z.; Huang J.; Jin S.; Liang P. Raman spectrum classification based on transfer learning by a convolutional neural network: Application to pesticide detection. Spectrochim. Acta, Part A 2022, 265, 120366. doi: 10.1016/j.saa.2021.120366.
  16. Kazemzadeh M.; Hisey C. L.; Zargar-Shoshtari K.; Xu W.; Broderick N. G. Deep convolutional neural networks as a unified solution for Raman spectroscopy-based classification in biomedical applications. Opt. Commun. 2022, 510, 127977. doi: 10.1016/j.optcom.2022.127977.
  17. Tian X.; Chen C.; Chen C.; Yan Z.; Wu W.; Chen F.; Chen J.; Lv X. Application of Raman spectroscopy technology based on deep learning algorithm in the rapid diagnosis of glioma. J. Raman Spectrosc. 2022, 53, 735–745. doi: 10.1002/jrs.6302.
  18. Liu J.; Osadchy M.; Ashton L.; Foster M.; Solomon C. J.; Gibson S. J. Deep convolutional neural networks for Raman spectrum recognition: a unified solution. Analyst 2017, 142, 4067–4074. doi: 10.1039/C7AN01371J.
  19. Liu Y.; Wu J.; Wang Y.; Dong S. Direct recognition of Raman spectra without baseline correction based on deep learning. AIP Adv. 2022, 12, 085212. doi: 10.1063/5.0100937.
  20. Hochreiter S.; Schmidhuber J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. doi: 10.1162/neco.1997.9.8.1735.
  21. He K.; Zhang X.; Ren S.; Sun J. Deep Residual Learning for Image Recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); IEEE, 2016.
  22. Yuan Z.; Jiang Y.; Li J.; Huang H. Hybrid-DNNs: Hybrid Deep Neural Networks for Mixed Inputs. arXiv:2005.08419, 2020. https://arxiv.org/abs/2005.08419.
  23. Yuan Z.; Huang H.; Jiang Y.; Li J. Hybrid deep neural networks for reservoir production prediction. J. Pet. Sci. Eng. 2021, 197, 108111. doi: 10.1016/j.petrol.2020.108111.
  24. Zang Z.; Wang W.; Song Y.; Lu L.; Li W.; Wang Y.; Zhao Y. Hybrid deep neural network scheduler for job-shop problem based on convolution two-dimensional transformation. Comput. Intell. Neurosci. 2019, 2019, 1–9. doi: 10.1155/2019/7172842.
  25. Park A.; Baek S.-J.; Shen A.; Hu J. Detection of Alzheimer’s disease by Raman spectra of rat’s platelet with a simple feature selection. Chemom. Intell. Lab. Syst. 2013, 121, 52–56. doi: 10.1016/j.chemolab.2012.11.011.
  26. Lee W.; Lenferink A. T.; Otto C.; Offerhaus H. L. Classifying Raman spectra of extracellular vesicles based on convolutional neural networks for prostate cancer detection. J. Raman Spectrosc. 2020, 51, 293–300. doi: 10.1002/jrs.5770.
  27. Sohn W. B.; Lee S. Y.; Kim S. Single-layer multiple-kernel-based convolutional neural network for biological Raman spectral analysis. J. Raman Spectrosc. 2020, 51, 414–421. doi: 10.1002/jrs.5804.
  28. Zhou W.; Tang Y.; Qian Z.; Wang J.; Guo H. Deeply-recursive convolutional neural network for Raman spectra identification. RSC Adv. 2022, 12, 5053–5061. doi: 10.1039/D1RA08804A.
  29. Yang S.; Xie Y.; Liu J.; Zhao S.; Jin S.; Zhang D.; Chen Q.; Huang J.; Liang P. Raman spectral classification algorithm of cephalosporin based on VGGNeXt. Analyst 2022, 147, 5486–5494. doi: 10.1039/D2AN01355J.
  30. Zhang R.; Xie H.; Cai S.; Hu Y.; Liu G.-k.; Hong W.; Tian Z.-q. Transfer-learning-based Raman spectra identification. J. Raman Spectrosc. 2020, 51, 176–186. doi: 10.1002/jrs.5750.
