Abstract
With the rapid development of the Internet of Things (IoT) and 5G technology, there has been a considerable increase in demand for self-powered and flexible sensors. However, existing solutions frequently prove inadequate regarding flexibility, energy efficiency, and the accuracy with which gestures can be recognized, particularly in noncontact operation scenarios. As a result, there is a need for innovative developments in sensor technology. This study proposes an artificial intelligence-based gesture recognition system comprising a triboelectric sensor ring, an Arduino signal processing module, and a deep learning module. Our approach enables the direct reading of triboelectric signals by Arduino through integrated circuits, thereby maintaining the output voltage of triboelectric signals within the input range of commonly used microcontrollers. The integration of triboelectric technology with sophisticated deep learning methodologies, notably the utilization of a one-dimensional convolutional neural network (CNN), has enabled the development of a system that exhibits an accuracy rate exceeding 95% in the recognition of 12 distinct gestures. This study demonstrates the prospective utility of triboelectric sensors in the realms of gesture recognition, wearable technology, and human–machine interaction.
1. Introduction
The advent of the Internet of Things (IoT) and the establishment of fifth-generation (5G) wireless networks have ushered in a new era of technological advancement. A growing number of wearable devices are being integrated into various aspects of daily life, including smart home devices, intelligent transportation systems, smart agriculture, and smart healthcare.1−4 For the wearer, lightweight, flexible, and long-lasting devices markedly improve the wearing experience. Nevertheless, current wearable devices require an external power source to generate signal excitation, which represents a significant barrier to their wider deployment.5−9 Lithium-ion batteries are often used as a workaround, but they introduce problems of their own, such as environmental pollution and inconvenient replacement.10,11 In 2012, Wang et al. introduced triboelectric nanogenerator (TENG) technology, an innovative energy conversion technology based on triboelectric charging and electrostatic induction. It transforms mechanical energy into electrical energy and has broad potential across a variety of applications. Owing to its simple manufacturing process, wide range of material choices, fast dynamic response, and self-powering capability, TENG offers new possibilities for wearable devices.12−15 Research on TENG-based wearable devices is therefore of significant academic and practical importance.
In recent years, researchers have designed a series of intelligent sensing devices based on TENG principles, which offer a new direction in energy harvesting and sensing. Xiong et al. developed a textile TENG with high tensile strength and impressive electrical output. The maximum peak power of this device is 3.6 μW, while the average output power is 0.38 μW, which is sufficient to power portable electronic devices. The researchers employed machine learning techniques to classify and identify four distinct human movement patterns: slow walking, fast walking, jogging, and running.16 Cheng et al. developed an antibacterial yarn (UV/OM-CY) exhibiting radiative cooling properties and ultraviolet (UV) resistance. This innovative yarn was integrated with polyethylene (PE) characterized by low UV transmittance. The resulting TENG fabric demonstrated UV transmittance values of 0.76% for UVA and 0.21% for UVB, achieving an exceptionally high ultraviolet protection factor (UPF) of 328. Moreover, this composite-yarn-woven TENG, a self-powered motion sensor, effectively detects falls during gait rehabilitation exercises.17 Yang et al. introduced a wearable triboelectric impedance tomography (TIT) system that enables noninvasive imaging of various biological tissues by utilizing impedance information derived from human soft tissues.18 Hwang et al. designed a stretchable, highly sensitive, self-powered integrated sensing system capable of accurately measuring deformations of the epidermis and muscles around the throat during breathing, coughing, drinking, and swallowing.19 Jin et al. designed an intelligent flexible robotic gripper system that employs machine learning (ML) techniques for data analysis. The system successfully demonstrated the gripper's ability to perceive and grasp objects, as well as achieve target recognition.20 Qin et al. proposed a triboelectric pressure sensor based on water-containing triboelectric elastomers with gradient-based microchannels, addressing the challenge of achieving both high sensitivity and a wide linear range.21 Nevertheless, in comparison to existing research, there has been a paucity of investigation into the integration of finger movement with TENG devices. The ability of the human finger to create a variety of gestures allows for the conveyance of a substantial amount of information, which holds significant potential in the field of wearable devices. In numerous recent studies, sensors have been positioned at the joints of the hands.22−25 These methods prioritize the optimization of sensor performance, thereby facilitating the acquisition of more pronounced response signals. Nevertheless, from the perspective of the individual wearing the device, this solution has the potential to negatively impact the wearing experience.26−29 The objective of this research is to develop a device that integrates the lightweight, self-powered attributes of TENG sensors with a range of finger movements for human–machine interaction or health monitoring, thereby enhancing the user experience. The accuracy of ML-assisted gesture recognition has seen a notable improvement with the advent of sophisticated ML algorithms.30−34 The utilization of deep learning-assisted sensors has proven effective in capturing a substantial amount of valuable motion data.35−37 This technology has been widely employed in several fields, including human–machine interaction, the Internet of Things (IoT), and even smart diagnostics.
This work proposes a novel gesture recognition system that integrates a triboelectric sensor array (GR-TENG), an A/D conversion module, and a deep learning-based recognition model, addressing these challenges. Unlike previous studies, we introduce conductive ink to fabricate flexible electrodes for TENGs, which improves both mechanical flexibility and ease of fabrication. Additionally, we design filtering and amplification circuits to optimize the TENG output signal, making it compatible with the Arduino A/D input system, an approach that overcomes the typical limitations of material selection and electrode sizing in traditional systems. Furthermore, while many previous systems rely on basic machine learning models for gesture recognition, our system employs convolutional neural networks (CNNs), which significantly improve the accuracy and efficiency of gesture classification. By combining data from multiple sensors, our system achieves over 95% accuracy in recognizing 12 different gestures, a substantial improvement over existing TENG-based gesture recognition systems. The system offers high flexibility, scalability, and portability, making it suitable for a wide range of wearable and interactive applications. Overall, this work provides a comprehensive solution to the challenges faced by traditional TENG-based systems and demonstrates the feasibility and potential of flexible, energy-harvesting-based gesture recognition in real-world applications.
2. Results and Discussion
2.1. Outline of the Gesture Recognition System
The gesture recognition system mainly consists of three parts: signal generation by the GR-TENG, information processing by the Arduino, and the IoT platform, as shown in Figure 1a–c. As shown in Figure 1a, the GR-TENG sensor operates in a single-electrode mode. During finger movements, the twisting motion generates a voltage signal from the GR-TENG, and based on this working mechanism, different finger movements can be distinguished. By combining the voltage information from the movements of all five fingers with deep learning (DL) technology, different gesture signals can be successfully distinguished. The proposed gesture recognition system shows great potential in the fields of smart homes, healthcare, and human–machine interaction. The workflow of the gesture recognition system is illustrated in Figure 1b. When a gesture is performed, the triboelectric effect of the GR-TENG produces a voltage signal, which is processed by a filtering and amplification circuit to form the input signal for the Arduino. The IoT platform performs preliminary processing of the standard-gesture data, extracts relevant features, and finally trains a model to establish an array of standard gestures. Similarly, during gesture recognition, features are extracted from the voltage response data, and a neural network algorithm is then employed for classification and recognition. The recognition results are subsequently analyzed, with findings potentially applicable to several fields, including smart homes, healthcare solutions, and human–computer interaction. This system combines the self-powering functionality of the GR-TENG with the low power consumption typical of Arduino devices.
Figure 1.
(a) Conceptual diagram of the gesture recognition system and its potential applications in smart homes, healthcare, and human–machine interaction. (b) Structure design of GR-TENG and the logic flow of Arduino processing with the input signal. (c) IoT platform architecture for realizing the gesture recognition. (d) Flow diagram of gesture recognition.
2.2. Mechanism and Its Performance
The GR-TENG sensor operates in a single-electrode mode; its working principle is illustrated in Figure 2a. PEDOT:PSS serves as the single electrode and contacts the negative triboelectric layer (Ecoflex), connecting to the ground via a wire, while human skin acts as the positive triboelectric layer. Initially, when the Ecoflex and skin come into contact, triboelectric charges of equal magnitude and opposite polarity are generated on both surfaces due to the contact electrification effect. Since Ecoflex has a stronger tendency to gain electrons than skin, it carries a negative charge on its surface. During movement, the force exerted by the body causes the Ecoflex and skin to separate, creating a potential difference between the two charged surfaces. To balance this potential difference, positive charges gradually transfer from the ground to the electrode. When the Ecoflex and skin reach their maximum separation distance, the charge transfer in the external circuit stops, and the potential difference between the triboelectric layers reaches its maximum. Subsequently, as the distance between the skin and Ecoflex decreases during bodily movement, the positive charges on the electrode return to the ground until the triboelectric layers are fully in contact. During one contact-separation cycle, the flexible electrode exchanges charge with the ground twice, resulting in an alternating current (AC) output.
Figure 2.
Physical images of the GR-TENG motion process and output characteristics of GR-TENG. (a) TENG single-electrode mode principle. (b–e) Voltage output dependence on key parameters, including sensor area, separation frequencies, and durability test.
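The contact-separation electrostatics described above can be sketched numerically. The snippet below applies the standard parallel-plate approximation for the open-circuit voltage of a contact-separation TENG, V_oc = σx/ε0; the surface charge density and gap profile are illustrative assumptions, not values measured in this work.

```python
# Parallel-plate sketch of the single-electrode contact-separation cycle:
# V_oc = sigma * x / epsilon_0. sigma and the gap profile are assumed,
# illustrative values, not measurements from this work.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def teng_voc(sigma, gap_m):
    """Open-circuit voltage (V) for surface charge density sigma (C/m^2)
    at a separation gap of gap_m (m)."""
    return sigma * gap_m / EPS0

# One contact-separation cycle: the gap grows from 0 to 2 mm and closes again.
gaps_mm = [0.0, 0.5, 1.0, 1.5, 2.0, 1.5, 1.0, 0.5, 0.0]
sigma = 1.5e-7  # assumed effective charge density, C/m^2
voltages = [teng_voc(sigma, g * 1e-3) for g in gaps_mm]
# V_oc rises with separation, peaks at the maximum gap, and falls back to
# zero, matching the charge flow described for one AC output cycle.
```

With these assumed values the peak V_oc lands in the tens of volts, the same order of magnitude as the open-circuit voltages reported in Table 1.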
During the fabrication of GR-TENG, the concentration of the dip-coating solution PEDOT:PSS plays a critical role in determining the sensor’s output performance. To systematically examine the influence of the PEDOT:PSS concentration on the open-circuit voltage of GR-TENG, five distinct concentrations were selected, corresponding to mass fractions of 0.185, 0.375, 0.850, 1.800, and 2.775% (refer to Figure S1 for physical image). During performance evaluation, nylon was employed as the positive triboelectric material, with both the nylon and silicone film precisely cut to dimensions of 2 × 2 cm. The experiments were conducted at room temperature using a linear motor with a stroke length of 20 cm and an operating frequency of 2 Hz. As presented in Table 1, the open-circuit voltage peaked at 35 V when the PEDOT:PSS concentration was 0.375%. However, a further increase in concentration resulted in a decline in voltage, which eventually stabilized at 23 V. The observed trend can be attributed to the variation in the overall capacitance of GR-TENG, which exhibited a gradual increase with increasing PEDOT:PSS concentration. Beyond a critical concentration threshold, the capacitance plateaued at a stable value. Similarly, the amount of transferred charge increased with the PEDOT:PSS concentration before reaching saturation. This phenomenon arises from the constant quantity of triboelectric charge generated during each contact-separation cycle. At lower PEDOT:PSS concentrations, limited material adhesion results in the formation of fewer conductive pathways, thereby diminishing charge transport efficiency. Conversely, at higher concentrations, the saturation of conductive pathways restricts further enhancement of electrical conductivity, ultimately constraining the device’s performance.
Table 1. Relationship between PEDOT:PSS Concentration and Peak Voltage.
| PEDOT:PSS mass fraction (%) | peak voltage (V) |
|---|---|
| 0.185 | 13 |
| 0.375 | 35 |
| 0.850 | 28 |
| 1.800 | 23 |
| 2.775 | 23 |
To determine an appropriate sensor size for gesture recognition, we systematically tested the electrical output performance of the GR-TENG. First, the frequency response of the GR-TENG was tested, with the results shown in Figure 2b (to comply with the Arduino's 0–3.3 V input range, only half of the voltage values were retained during testing). The tests indicate that the output voltage of the triboelectric sensor increases with frequency, with the highest tested frequency reaching 3 Hz, which covers the frequency range of human gesture movements. Different motion frequencies produce different voltage amplitudes, confirming that the sensor can provide varying voltage outputs based on motion frequency and demonstrating its potential for gesture recognition. Moreover, the electrical output performance of GR-TENGs with different areas was systematically tested at a working distance of 2 cm and a working frequency of 1 Hz. The results show that the output voltage increases significantly with the GR-TENG area. The time intervals to reach peak values remain consistent at the same frequency. Comparison with the frequency data indicates that higher frequencies result in shorter times to reach voltage peaks. These results further validate the feasibility of GR-TENG for gesture recognition applications. The energy harvesting properties of the GR-TENG are shown in Figure S2.
An Ecoflex film sensor with dimensions of 1 cm × 1 cm was selected and attached along the back of the finger joint. In this experiment, the thickness of the Ecoflex film was 1 mm. In theory, a thicker film would facilitate the transfer of a greater charge and thus produce a stronger response; in practice, however, the additional thickness compromises the sensor's flexibility, which in turn degrades its triboelectric performance.
2.4. Power Management
The GR-TENG array is designed to generate AC voltage signals in response to finger movements. These gesture voltage signals are input to the Arduino through the filtering and amplification circuit. The computer reads the gesture voltage signals via a USB interface. Subsequently, the received gesture voltage signals are processed using Python for peak detection and pattern recognition. It is crucial to acknowledge that TENG devices possess elevated internal resistance, manifesting in their output characteristics as high voltage and low current. The analog-to-digital converter (ADC) pins within the microcontroller unit (MCU) have an input voltage range of 0–3.3 V, and the impedance of these pins is considerably lower than that of the TENG’s internal resistance. If the output signal of the TENG is directly connected to the ADC pin, the low power output of the TENG will result in the ADC pin remaining in an undefined state. As shown in Figure 3a, changes in the input signal cannot cause a change in the pin voltage level, thereby impeding effective sensing information acquisition. To ensure that the voltage signal collected by the sensor matches the logic levels of components in the circuit, it is necessary to design a corresponding signal processing module to achieve signal modulation and processing functions.
Figure 3.
(a) Arduino floating voltage. (b) Voc outputs of the GR-TENG after processing by the charge amplifier. (c) Filter amplifier circuit for signal processing. (d, e) Voltage responses of the GR-TENG under different finger-bending degrees and bending speeds.
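The peak-detection step mentioned above can be illustrated with a minimal sketch. The detector below is a simple threshold-plus-local-maximum routine standing in for the Python processing on the host PC (the actual pipeline is not published with the article); the threshold value and the synthetic trace are assumptions for illustration only.

```python
import numpy as np

# Minimal threshold-based peak detector standing in for the host-side
# Python peak-detection step described in the text. The 0.3 V threshold
# and the synthetic trace below are illustrative assumptions.
def detect_peaks(signal, threshold=0.3):
    """Return indices of local maxima exceeding `threshold` (volts)."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]:
            peaks.append(i)
    return peaks

# Synthetic trace mimicking two finger bends (amplitudes in volts).
t = np.linspace(0, 2, 200)
trace = 0.77 * np.exp(-((t - 0.5) ** 2) / 0.005) + 0.5 * np.exp(-((t - 1.5) ** 2) / 0.005)
peaks = detect_peaks(trace)  # one index per bend
```

A real pipeline would add debouncing and a refractory window between peaks, but the core decision, a local maximum above a voltage threshold, is the same.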
Part 1 is the filter circuit. The signals generated by human gesture movements lie in the extremely low-frequency range, typically between 0.5 and 3 Hz. The MAX291 is an eighth-order low-pass switched-capacitor filter with a widely adjustable cutoff frequency. It filters out high-frequency noise or interference, allowing only low-frequency components to pass, and is commonly used in electronic circuits to improve signal quality. Additionally, the output voltage amplitude of the chip is determined by its supply voltage: with a ±5 V supply, the output amplitude is ±4 V. We therefore employed the MAX291 to achieve voltage signal reduction and filtering, as shown in Figure 3c. After the reduction and filtering process, the input signal undergoes varying degrees of amplitude attenuation, and the output signal retains its AC characteristics. To achieve better signal transmission and display on the PC, a modulation circuit is needed to further process the reduced signal.
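As a software analogue of this filtering stage, the sketch below implements a one-pole digital low-pass filter; the real MAX291 is an eighth-order switched-capacitor filter, so this single pole rolls off far more gently and is only a conceptual illustration. The sampling rate, cutoff, and 50 Hz interference are assumed values.

```python
import math

# One-pole digital low-pass filter as a software analogue of the MAX291
# stage (the real chip is eighth-order, so its roll-off is much steeper).
# Sample rate, cutoff, and interference frequency are assumed values.
def lowpass(samples, fc_hz, fs_hz):
    """First-order IIR low-pass with cutoff fc_hz at sample rate fs_hz."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * fc_hz / fs_hz)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)  # exponential smoothing toward the input
        out.append(y)
    return out

# A 2 Hz gesture component plus 50 Hz mains interference, sampled at 1 kHz.
fs = 1000
sig = [math.sin(2 * math.pi * 2 * n / fs) + 0.5 * math.sin(2 * math.pi * 50 * n / fs)
       for n in range(2 * fs)]
clean = lowpass(sig, fc_hz=5.0, fs_hz=fs)
# The 2 Hz gesture band passes nearly unchanged, while the 50 Hz component
# is attenuated by roughly a factor of ten for this single pole.
```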
Part 2 describes the noninverting amplifier circuit. The input signal V1 is amplified by an operational amplifier (op-amp). In the linear operating region, both input terminals behave as ideal open circuits, meaning that no current flows into the inverting or noninverting input. The output current therefore flows entirely through resistors R1 and R2, which form a voltage divider from the output V0 to ground. Because the two input terminals sit at the same potential (virtual short), the divider voltage across R1 equals the input signal V1, which fixes the ratio between V0 and V1. The output voltage V0 is expressed as
V0 = (1 + R2/R1) × V1 (1)
After rectifying the output voltage signal of the GR-TENG circuit, as shown in Figure 3b, the processed signal is then passed through a filtering and amplification circuit. This allows the original voltage, which exceeds 3 V, to be controlled within the input range of the Arduino. The GR-TENG is positioned along the index finger joint for testing. Measurements are taken starting with a bent finger as the initial action, followed by gradually straightening the finger, and recording voltage changes repeatedly, as depicted in Figure 3d,e. As the finger bends further, the voltage increases progressively. When the finger bends at different speeds, the maximum voltage achieved remains essentially the same, but the required time and the duration of the waveform differ. As the degree of bending increases, the peak voltage also rises. The filtered voltage can reach up to 0.77 V. The bending speed also affects the voltage waveform; the slower the bending, the longer the voltage duration at the same level of bending. The physical photograph of the circuit is shown in Figure S3.
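The mapping from the conditioned voltage to the microcontroller reading can be sketched as follows. The 10-bit resolution and 3.3 V reference are assumptions based on common Arduino boards; the article itself specifies only the 0–3.3 V input range.

```python
# Mapping a conditioned GR-TENG voltage to an ADC code. The 10-bit depth
# and 3.3 V reference are assumptions based on common Arduino boards; the
# article states only the 0-3.3 V input range.
ADC_BITS = 10
V_REF = 3.3
FULL_SCALE = 2 ** ADC_BITS - 1  # 1023 for a 10-bit converter

def volts_to_counts(v):
    """Clamp v to [0, V_REF] and quantize it to the 10-bit ADC code."""
    v = min(max(v, 0.0), V_REF)
    return round(v / V_REF * FULL_SCALE)

def counts_to_volts(code):
    """Voltage a host-side script would reconstruct from an ADC code."""
    return code / FULL_SCALE * V_REF
```

For example, the 0.77 V filtered peak mentioned above quantizes to a mid-range code well inside the converter's span, while any voltage outside 0–3.3 V simply saturates instead of leaving the pin in an undefined state.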
2.5. Gesture Recognition Based on Deep Learning
Preprocessing the data can effectively improve the preliminary understanding of the data. First, we selected 12 common gestures as the subject of our study (designated G1 to G12); Figure 4a shows images of these 12 representative gestures, and Figure 4b shows the corresponding sensor voltage waveforms. Based on the raw data, we analyzed the similarity and correlation of the voltage signals for these 12 gestures. Each gesture in the data set has 100 data samples. For the correlation analysis, Figure 4c provides a matrix summarizing the correlation coefficients between each gesture and the other 11 gestures. The average correlation coefficient between samples of the same gesture is greater than 0.3 (the high-similarity threshold), indicating a high similarity between different samples of the same gesture. In addition, the average correlation coefficient between gesture 7 and gesture 11 is also greater than 0.3, indicating a high signal similarity between these two gestures. Thus, for the complex task of gesture recognition, there is an urgent need for a new analysis method. DL, a powerful technique in artificial intelligence, offers significant feasibility in achieving complex data analysis by extracting comprehensive features for precise analysis and accurate recognition.
Figure 4.
(a) Images of the 12 representative gestures. (b) Sensor voltage waveforms corresponding to the 12 gestures. (c) Correlation coefficient matrix of the signals of the 12 gestures.
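The correlation analysis behind Figure 4c can be reproduced in outline with NumPy's `corrcoef`. The gesture traces below are synthetic stand-ins for the recorded data set, constructed only to show how similar and dissimilar signals fall on either side of the 0.3 threshold used in the text.

```python
import numpy as np

# Outline of the Pearson-correlation analysis behind Figure 4c, using
# synthetic stand-ins for the recorded gesture traces. Two similar traces
# (mimicking, e.g., gestures 7 and 11) land above the 0.3 threshold used
# in the text, while a dissimilar one lands below it.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
g_a = np.sin(2 * np.pi * 2 * t) + 0.1 * rng.standard_normal(100)  # gesture A
g_b = np.sin(2 * np.pi * 2 * t) + 0.1 * rng.standard_normal(100)  # similar to A
g_c = np.cos(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(100)  # dissimilar

corr = np.corrcoef(np.vstack([g_a, g_b, g_c]))  # 3x3 correlation matrix
```

In the full analysis each of the 12 gestures contributes 100 such traces, and the matrix entries are averages over sample pairs rather than single-pair coefficients.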
DL is a subset of ML. It provides an effective method for modeling signal sequences by training end-to-end neural networks that automatically learn representative features from raw collected signals, and it has achieved significant results in image, video, and speech analysis. Hidden Markov models (HMM), recurrent neural networks (RNN), and long short-term memory (LSTM) networks are suitable for learning tasks involving complex and large data samples; compared with these, one-dimensional convolutional neural networks (1D-CNN) provide a simple and feasible solution. In this work, we implemented a 1D-CNN for signal model extraction and data analysis. As shown in Figure 5a–c, the 1D-CNN with a kernel size of 5, 32 filters, and 4 convolutional layers shows the best accuracy. The CNN consists of four convolutional layers, four pooling layers, and one fully connected layer (Figure 5d). The convolutional layers extract features from the signals for better data analysis, and the fully connected layer improves the clustering performance of the proposed CNN structure. Five channels of gesture signals are used as the unsegmented input, and pattern recognition is achieved by the optimized four-layer CNN architecture. We set the number of epochs to 25 to avoid overfitting. Twelve different gestures were recognized with over 95% accuracy using the triboelectric signals collected from the Arduino (Figure 5e). This indicates that the trained CNN model has a high predictive ability for the targeted gestures and can be considered a high-performing solution for gesture recognition.
Figure 5.
Optimization and performance of the deep learning model. (a–c) Optimization of CNN structure parameters based on accuracy performance, including kernel size, the number of filters, and the number of convolutional layers. (d) Final structure of the CNN after optimization. (e) Confusion matrix of gesture recognition test results after deep learning.
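A quick way to check the optimized architecture is to trace the feature-map length through the four convolution-pooling stages. The helper below assumes 'valid' (unpadded) convolutions with stride 1, max-pooling by 2, and a 200-sample input window; none of these settings are stated in the article, so they are illustrative assumptions.

```python
# Tracing feature-map lengths through the optimized 1D-CNN (four stages of
# kernel-5 convolution, each followed by pooling). 'Valid' padding, stride
# 1, and pooling by 2 are assumptions; the article does not state them.
def conv1d_len(n, kernel, stride=1, padding=0):
    """Output length of a 1D convolution over an n-sample input."""
    return (n + 2 * padding - kernel) // stride + 1

def pool1d_len(n, window=2):
    """Output length after non-overlapping pooling with the given window."""
    return n // window

def cnn_output_len(n, layers=4, kernel=5):
    """Length of the feature map fed to the fully connected layer."""
    for _ in range(layers):
        n = conv1d_len(n, kernel)
        n = pool1d_len(n)
    return n
```

Under these assumptions a 200-sample window per channel shrinks stage by stage (200 → 196 → 98 → 94 → 47 → 43 → 21 → 17 → 8) before the fully connected layer, which is why the unsegmented five-channel input remains tractable for classification.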
3. Conclusions
In light of the rapid advancement of innovative wearable electronic devices, there is an urgent need for sustainable power sources. A flexible and low-cost self-powered sensor (GR-TENG) has been successfully developed. To meet the performance requirements for multigesture recognition, voltage signals are employed as the sensing output. Substituting PEDOT:PSS for copper as the electrode enhances stretchability, rendering the device more compatible with human gesture movements. A filtering and amplifying circuit processes the voltage signal, facilitating more precise differentiation of gesture signals. The voltage is kept within 0–3.3 V for the Arduino-integrated analog-to-digital converter, thus enhancing readability. The principal contribution of this study is the integration of TENG technology with deep learning for gesture recognition: the use of DL enables the recognition of 12 different gestures with an accuracy of over 95%. In conclusion, the developed gesture recognition model can be applied in several contexts, including smart homes, health monitoring, and human–computer interaction, and this study makes a significant contribution to the field of wearable sensor devices. Further work is required to optimize the accuracy of the sensor for more complex gesture recognition and to investigate the performance of deep learning algorithms in more detail.
4. Experimental and Methods
4.1. Materials and Fabrication of GR-TENG
A two-component liquid Ecoflex solution was prepared by combining solution A and solution B in a 1:1 weight ratio. Subsequently, the mixture was transferred to a Petri dish and manually stirred for 3 min to ensure thorough blending. Subsequently, the mixture was subjected to a 1 h drying process at 80 °C in an electric blast drying oven to allow for the full curing of the Ecoflex. The prepared Ecoflex was then employed in the fabrication of the final GR-TENG.
To satisfy the durability and stretchability requirements of the GR-TENG, a flexible and stretchable electrode was fabricated by a dip-coating method for use in a conventional vertical contact-separation mode. An appropriate quantity of deionized water was added to the PEDOT:PSS solution (poly(3,4-ethylenedioxythiophene) blended with poly(styrene sulfonate)) to prepare the dip-coating solution. The adhesive bandage was then immersed in the dipping solution in a Petri dish for 20 min to ensure complete absorption. The soaked bandage was subsequently placed in an electric blast drying oven and dried for 30 min until completely dry. Once drying was complete, the PEDOT:PSS had adhered to the bandage, thereby functioning as an electrode. This process not only provided electrical conductivity but also fully utilized the flexibility and stretchability of the bandage, optimizing the material's potential.
4.2. Characterizations and Measurements
A filtering and amplification circuit was designed to condition the output signals of the TENG. An Arduino microcontroller was employed to process the multichannel input signals and communicate with the IoT platform, serving as the computing unit. A linear motor (LinMot E1100) was used to control the movement distance and frequency of the GR-TENG. The output performance of the GR-TENG was measured using an electrometer (Keithley 6514).
Acknowledgments
This work was supported by Major Projects of Science and Technology in Tianjin (No. 18ZXJMTG00020).
Data Availability Statement
The data generated and/or analyzed during the current study are not publicly available for legal/ethical reasons but are available from the corresponding author on reasonable request.
Supporting Information Available
The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsomega.4c10150.
(Figure S1) Physical image of the PEDOT:PSS-based conductive bandage, which was prepared by immersing an adhesive bandage in a PEDOT:PSS solution for 20 min, followed by drying in an electric blast drying oven for 30 min, ensuring that PEDOT:PSS adhered to the bandage as an electrode; (Figure S2) energy storage capability of GR-TENG, highlighting its self-powering characteristics; and (Figure S3) physical image of the four-channel filter amplifier circuit (PDF)
The authors declare no competing financial interest.
References
- Alshehri F.; Muhammad G. A Comprehensive Survey of the Internet of Things (IoT) and AI-Based Smart Healthcare. Ieee Access 2021, 9, 3660–3678. 10.1109/ACCESS.2020.3047960. [DOI] [Google Scholar]
- Choi W.; Kim J.; Lee S.; Park E. Smart home and internet of things: A bibliometric study. J. Clean. Prod. 2021, 301, 126908 10.1016/j.jclepro.2021.126908. [DOI] [Google Scholar]
- Hu Z.; Tang H.; Sun L. Design and Implementation of Intelligent Vehicle Control System Based on Internet of Things and Intelligent Transportation. Sci. Prog. 2022, 2022, 1. 10.1155/2022/6201367. [DOI] [Google Scholar]
- Van J. C. F.; Tham P. E.; Lim H. R.; Khoo K. S.; Chang J. S.; Show P. L. Integration of Internet-of-Things as sustainable smart farming technology for the rearing of black soldier fly to mitigate food waste. J. Taiwan Inst. Chem. Eng. 2022, 137, 104235 10.1016/j.jtice.2022.104235. [DOI] [Google Scholar]
- Huang J.; Zhao M.; Cai Y.; Zimniewska M.; Li D.; Wei Q. A Dual-Mode Wearable Sensor Based on Bacterial Cellulose Reinforced Hydrogels for Highly Sensitive Strain/Pressure Sensing. Adv. Electron. Mater. 2020, 6 (1), 1900934 10.1002/aelm.201900934. [DOI] [Google Scholar]
- Kashyap V.; Yin J. Y.; Xiao X.; Chen J. Bioinspired nanomaterials for wearable sensing and human-machine interfacing. Nano Research 2024, 17 (2), 445–461. 10.1007/s12274-023-5725-8. [DOI] [Google Scholar]
- Matthies D. J. C.; Woodall A.; Urban B.; Assoc Comp M.. Prototyping Smart Eyewear with Capacitive Sensing for Facial and Head Gesture Detection. ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp)/ACM International Symposium on Wearable Computers (ISWC); ubicomp; 2021, 476–480. [Google Scholar]
- Wu Q.; Qiao Y. C.; Guo R.; Naveed S.; Hirtz T.; Li X. S.; Fu Y. X.; Wei Y. H.; Deng G.; Yang Y.; Wu X. M.; Ren T. L. Triode-Mimicking Graphene Pressure Sensor with Positive Resistance Variation for Physiology and Motion Monitoring. ACS Nano 2020, 14 (8), 10104–10114. 10.1021/acsnano.0c03294. [DOI] [PubMed] [Google Scholar]
- Zhao B.; Dong Z.; Cong H. A wearable and fully-textile capacitive sensor based on flat-knitted spacing fabric for human motions detection. Sens. Actuators A. 2022, 340, 113558 10.1016/j.sna.2022.113558. [DOI] [Google Scholar]
- Pan Y.; Zhu Y.; Li Y.; Liu H.; Cong Y.; Li Q.; Wu M. Homonuclear transition-metal dimers embedded monolayer C2N as promising anchoring and electrocatalytic materials for lithium-sulfur battery: First-principles calculations. Appl. Surf. Sci. 2023, 610, 155507 10.1016/j.apsusc.2022.155507. [DOI] [Google Scholar]
- Xing Q.; Zhang M.; Fu Y.; Wang K. Transfer learning to estimate lithium-ion battery state of health with electrochemical impedance spectroscopy. J. Energy Storage 2025, 110, 115345 10.1016/j.est.2025.115345. [DOI] [Google Scholar]
- Guo H.; Pu X.; Chen J.; Meng Y.; Yeh M. H.; Liu G.; Tang Q.; Chen B.; Liu D.; Qi S.; Wu C.; Hu C.; Wang J.; Wang Z. L. A highly sensitive, self-powered triboelectric auditory sensor for social robotics and hearing aids. Sci. Robot. 2018, 3 (20), eaat2516 10.1126/scirobotics.aat2516. [DOI] [PubMed] [Google Scholar]
- Liu Y.; Fu Q.; Mo J.; Lu Y.; Cai C.; Luo B.; Nie S. Chemically tailored molecular surface modification of cellulose nanofibrils for manipulating the charge density of triboelectric nanogenerators. Nano Energy 2021, 89, 106369 10.1016/j.nanoen.2021.106369. [DOI] [Google Scholar]
- Wang H.; Xu L.; Bai Y.; Wang Z. L. Pumping up the charge density of a triboelectric nanogenerator by charge-shuttling. Nat. Commun. 2020, 11 (1), 4203. 10.1038/s41467-020-17891-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zi Y.; Niu S.; Wang J.; Wen Z.; Tang W.; Wang Z. L. Standards and figure-of-merits for quantifying the performance of triboelectric nanogenerators. Nat. Commun. 2015, 6, 8376. 10.1038/ncomms9376. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Xiong Y.; Luo L.; Yang J.; Han J.; Liu Y.; Jiao H.; Wu S.; Cheng L.; Feng Z.; Sun J.; Wang Z. L.; Sun Q. Scalable spinning, winding, and knitting graphene textile TENG for energy harvesting and human motion recognition. Nano Energy 2023, 107, 108137 10.1016/j.nanoen.2022.108137. [DOI] [Google Scholar]
- Cheng M.; Liu X.; Li Z.; Zhao Y.; Miao X.; Yang H.; Jiang T.; Yu A.; Zhai J. Multiple textile triboelectric nanogenerators based on UV-protective, radiative cooling, and antibacterial composite yarns. Chem. Eng. J. 2023, 468, 143800 10.1016/j.cej.2023.143800. [DOI] [Google Scholar]
- Yang P.; Liu Z.; Qin S.; Hu J.; Yuan S.; Wang Z. L.; Chen X. A wearable triboelectric impedance tomography system for noninvasive and dynamic imaging of biological tissues. Sci. Adv. 2024, 10 (51), eadr9139 10.1126/sciadv.adr9139. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hwang B. U.; Lee J. H.; Trung T. Q.; Roh E.; Kim D. I.; Kim S. W.; Lee N. E. Transparent Stretchable Self-Powered Patchable Sensor Platform with Ultrasensitive Recognition of Human Activities. ACS Nano 2015, 9 (9), 8801–8810. 10.1021/acsnano.5b01835. [DOI] [PubMed] [Google Scholar]
- Jin T.; Sun Z.; Li L.; Zhang Q.; Zhu M.; Zhang Z.; Yuan G.; Chen T.; Tian Y.; Hou X.; Lee C. Triboelectric nanogenerator sensors for soft robotics aiming at digital twin applications. Nat. Commun. 2020, 11 (1), 5381. 10.1038/s41467-020-19059-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Qin S.; Yang P.; Liu Z.; Hu J.; Li N.; Ding L.; Chen X. Triboelectric sensor with ultra-wide linear range based on water-containing elastomer and ion-rich interface. Nat. Commun. 2024, 15 (1), 1. 10.1038/s41467-024-54980-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Avila F. R.; Carter R. E.; McLeod C. J.; Bruce C. J.; Giardi D.; Guliyeva G.; Forte A. J. Accuracy of Wearable Sensor Technology in Hand Goniometry: A Systematic Review. Hand 2023, 18 (2), 340–348. 10.1177/15589447211014606. [DOI] [PMC free article] [PubMed] [Google Scholar]
- He T.; Sun Z.; Shi Q.; Zhu M.; Anaya D. V.; Xu M.; Chen T.; Yuce M. R.; Thean A. V.-Y.; Lee C. Self-powered glove-based intuitive interface for diversified control applications in real/cyber space. Nano Energy 2019, 58, 641–651. 10.1016/j.nanoen.2019.01.091. [DOI] [Google Scholar]
- Liu Q.; Qian G.; Meng W.; Ai Q.; Yin C.; Fang Z. A new IMMU-based data glove for hand motion capture with optimized sensor layout. International Journal of Intelligent Robotics and Applications 2019, 3 (1), 19–32. 10.1007/s41315-019-00085-4. [DOI] [Google Scholar]
- Wu X.; Pan Y.; Li J.; Fan T. Self-powered intelligent tactile glove and mute language gesture broadcast system, with an acquisition and transmission assembly comprising multiple voltage-division circuits and a single-chip microcomputer. Patent CN110764621-A.
- Dong B.; Yang Y.; Shi Q.; Xu S.; Sun Z.; Zhu S.; Zhang Z.; Kwong D.-L.; Zhou G.; Ang K.-W.; Lee C. Wearable Triboelectric-Human-Machine Interface (THMI) Using Robust Nanophotonic Readout. ACS Nano 2020, 14 (7), 8915–8930. 10.1021/acsnano.0c03728. [DOI] [PubMed] [Google Scholar]
- Gao S.; He T.; Zhang Z.; Ao H.; Jiang H.; Lee C. A Motion Capturing and Energy Harvesting Hybridized Lower-Limb System for Rehabilitation and Sports Applications. Adv. Sci. 2021, 8 (20), 2101834 10.1002/advs.202101834. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ghour S. A. Soft Minimal Soft Sets and Soft Prehomogeneity in Soft Topological Spaces. International Journal of Fuzzy Logic and Intelligent Systems 2021, 21 (3), 269–279. 10.5391/IJFIS.2021.21.3.269. [DOI] [Google Scholar]
- He T.; Guo X.; Lee C. Flourishing energy harvesters for future body sensor network: from single to multiple energy sources. Iscience 2021, 24 (1), 101934 10.1016/j.isci.2020.101934. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dan X.; Cao X.; Wang Y.; Yang J.; Wang Z. L.; Sun Q. A Stereoscopically Structured Triboelectric Nanogenerator for Bending Sensors and Hierarchical Interactive Systems. ACS Appl. Nano Mater. 2023, 3590. 10.1021/acsanm.2c05360. [DOI] [Google Scholar]
- Wen F.; Sun Z.; He T.; Shi Q.; Zhu M.; Zhang Z.; Li L.; Zhang T.; Lee C. Machine Learning Glove Using Self-Powered Conductive Superhydrophobic Triboelectric Textile for Gesture Recognition in VR/AR Applications. Adv. Sci. 2020, 7 (14), 2000261 10.1002/advs.202000261. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zhu J.; Ji S.; Yu J.; Shao H.; Wen H.; Zhang H.; Xia Z.; Zhang Z.; Lee C. Machine learning-augmented wearable triboelectric human-machine interface in motion identification and virtual reality. Nano Energy 2022, 103, 107766 10.1016/j.nanoen.2022.107766. [DOI] [Google Scholar]
- Zhu M.; Sun Z.; Zhang Z.; Shi Q.; He T.; Liu H.; Chen T.; Lee C. Haptic-feedback smart glove as a creative human-machine interface (HMI) for virtual/augmented reality applications. Sci. Adv. 2020, 6 (19), eaaz8693 10.1126/sciadv.aaz8693. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pan Y.; Song J.; Wang K. Research Progress and Prospects of Liquid-Liquid Triboelectric Nanogenerators: Mechanisms, Applications, and Future Challenges. ACS Appl. Electron. Mater. 2025, 7 (1), 1–12. 10.1021/acsaelm.4c01729. [DOI] [Google Scholar]
- Pandey P.; Seo M. K.; Shin K. H.; Lee J.; Sohn J. I. In-situ cured gel polymer/ecoflex hierarchical structure-based stretchable and robust TENG for intelligent touch perception and biometric recognition. Chem. Eng. J. 2024, 499, 156650 10.1016/j.cej.2024.156650. [DOI] [Google Scholar]
- Yan C.; Jiang S.; Wang Y.; Deng J.; Wang X.; Chen Z.; Chen T.; Huang H.; Wu H. A wearable sign language translation device utilizing silicone-hydrogel hybrid triboelectric sensor arrays and machine learning. Nano Energy 2024, 133, 110425 10.1016/j.nanoen.2024.110425. [DOI] [Google Scholar]
- Li Y.; Yang X.; Ding Y.; Zhang H.; Cheng Y.; Li X.; Sun J.; Liu Y.; Li Y.; Fan D. A Wireless Health Monitoring System Accomplishing Bimodal Decoupling Based on an "IS"-Shaped Multifunctional Conductive Hydrogel. Small 2025, 2411046 10.1002/smll.202411046. [DOI] [PubMed] [Google Scholar]
Data Availability Statement
The data generated and/or analyzed during the current study are not publicly available for legal/ethical reasons but are available from the corresponding author on reasonable request.