Abstract

Electronic skins (e-skins) have seen intense research and rapid development in the past two decades. To mimic the capabilities of human skin, a multitude of flexible/stretchable sensors that detect physiological and environmental signals have been designed and integrated into functional systems. Recently, researchers have increasingly deployed machine learning and other artificial intelligence (AI) technologies to mimic the human neural system for the processing and analysis of sensory data collected by e-skins. Integrating AI has the potential to enable advanced applications in robotics, healthcare, and human–machine interfaces but also presents challenges such as data diversity and AI model robustness. In this review, we first summarize the functions and features of e-skins, followed by feature extraction of sensory data and different AI models. Next, we discuss the utilization of AI in the design of e-skin sensors and address the key topic of AI implementation in data processing and analysis of e-skins to accomplish a range of different tasks. Subsequently, we explore hardware-layer in-skin intelligence before concluding with an analysis of the challenges and opportunities in the various aspects of AI-enabled e-skins.
1. Introduction
Human skin is endowed with distributed sensory receptors and peripheral somatosensory neurons for fast, asynchronous signal processing,1−5 which enables human cognition and mobile agility. Electronic skins (e-skins), as functional mimics of human skin, have been developed to possess and even surpass the capabilities of human skin.6−8 The key features of e-skins are their sensory functions combined with mechanical flexibility or stretchability. E-skins hold promise to invigorate robotics, prosthetics, medicine, healthcare, and human–machine interfaces.9−12 In robotics, both exteroceptive and proprioceptive e-skins are highly desired. These e-skins provide tactile information and motor feedback, enabling precise control and interaction with the environment. Such capabilities are crucial for developing intelligent robots that can handle complex real-world applications such as surgical operations performed by medical robots (Figure 1). In prosthetics, e-skins can aid amputees in restoring sensory functions through prosthetic limbs that are integrated with neural interfaces.13−15 Additionally, prosthetic limbs can be enhanced with better motor control through proprioceptive feedback from e-skins. In the fields of medicine and healthcare, e-skins have the potential to revolutionize several areas. These include enabling home-based rehabilitation and chronic disease self-management systems through the monitoring of physical signs. They also facilitate surgical procedures performed by medical robots, which are trained using data on patients’ physical signs and doctors’ hand movements. Moreover, e-skins are useful in fitness routines and sports training by monitoring movements and electrophysiological signals. In human–machine interfaces, e-skins can precisely capture facial expressions and body gestures in real time for realistic avatar-based experiences in meetings, games, and remote cooperation.
Figure 1.
Potential of AI in enabling e-skin designs spanning materials, sensors, sensing performance, and functionality (e.g., self-healing, biodegradability). AI can also enable e-skin applications, such as in surgical operations performed by medical robots, through the analysis of sensory data, and the output of feedback information after decision-making.
Early reviews of e-skins have thoroughly examined aspects such as materials, sensing mechanisms, device design, performance, and applications.6,7,9 In recent years, the rapid development of artificial intelligence (AI) has greatly empowered e-skins in various ways, including data processing and analysis, sensor design, and intelligent applications. Additionally, e-skins can exhibit a certain degree of intelligence through specially designed mechanisms. This ongoing trend of AI facilitating advancements in e-skins is increasingly evident. However, although certain aspects, such as the AI models used in applications, have received attention in the latest reviews,16,17 a comprehensive summary and discussion of this trend are still lacking.
While significant progress has been made in e-skin development in recent years, there are still challenges to overcome. On one hand, the design and material research for sensors heavily rely on the expertise and labor of researchers, which hinders the efficiency of e-skin iteration. On the other hand, e-skins collect large amounts of real-world data in the form of complex, nonstationary, variable, and often noisy time series. These characteristics make processing the data and revealing the underlying meaningful information labor- and time-intensive. Moreover, e-skins can potentially integrate multiple sensing modalities (e.g., pressure, temperature, and strain), adding an additional layer of complexity to the data processing challenge.
AI technologies have experienced unprecedented growth this decade,18,19 driven by advances in computational resources, vast amounts of available data, and innovative AI models. Given its powerful capabilities in interpretation, learning, and decision-making,20−23 AI can significantly contribute to overcoming the developmental challenges faced by e-skins at both software and hardware layers, as evidenced by a boom of AI applications in e-skins (Figure 2). At the software layer, machine learning (ML) is suitable for augmenting, analyzing, and learning large data sets to infer the trends and uncover the relationships in material, device structure, and performance. Therefore, e-skin design iteration can be streamlined, and the characterization workload can be reduced. ML is also well-suited for analyzing sensory data collected from e-skins in various applications to gain valuable insights, make predictions, and enable intelligent decision-making. At the hardware layer, by integrating a computing function into e-skin hardware, it is possible to achieve in-skin intelligence through localized data processing. This can help to reduce power consumption and data latency that result from transmitting data to and processing them in centralized processing nodes.
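As a minimal illustration of the software-layer idea, time-series windows from an e-skin sensor are often reduced to simple statistical features before being passed to an ML model. The sketch below is purely illustrative: the signal values and the particular feature set are hypothetical choices, not taken from any work cited in this review.

```python
import numpy as np

def extract_features(window):
    """Reduce one sensor window to simple statistical features."""
    w = np.asarray(window, dtype=float)
    return np.array([
        w.mean(),                  # average signal level
        w.std(),                   # variability
        np.abs(np.diff(w)).max(),  # largest step between samples
        w.max() - w.min(),         # peak-to-peak amplitude
    ])

# Hypothetical pressure windows: a steady press vs. a tapping contact
steady = [0.0, 0.9, 1.0, 1.0, 1.0, 0.95, 1.0]
taps   = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]

f_steady, f_taps = extract_features(steady), extract_features(taps)
# Peak-to-peak amplitude is similar, but variability separates the contact types
print(f_taps[1] > f_steady[1])  # True
```

Such low-dimensional feature vectors can then feed a conventional classifier, whereas deep models discussed later in this review typically learn features directly from the raw time series.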
Figure 2.
Evolution of e-skins from prototypical to functional to intelligent.24−61 ML, machine learning; DL, deep learning; CNN, convolutional neural network; and ANN, artificial neural network. Images reproduced with permissions: “Artificial touch in hand-prosthesis.”24 Copyright 1967 Springer Nature. “Sensitive skin: infrared sensor array on robot arm.”25 Copyright 2005 John Wiley and Sons. “Flexible active-matrix e-skin.”26 Copyright (2004) National Academy of Sciences, U.S.A. “Microstructured pressure sensor.”27 Copyright 2010 Springer Nature. “Epidermal e-skin.”28 Copyright 2011 The American Association for the Advancement of Science. “Stretchable transparent e-skin.”29 Copyright 2011 Springer Nature. “Strain sensor for sound recognition using ANN.”30 Copyright 2015 Springer Nature. “Multiplexed wearable perspiration analysis.”32 Copyright 2016 Springer Nature. “Artificial fingertip for roughness discrimination using ML.”33 Copyright 2017 Elsevier. “Self-healable e-skin system.”34 Copyright 2018 Springer Nature. “Ultrafast, asynchronous multimodal tactile encoding.”35 Copyright 2019 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. “Tactile glove for object recognition using CNN.”36 Copyright 2019 Springer Nature. “Multimodal e-skin for garbage sorting using ML.”37 Copyright 2020 The American Association for the Advancement of Science. “Flexible chip with embedded ML.”38 Copyright 2020 Springer Nature. “AR/VR haptic glove enabled by AI.” From ref (39). Copyright The Authors, some rights reserved; exclusive licensee AAAS. Distributed under a CC BY-NC 4.0 license http://creativecommons.org/licenses/by-nc/4.0/. Reprinted with permission from AAAS. “Strain sensor for sign-to-speech translation using ML.”42 Copyright 2020 Springer Nature. “Acoustic biometric authentication using ML.” From ref (45). Copyright The Authors, some rights reserved; exclusive licensee AAAS. 
Distributed under a CC BY-NC 4.0 license http://creativecommons.org/licenses/by-nc/4.0/. Reprinted with permission from AAAS. “Triboelectric e-skin for pulse pressure prediction using DL.”48 Copyright 2021 John Wiley and Sons. “E-skin for hand task recognition using meta-learning.”49 Copyright 2022 Springer Nature. “Triboelectricity for materials identification using ML.” From ref (52). Copyright The Authors, some rights reserved; exclusive licensee AAAS. Distributed under a CC BY-NC 4.0 license http://creativecommons.org/licenses/by-nc/4.0/. Reprinted with permission from AAAS. “Synaptic transistor for in-skin learning.”55 Copyright 2022 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. “Stretchable e-skin for deformation reconstruction using AI.”56 Copyright 2023 Springer Nature. “Ultrathin memristor-based 3D space writing recognition.”57 Copyright 2023 Springer Nature.
With the rapid progress and great potential of AI, it becomes increasingly important to consider how AI can be integrated to enhance e-skin functionality and performance. ML models used in e-skins are gradually shifting from traditional approaches (e.g., support vector machines,62 decision trees63) to emerging convolutional neural networks64,65 and transformers.66,67 This trend is expected to continue as the gap between AI and the e-skin field narrows. However, most state-of-the-art ML models are trained on image, audio, text, and video data sets, which are inherently different from the data collected from most e-skins. Moreover, the limited availability of high-quality labeled e-skin data sets further complicates e-skin-specific ML model development. The lack of standardization in e-skin data collection and annotation hinders the creation of robust and reliable ML models.
Considering the issues and challenges above, we thus conduct a comprehensive review of the state-of-the-art integration of e-skins with AI. Our review aims to provide a deeper understanding of existing e-skins, AI, and their integration; highlight the gaps in the field; and identify future research directions toward AI for e-skins.
First, we introduce the structure and sensing mechanisms of human skin and discuss e-skin sensors that mimic and augment human skin by detecting various stimuli through different mechanisms. Next, we explore schemes for achieving in situ outputs and significant features of e-skins. Afterward, we discuss the data signatures of sensory data and data feature extraction in preparation for ML implementation, followed by the introduction of various ML models employed in recent e-skins. Then, we conduct a systematic review of the advancements in ML as applied to e-skin sensor design and to the processing and analysis of sensory data for intelligent applications. After the discussion at the software layer, we introduce the progress of AI technologies at the hardware layer, such as functional circuits and artificial synapses for e-skins. Finally, we identify the challenges and opportunities presented by AI-enabled e-skins from various aspects.
2. Biological Skins and E-skins
The human skin performs many important functions: protection from harmful factors in the external environment like bacteria, chemicals, and UV radiation; sensation of mechanical forces, temperature, and noxious stimuli; regulation of body temperature and moisture; and production of vitamin D. E-skins seek to mimic, augment, or surpass these functions for skin-attachable healthcare devices,68 wearable technologies,69 prosthetics,70 and robotics.71 Research on e-skins has been intensive over the past ten years, and there are many excellent and detailed reviews on e-skins;6,7,9 in this section, we aim to provide an overview of e-skins to the reader for a better understanding of their AI applications in the later sections. We will briefly discuss the main classes of sensors deployed in e-skins: tactile, temperature, chemical, electrophysiological, and optical; several output functions achieved by e-skins: thermoregulation, visual displays, and haptics; and some desired attributes of e-skins: multimodal, self-healing, imperceptible, and wireless communications (Figure 3).
Figure 3.
Overview of the sensors, outputs, and desired attributes of e-skins. Sensors: Image representing tactile was reproduced with permission from ref (72). Copyright 2020 Springer Nature under a CC BY 4.0 license. Image representing temperature was reproduced with permission from ref (73). Copyright 2022 Royal Society of Chemistry. Image representing chemical from ref (60). Copyright The Authors, some rights reserved; exclusive licensee AAAS. Distributed under a CC BY-NC 4.0 license http://creativecommons.org/licenses/by-nc/4.0/. Reprinted with permission from AAAS. Image representing electrophysiological was reproduced with permission from ref (74). Copyright 2020 Springer Nature under CC BY 4.0 license. Image representing optical sensors was reproduced with permission from ref (75). Copyright 2021 John Wiley and Sons. Outputs: Image representing thermoregulation was reproduced with permission from ref (76). Copyright 2022 Springer Nature under CC BY 4.0 license. Image representing visual displays from ref (77). Copyright The Authors, some rights reserved; exclusive licensee AAAS. Distributed under a CC BY-NC 4.0 license http://creativecommons.org/licenses/by-nc/4.0/. Reprinted with permission from AAAS. Image representing haptics was reproduced with permission from ref (78). Copyright 2022 Springer Nature. Features: Image representing multimodality was reproduced with permission from ref (79). Copyright 2018 Springer Nature under a CC BY 4.0 license. Image representing self-healing was reproduced with permission from ref (80). Copyright 2022 John Wiley and Sons. Image representing imperceptibility was reproduced with permission from ref (81). Copyright 2023 Elsevier. Image representing wireless communications was reproduced with permission from ref (82). Copyright 2022 American Association for the Advancement of Science. Applications: Image representing e-skins for prosthetics was reproduced with permission from ref (78). Copyright 2022 Springer Nature.
Image representing wearables was reproduced with permission from ref (61). Copyright 2023 American Association for the Advancement of Science. Image representing robotics was reproduced with permission from ref (83). Copyright 2022 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science.
2.1. Sensors
The human skin endows us with the sense of touch; it can sense mechanical stimuli, temperature, chemicals, and, to a lesser extent, light, and this is achieved through an intricate system of different receptors. Free nerve endings—which detect mechanical, thermal, and noxious stimuli—were identified in the 19th century, followed by the discovery of different types of mechanoreceptors: Merkel discs, Meissner corpuscles, Ruffini endings, and Pacinian corpuscles. Further advances in neuroscience84 allowed us to better understand our fascinating and complex sense of touch.5 In 2021, the Nobel Prize in Physiology or Medicine was awarded to David Julius and Ardem Patapoutian for their work on molecular transducers for temperature and force. Julius’s work on transient receptor potential (TRP) channels85 elucidated the mechanism behind temperature and pain detection at the molecular level, where the most well-known TRPV1 ion channel responds to a range of physical and chemical stimuli like heat and capsaicin.86 Piezo1 and Piezo2 were identified by Patapoutian’s group as cellular transmembrane proteins which formed mechanically activated cation channels,87 with three piezo proteins forming a blade and beam structure around a central pore (Figure 4a).88 In low-threshold mechanoreceptors, Piezo2 was identified to be the main transduction channel, and this could be seen in the Merkel cell-neurite complex (Figure 4b).1 Mutations in the PIEZO2 gene can lead to reduced sensitivity to light touch, vibration, or even a loss of proprioception. E-skins seek to replicate these sensing abilities in electronic systems and can be designed to augment or surpass the abilities of human skin in the future.
Figure 4.
Tactile sensing. (a) Illustration of the structure of PIEZO2 proteins found in mechanosensitive ion channels in human cells. Image was reproduced with permission from ref (88). Copyright 2019 Springer Nature. (b) Schematic showing PIEZO2 ion channels expressed in Merkel cells. Image was reproduced with permission from ref (1). Copyright 2021 Springer Nature. (c) Schematic of a capacitive pressure sensor with porous microstructured dielectric. Image was reproduced with permission from ref (110). Copyright 2019 American Chemical Society. (d) Schematic showing reconnection of conductive paths for a piezoresistive sensor under compression. Image was reproduced with permission from ref (40). Copyright 2020. Published under the PNAS license. (e) Schematic of a piezoelectric sensor. Image was reproduced with permission from ref (114). Copyright 2022 John Wiley and Sons. (f) Schematic showing the operation of a triboelectric sensor in single electrode mode. Image was reproduced with permission from ref (102). Copyright 2022 John Wiley and Sons under CC BY 4.0 license.
2.1.1. Tactile
In e-skins, tactile sensation is a core function, and different mechanical stimuli, such as pressure, strain, shear force, and vibration, can be sensed via various methods: piezoresistive,89−91 piezocapacitive,92,93 piezoelectric,94,95 triboelectric,96−100 or mechanochromic.101−103 An advantage of the latter three methods is that they can be powered by the mechanical stimuli themselves, reducing the power consumption of the e-skin.104 Numerous strain, pressure, and shear force sensors have been developed and refined to mimic the tactile sensing capabilities of human skin, and large arrays of flexible tactile sensors have been demonstrated.105−107 Remaining challenges are to optimize sensor performance in terms of sensitivity, range, linearity, response time, hysteresis, stability, and temperature invariance and to integrate the different sensors to detect complex tactile signals.
Piezoresistive and piezocapacitive sensors utilize the change in their resistance and capacitance, respectively, when the sensor is deformed under strain. Typically, the sensing range of human skin spans low pressures between 0 and 10 kPa and medium pressures between 10 and 100 kPa.27 Soft materials are more suitable for detection at low pressure ranges, as stiff materials require higher pressures for deformation. In addition, the presence of microstructures can greatly enhance the sensitivity of the device.27,108 This was illustrated in an early work where micropyramids in the dielectric layer of an organic field-effect transistor (OFET)27 produced a sensor that could detect pressures as low as 3 Pa and had a sensitivity of 0.55 kPa–1 in the low-pressure region, as its capacitance change under pressure was greatly improved compared to devices with unstructured dielectric layers. The geometry and material properties of the micropyramids can be tuned to achieve the desired sensing range and sensitivity,109 and the introduction of pores into the microstructures110 can further lower the working range and improve the sensitivity to 44.5 kPa–1 (Figure 4c). Alternatively, high sensitivity can be achieved by using conductive nanoparticles near their percolation threshold,111 where a small deformation causes a large change in the tunneling current.
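The sensitivity quoted for such capacitive sensors is the slope of the relative capacitance change versus applied pressure, S = δ(ΔC/C0)/δp, typically estimated by a linear fit over the low-pressure region of a calibration curve. The sketch below illustrates this with synthetic calibration points (the numbers are hypothetical and are not data from the cited works):

```python
import numpy as np

# Hypothetical calibration data in the low-pressure regime
pressure_kpa = np.array([0.0, 0.5, 1.0, 1.5, 2.0])        # applied pressure p
delta_c_over_c0 = np.array([0.0, 0.28, 0.55, 0.83, 1.10])  # relative capacitance change

# Sensitivity S = d(dC/C0)/dp from a least-squares linear fit
S, intercept = np.polyfit(pressure_kpa, delta_c_over_c0, 1)
print(f"S = {S:.2f} kPa^-1")  # prints "S = 0.55 kPa^-1" for this synthetic data
```

In practice, the fit is restricted to the pressure interval over which the response is approximately linear, and separate sensitivities are reported for the low- and high-pressure regimes.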
However, soft materials tend to be viscoelastic and hence exhibit hysteresis. One method to reduce hysteresis in the sensor response is to coat the micropyramids of a piezoresistive sensor with a thin metal film (Figure 4d).40 The deposited metal film was precracked with regular annular cracks to avoid random cracking during compression, and the resistance decreased with applied pressure due to the reconnection of the cracks in the metal. Hysteresis was below 3% during the loading/unloading cycle, compared to 74% for a poly(3,4-ethylenedioxythiophene) polystyrenesulfonate (PEDOT:PSS)-coated sensor. In addition, changes in environmental conditions such as temperature can alter the performance of soft resistive sensors because of the relatively large temperature coefficient of resistivity (TCR) of conductors and the coefficient of thermal expansion (CTE) of elastomers. This problem can be mitigated by using graphene as the coating layer on the microstructure:112 graphene has a small TCR, which reduces the dependence of resistance on temperature, and a small, negative CTE, which offsets the thermal expansion of the elastomer. Thus, the thermal drift of the sensor was less than 5% from 25 to 60 °C.
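Hysteresis figures such as those above are commonly quantified as the maximum gap between the loading and unloading curves, expressed as a percentage of the full-scale output. A minimal sketch of that metric, using hypothetical resistance readings rather than data from ref 40:

```python
import numpy as np

def hysteresis_percent(loading, unloading):
    """Max loading/unloading gap as a percentage of full-scale output."""
    loading, unloading = np.asarray(loading, float), np.asarray(unloading, float)
    full_scale = loading.max() - loading.min()
    return 100.0 * np.abs(unloading - loading).max() / full_scale

# Hypothetical resistance readings at the same pressure steps
load   = [10.0, 8.0, 6.0, 4.0, 2.0]   # resistance while pressure increases
unload = [10.0, 8.2, 6.2, 4.1, 2.0]   # resistance while pressure decreases

print(f"hysteresis = {hysteresis_percent(load, unload):.1f}%")  # 2.5%
```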
Piezoelectric sensors generate voltages due to the electric dipole moments formed when mechanical stress is applied (Figure 4e), while triboelectric sensors generate voltages due to the separation of charges between two surfaces (Figure 4f). Piezoelectric sensors must be fabricated from piezoelectric ceramics (e.g., lead zirconate titanate) or polymers (e.g., poly(vinylidene fluoride) (PVDF)), while triboelectric sensors do not have this material restriction but require motion between the two surfaces. This implies that triboelectric sensors are suited for measuring dynamic motion but need to be integrated with other components like transistors (tribotronics) to measure static pressure.113 Similarly, piezoelectric sensors require dynamic forces to generate power but can detect static pressure when operated in capacitive mode.114
Although individual strain, pressure, and shear sensors can be integrated into e-skins to provide complete tactile sensing, multimodal sensors can reduce the number of devices and electrical connections required. An example is the use of vertical electrodes embedded in a nickel–polymer composite foam to enable sensing of both shear and pressure,72 as the vertical electrode is more sensitive to deformation near the top surface compared to planar electrodes. Alternatively, the use of AI to classify materials and textures based on features in the sensor signal has been explored.100
2.1.2. Temperature
Temperature sensors on e-skins can enable real-time monitoring of the wearer’s temperature, which is indicative of health and has applications in safety monitoring in sports and the workplace, wound healing, and patient care.115
Thermistors, which are based on the change of resistance with temperature, are the most common temperature sensors in e-skins due to their high sensitivity, simple operating mechanism, and straightforward fabrication.116,117 Alternatives include thermoelectric73,118−120 and pyroelectric121 sensors, which have the advantage of being self-powered, and thermochromic sensors,91 which give a direct visual indication but lack an electrical output signal that can be recorded and processed quantitatively.122 Iontronic sensing encompasses a wide range of sensing capabilities like capacitive, mechano-resistive, piezoelectric, and triboelectric;123,124 it can also be utilized in temperature sensing. An example of a soft thermistor used in e-skin is based on an organohydrogel, which exhibited increased ionic conductivity with temperature due to increased ion mobility and ion concentration (Figure 5a). Notably, it achieved a high sensitivity of 19.6%/°C and remained stretchable over a temperature range of −18 to 70 °C.125 However, it faces a common problem: strain affects the thermal response, and thus the effect of strain needs to be compensated for.
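A sensitivity such as 19.6%/°C expresses the relative change in resistance (or conductance) per degree; once a calibration model is fitted, a measured resistance can be inverted back to a temperature reading. The sketch below assumes a simple linear model over a narrow range, with entirely synthetic numbers (not the calibration of ref 125):

```python
# Minimal thermistor calibration sketch (synthetic numbers, linear model assumed).
# R(T) = R0 * (1 + a * (T - T0)), valid only over a narrow temperature range.

R0, T0 = 100.0, 25.0   # reference resistance (kOhm) at 25 C, hypothetical
a = -0.196             # sensitivity: -19.6 %/C (resistance falls as ion mobility rises)

def resistance(temp_c):
    """Forward model: resistance at a given temperature."""
    return R0 * (1 + a * (temp_c - T0))

def temperature(r_kohm):
    """Invert the linear calibration to recover temperature."""
    return T0 + (r_kohm / R0 - 1) / a

# Round trip: 30 C -> resistance -> back to 30 C
print(round(temperature(resistance(30.0)), 6))  # 30.0
```

Real iontronic thermistors are nonlinear over wide ranges, so polynomial or Steinhart–Hart-type calibrations are used instead; the inversion step is the same idea.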
Figure 5.
Sensing mechanisms of other stimuli in e-skins. (a) Temperature: schematic showing increased ionic conductivity of an ionic conductor with temperature. Image was reproduced with permission from ref (125). Copyright 2020 American Chemical Society. (b) Chemical: schematic of a glucose sensing patch based on reduction of H2O2 at the working electrode: (i) working electrode, (ii) counter electrode, (iii) reference electrode, (iv) iontophoretic anode, and (v) current collector. Image was reproduced with permission from ref (136). Copyright 2022 Elsevier. (c) Electrophysiological: circuit design for different channels and functions of electrically compensated electrodes. From ref (142). Copyright The Authors, some rights reserved; exclusive licensee AAAS. Distributed under a CC BY-NC 4.0 license http://creativecommons.org/licenses/by-nc/4.0/. Reprinted with permission from AAAS. (d) Optical: schematic of an ultrathin polymer LED and organic photodetector used for photoplethysmography. Image was reproduced with permission from ref (147). Copyright 2021 Springer Nature under CC BY 4.0 license.
2.1.3. Chemical
In human skin, nociceptors are sensitive to noxious chemicals or pH changes and will send warnings in the form of pain signals; in e-skins, chemical sensing can also detect potential threats to health83 or help monitor the body’s health by measuring analytes such as glucose, ions (sodium, potassium, chloride, etc.), and lactate.126 The measurement of body sweat is noninvasive and convenient, though there are some drawbacks like possible contamination at the skin surface and lower concentrations of certain analytes like glucose compared to blood. Colorimetric,127 electrochemical,32 or hybrid128 methods are commonly used in e-skins or patches. A separate note of interest is that sweat was demonstrated to be a possible biofuel for e-skins129 when the lactate within was enzymatically oxidized to pyruvate.
A microfluidic patch was demonstrated to provide colorimetric measurement of lactate, glucose, pH, and chloride ions in sweat, as well as the sweating rate.127 A phone app could then analyze the image of the device to compare the colors against the calibration curves to calculate the concentrations of analytes. In another example, different analytes in wound exudate could be measured to monitor the healing process; a device was designed to detect five biomarkers with colorimetric sensors, and a photograph of the device could be analyzed by an AI algorithm to obtain the result.60
Electrochemical sensors can be potentiometric,130 amperometric,131 or based on transistors like the organic electrochemical transistor132−134 (OECT) or the ion-sensitive field-effect transistor (ISFET). An example of an ISFET uses the binding of hydrogen ions with the ion-sensitive membrane to change the bias of the gate, thereby increasing the source–drain current of the ISFET with decreasing pH.135 Reverse iontophoresis can also be used to extract analytes from interstitial fluid: an applied current across the skin surface causes electromigration, electroosmosis, and movement of both charged and neutral molecules, like glucose, to the electrodes. An amperometric electrochemical sensor for glucose was designed with a working electrode based on graphene fiber fabric and a Prussian Blue (PB) transducer, modified with glucose oxidase and chitosan (Figure 5b).136 The glucose oxidase catalyzes the oxidation of glucose to produce H2O2, which is then reduced at the electrode by PB, generating a current.
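Amperometric readout rests on the assumption that the measured current scales linearly with analyte concentration in the working range, so a calibration line converts current to concentration. A minimal sketch with hypothetical calibration points (not data from ref 136):

```python
import numpy as np

# Hypothetical calibration: current (uA) measured at known glucose levels (mM)
conc_mm = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
current_ua = np.array([0.10, 0.90, 1.70, 2.50, 3.30])

# Linear model I = m*c + b fitted by least squares
m, b = np.polyfit(conc_mm, current_ua, 1)

def glucose_from_current(i_ua):
    """Invert the calibration line to estimate concentration."""
    return (i_ua - b) / m

print(f"{glucose_from_current(2.10):.1f} mM")  # 5.0 mM
```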
2.1.4. Electrophysiological
Electrical activities within the body (heart, brain, muscles, and eyes) generate electric potential variations on the skin, and these potentials can be measured noninvasively on specific parts of the skin and analyzed to provide information for human–machine interfaces, as well as health diagnosis and monitoring.137,138 Electrocardiography (ECG), electroencephalography (EEG), and surface electromyography (sEMG) are common noninvasive procedures performed to assess the health and function of the heart, brain, and muscles, respectively. These measurements are done by attaching electrodes at specific locations on the skin, and epidermal e-skins have the potential to provide on-demand or continuous monitoring.137 Good adhesion and low contact impedance between the electrode and skin are desired. Ag/AgCl gel electrodes are currently the gold standard due to their good signal quality, but they are rigid, and the gel dehydrates during long-term use, degrading the signal quality and causing irritation to the skin. In comparison, ultrathin and conformal on-skin dry electrodes have a higher conductivity and provide better comfort. Biocompatible conductors like gold,28 conductive polymers,74 carbon-based materials like graphene,139,140 or coated nanowires141 are commonly chosen to fabricate the electrodes.
In one example, a large-scale system of electrodes and interconnects was fabricated by metal deposition on a thin 1.1 μm polyethylene terephthalate (PET) film patterned into serpentine ribbons, and Cartan transfer printing was used to laminate the tattoo-like electrodes onto the skin for ECG and sEMG measurements.142 However, due to the large area of unencapsulated interconnects in contact with the skin, signal compensation was used to reduce the noise (Figure 5c): sEMG signals picked up by the interconnects served as a noise reference that could be removed from the ECG measurements.
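One simple form of such reference-based compensation is to subtract a least-squares-scaled copy of the noise reference from the contaminated channel. The sketch below is a generic illustration of that idea with fully synthetic signals, not the compensation circuit of ref 142:

```python
import numpy as np

def compensate(signal, noise_ref):
    """Subtract the least-squares projection of the noise reference."""
    alpha = np.dot(signal, noise_ref) / np.dot(noise_ref, noise_ref)
    return signal - alpha * noise_ref

# Synthetic example: clean "ECG" waveform plus coupled interference
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 1.2 * t)   # stand-in for the ECG waveform
noise = rng.normal(size=t.size)       # interference picked up by the reference channel
measured = clean + 0.8 * noise        # contaminated ECG channel

recovered = compensate(measured, noise)
# Residual error is far smaller than the injected interference
print(np.abs(recovered - clean).max() < np.abs(0.8 * noise).max())  # True
```

Adaptive filters (e.g., LMS) generalize this single-coefficient projection to time-varying coupling between the reference and the measurement.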
2.1.5. Optical
Optical sensors can be implemented in e-skins for several functions: UV exposure monitoring,143 pulse oximetry,144 and photoplethysmography (PPG).75 Overexposure to UV light can lead to sunburn and damage to the skin, and UV radiation is classified as a human carcinogen. In human skin, a class of light-sensitive molecules, opsins, affects melanin production and wound healing. In e-skins, optical devices can generate power145,146 and emit or detect light, but they are usually restricted to the top layer unless the e-skin can be made transparent.
PPG, in which an LED and a photodetector are paired to measure blood volume variations near the skin surface at specific locations based on the intensity of light reflected from the tissue, measures the heart rate and can aid in the diagnosis of arrhythmias. As an example, a polymer LED and an organic photodetector were fabricated on a 1.5 μm thick substrate to form an ultrathin PPG sensor (Figure 5d),147 which could be powered by an organic photovoltaic array. However, the device is susceptible to motion artifacts, as deformation of the sensor can lead to high noise in the signal due to electrical or optical fluctuations.
2.2. Outputs
Besides providing a sense of touch, human skin also performs vital functions, such as protection and thermoregulation. For e-skins, functional outputs such as thermoregulation, visual displays, and haptic feedback can be incorporated. These outputs enhance the utility of e-skins beyond that of purely providing sensory inputs, facilitating interactions between users, the system, and the environment.148
2.2.1. Thermoregulation
An important function of human skin is to help regulate body temperature through vasodilation, vasoconstriction, and perspiration. By incorporating phase change materials76 or heating/cooling elements149 (making use of Joule heating or the Peltier effect), e-skins can mimic this function to provide comfort to the wearer, or even surpass it by using thermal signals to convey information. An example is a wireless, thermally controlled epidermal VR system that could detect temperature and apply passive cooling or active heating to achieve thermoregulation or generate thermal sensations as cues or feedback (Figure 6a).150 The device consisted of 16 modules on a flexible PCB, each with a layer of cooling hydrogel, a thermal barrier, a thermistor, and a heater, and the total thickness was less than 3.5 mm.
Figure 6.
E-skin outputs. (a) Thermoregulation: exploded view of a thermally controlled epidermal VR system. Image was reproduced with permission from ref (150). Copyright 2023 the Author(s). Published by PNAS under CC BY-NC-ND 4.0 license. (b) Haptics: photograph of a multimodal haptic glove. Image was reproduced with permission from ref (162). Copyright 2020 John Wiley and Sons under CC BY 4.0 license. (c) Displays: photograph of skin-like healthcare patch on human hand with conformal contact and schematic layout of LED pixel. From ref (77). Copyright The Authors, some rights reserved; exclusive licensee AAAS. Distributed under a CC BY-NC 4.0 license http://creativecommons.org/licenses/by-nc/4.0/. Reprinted with permission from AAAS.
2.2.2. Haptics
E-skins can also serve as an interface for human–machine or human–human interactions. Besides visual feedback from displays, haptic feedback can enhance interactions by providing a sense of touch, allowing the user to feel subtle changes in forces and distinguish textures in VR/AR applications,151 human communications,152 or when remotely operating a robot.153 Mechanical haptic actuators can be based on different working mechanisms: pneumatics,154 magnets,155 dielectric elastomers,156 piezoelectrics,39 electrets,157 or motors.158 Other haptic feedback mechanisms include thermal sensations, ultrasound, or direct electrical stimulation of the user using electrodes.159,160 Rogers and co-workers developed a wireless haptic interface which could be attached onto human skin,161 and they further refined it to increase the density of actuators to a pitch of 14 mm.78 By replacement of the previous electromagnetic actuator with an eccentric rotating mass vibration motor, the actuator size and weight could be significantly reduced.
Thermal sensations can also be integrated with tactile feedback: a glove combined vibrotactile motors, thermoelectric devices, and electrotactile electrodes to convey hardness, temperature, and roughness information to the wearer (Figure 6b).162 However, the authors noted the limited realism of the sensations due to the difficulty in targeting the exact location and replicating an identical sensation and that trained users were better at identifying the sensations.
2.2.3. Displays
Although not present in human skin, visual displays on e-skins can provide real-time and on-hand information from the sensors, which aids in feedback and diagnostics. Information and instructions can also be communicated visually, enhancing communication between the wearer and other parties.163 Common technologies for these displays include alternating current electroluminescence,34,164 polymer LEDs,165 and organic LEDs.166 Researchers at Samsung and Stanford University reported a stretchable, skin-attachable organic LED display with a PPG sensor (Figure 6c).77 The total thickness of the optoelectronic components on the stretchable substrate was only around 15 μm, and stress relief layers enabled stable operation under 30% strain. The display had a resolution of 11 pixels per inch, and it was noted that the rigid processing unit and battery were located where they experienced less strain compared to the wrist region.
2.3. Features
A fundamental requirement for e-skins is flexibility or stretchability, in order to conform to the human body and accommodate its various motions.167 This has been extensively explored in works from around the early 2000s, and different strategies were developed: geometric patterning,168 “rigid islands”,169 and intrinsic stretchability.170 Geometric patterns like serpentine shapes,171 wrinkles,172 or helixes173 allow rigid materials like metals and semiconductors to accommodate strain without cracking, while the “rigid islands” approach integrates rigid components onto a stretchable substrate. This versatile approach allows widely available silicon-based components to be integrated into e-skins. However, the approach of using intrinsically stretchable materials to fabricate e-skins is gaining popularity as the fabrication process can be simplified and interface problems between the components and substrate can be reduced. The challenge is to develop new materials with good mechanical and electrical performance to be used as conductors,174−177 semiconductors,178,179 and dielectrics.180,181 There are a number of features which can greatly enhance the efficiency and effectiveness of e-skins; e-skins are desired to be multimodal, self-healing, imperceptible, energy autonomous,182−186 and biodegradable,187 for example. This section discusses the first three features in more detail.
2.3.1. Multimodal
An efficient e-skin should be able to sense multiple tactile sensations, temperature, and other stimuli simultaneously, which will enable the e-skin to function in complex environments where different types of interactions are required.188 However, there are a few challenges that have to be tackled: the different sensors should be selective toward their respective measurement targets and their signals should be able to be distinguished from each other;189−191 the need for a high density of sensors, electrodes, and interconnects on a stretchable matrix; the power source for the various sensors; and data processing and analysis for all the recorded signals.192 One option to distinguish the signals is to use data processing to decouple the signals, but this increases the computational complexity. Another option is to design orthogonal sensors that are highly selective to their intended stimuli, but this may place restrictions on other aspects like sensitivity. Also, horizontal integration of the different sensors in the same plane increases the footprint and could reduce device density; while vertical integration by stacking increases the difficulty of e-skin fabrication or reduces conformability. Separately, on a large-scale e-skin, interconnects for the different devices are typically metal patterned using photolithography or electron beam deposition, which may not be compatible with the fabrication processes of soft substrates and devices. One method to simplify the fabrication process is to use simple materials which can fulfill the different sensing requirements, for example, multimodal e-skins using an all-carbon193 or all-graphene design.194
As an example of decoupling temperature and strain measurements, the strain-insensitive charge relaxation times of an ion conductor and its temperature-insensitive capacitance could be utilized to realize a multimodal sensor (Figure 7a).195 This mechanism enables a simple device structure that could be further extended to sensing of other stimuli. An outstanding work demonstrated integration of six types of sensors onto a multilayered conformable matrix, achieving the measurement of pressure, proximity, strain, temperature, magnetic field, UV light, and humidity;79 multiple stimuli could be detected simultaneously due to the good selectivity of the respective sensors and their orthogonality. This work represents significant progress toward multimodal e-skins which can replicate and surpass the sensing functions of human skin.
Figure 7.
E-skin features. (a) Multimodal: Schematic of the sensor for multimode tactile and temperature sensing. Image was reproduced with permission from ref (195). Copyright 2020 The American Association for the Advancement of Science. (b) Self-healing: Series of optical images showing a pristine capacitor, damaged by a cut through all of the layers, and healed layers. Image was reproduced with permission from ref (203). Copyright 2023 The American Association for the Advancement of Science. (c) Imperceptible: Photographs and schematic of a resistive tactile sensor and transistor on a fingertip. Image was reproduced with permission from ref (209). Copyright 2022 John Wiley and Sons. (d) Wireless: Optical photograph of an antenna formed by copper traces with circuit components encircled within. From ref (211). Copyright The Authors, some rights reserved; exclusive licensee AAAS. Distributed under a CC BY-NC 4.0 license http://creativecommons.org/licenses/by-nc/4.0/. Reprinted with permission from AAAS.
2.3.2. Self-Healing
Self-healing would allow the e-skin or its components to recover from physical damage, enhancing its resilience and reducing the risk of failure.196 Self-healing materials have been widely studied,197,198 and many soft materials and devices used in e-skins have the self-healing function.199−201 Mild healing conditions and high healing efficiencies are desirable and have been achieved in several works.80,202 One challenge is to ensure alignment of the device during self-healing, which is especially difficult for thin layers. A recent work addressed this issue using two immiscible polymers, both polyureas, but one with a polydimethylsiloxane (PDMS) backbone and the other with a polypropylene glycol (PPG) backbone. The dynamic hydrogen bonds between the urea groups of the polymers allowed for self-healing and adhesion at the interface, but there was no macromolecular diffusion of the polymers due to the immiscibility of the polymer backbones (Figure 7b).203 This is a promising strategy to avoid misalignment of the healed device.
2.3.3. Imperceptible
E-skins attached to the human skin are required to be biocompatible; therefore, the materials used must be carefully chosen and encapsulants are used if required. Further, the e-skin should not cause discomfort to the wearer or restrict their motion in long-term usage; thus, it is also desired to be breathable or imperceptible.204 Conformal skin electronics with good adhesion can also reduce motion artifacts.205 Someya and colleagues fabricated an array of thin-film transistors and resistive tactile sensors on a substrate, creating an ultrathin (2 μm) and imperceptible sensor applied on the human body.206 Further, they developed substrate-free and gas-permeable nanomesh conductors which could be applied on human skin without causing inflammation, based on depositing gold on a sacrificial PVA nanofibre template.207 More recently, different nanomesh transistors were fabricated and integrated into an active matrix tactile sensing array (Figure 7c).208,209 Ultrathin, breathable, and imperceptible electrodes have also been demonstrated in electrophysiological measurements210 and tactile sensing.81
2.3.4. Wireless
Computation of data has to be performed by integrated circuits (ICs) that are rigid and bulky and thus detrimental to conformability, and the large amount of data implies high processing power and energy consumption. One solution is to place the IC and its power source on parts of the body which experience less strain; another is to transfer data wirelessly, but Bluetooth or Wi-Fi communications also require IC chips, while chip-free NFC designs require the antenna to be placed at a small distance of a few centimeters. Nevertheless, wireless communications enable e-skins to link with a computer to transmit and receive data, which can realize applications such as VR/AR. The number of interconnects can also be reduced, improving the portability and footprint of e-skin circuits. A recent work demonstrated the VR applications of wireless communications between a human wearing an e-skin and teleoperated robots.211 The e-skin integrated bending sensors and vibratory actuators, with copper traces serving as the antenna (Figure 7d). Sensor measurements from the user could be sent via Bluetooth to operate the robot, and feedback from the robot was then sent to the electronic skin to provide haptic sensations to the user. Another interesting work demonstrated an ultrathin chip-less e-skin capable of strain, chemical, and UV sensing.82 The sensing was based on surface acoustic waves on piezoelectric gallium nitride (GaN) upon changes in strain, mass, or UV incidence, and an external antenna measured the change in resonant frequency of the e-skin circuit.
In this section, we reviewed the progress of the various sensors used, showcased their different useful outputs, and discussed several of their desired features. There is a wide range of sensors in the literature, each designed with their advantages, which can be utilized to develop e-skin; researchers have also presented clear demonstrations of the output functions of e-skins in thermoregulation, displays, and haptics, including applications in VR/AR and teleoperation of robots. Further, there have been improvements in e-skins, showing attributes of multimodality, self-healing, imperceptibility, and wireless communications. In the following sections on AI-enabled e-skin design and applications, we will focus on the sensing function, as it is the core function of e-skins.
3. AI Models for E-skins
3.1. Data Signatures
Frequently, e-skin sensor signals are continuous time series data, and interpreting them involves understanding temporal, and often also spatial, changes to derive meaningful insights. Correct interpretation plays a pivotal role in unlocking the full potential of this technology across diverse domains: incorrect interpretation of sensor data may lead to misdiagnoses, inaccurate control commands in robotic systems, or inadequate design decisions. Therefore, validating and refining interpretation methods are crucial for the reliable deployment of e-skins in real-world scenarios.
3.1.1. Temporal
Temporal changes focus on how sensor readings evolve over time. Time series analysis of e-skin data enables the observation of dynamic patterns in pressure distribution, detecting trends, cyclic behavior, or changes in response to specific activities or environmental factors. Understanding temporal changes can assist in recognizing specific gestures or movements, allowing seamless interactions between humans and machines. In some physiological applications, such as biopotentials or pulse wave velocity monitoring, temporal analysis enables the identification of periodicities, irregularities, and transient events that hold diagnostic or prognostic significance.
3.1.2. Spectral
Spectral data signatures in e-skins refer to the information obtained from the frequency domain of the sensor readings. Surface textures and material differences are usually reflected in the frequency domain of the data. Therefore, knowing the unique frequency patterns associated with different materials and textures would be critical for subsequent feature extraction or representation. By analyzing the spectrum, users can gain valuable insights into frequency characteristics of data from e-skins.
3.1.3. Spatial
In contrast to temporal and spectral signatures, spatial changes refer to the distribution of sensing elements across the sensor array and how pressure or external stimuli are distributed over the monitored surface. Analyzing the spatial distribution can reveal patterns and areas of interest where pressure or strain is concentrated or dispersed. This information is critical in various applications, such as pressure mapping for healthcare, where identifying high-pressure regions can aid in preventing pressure ulcers in bedridden patients.
Interpreting sensor data correctly is of paramount importance due to its wide-ranging applications and implications in various fields. In healthcare, accurate interpretation of artery pressure distribution data can aid in the diagnosis of circulatory issues. In robotics and prosthetics, a precise understanding of e-skin sensor data can enhance the control and dexterity of robotic limbs, leading to more natural and intuitive interactions with users.
3.2. Feature Extraction
Feature extraction is a critical component of e-skin systems, enabling them to analyze tactile sensory information effectively. Here, we explore feature extraction methods specifically tailored for time series data captured by e-skins, which are categorized into time domain features, frequency domain features, and time-frequency mixed features.
3.2.1. Time Domain Features
For e-skins, time domain features are essential for capturing temporal dynamics, as they retain the relevant information in force and deformation time series data.
(1) Peak values, mean, standard deviation: Provide a statistical indication of e-skin data signatures.44,52,83,212−219
(2) Skewness and kurtosis: Characterize the asymmetry and peakedness of the signal shape during signal events.59,220
(3) Polarity of signal change: Polarity could be a promising feature for triboelectric-based material classification due to the direction of charge transfer.52,215
(4) Zero crossing rate: The number of times that the signal waveform changes from positive to negative or vice versa, indicating shape transitions during touch events.52,218
(5) Root-mean-square (RMS), signal energy: RMS is the square root of the mean of the squared signal over a time interval and reveals the effective signal amplitude. Signal energy represents the total energy of the time series signal. In e-skins, both can quantify the total applied force or deformation energy during tactile interactions.83,213,221
(6) Entropy: Time domain entropy measures the randomness or complexity of the time series data. Higher entropy indicates a more unpredictable pattern in the force or deformation changes.213
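As an illustrative sketch (not drawn from the cited works), these time domain features can be computed with NumPy and SciPy; the synthetic sinusoid, histogram bin count, and histogram-based entropy definition are arbitrary assumptions:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def time_domain_features(x):
    """Compute common time-domain features from a 1-D e-skin signal."""
    x = np.asarray(x, dtype=float)
    # Zero-crossing rate: fraction of consecutive samples with a sign change
    zcr = np.mean(np.abs(np.diff(np.signbit(x).astype(int))))
    rms = np.sqrt(np.mean(x ** 2))          # effective signal amplitude
    energy = np.sum(x ** 2)                 # total signal energy
    # Shannon entropy of the amplitude histogram (one possible definition)
    hist, _ = np.histogram(x, bins=16)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    return {
        "peak": np.max(np.abs(x)),
        "mean": np.mean(x),
        "std": np.std(x),
        "skewness": skew(x),
        "kurtosis": kurtosis(x),
        "zcr": zcr,
        "rms": rms,
        "energy": energy,
        "entropy": entropy,
    }

# Example: a noiseless sinusoid standing in for a tactile force trace
t = np.linspace(0, 1, 500)
feats = time_domain_features(np.sin(2 * np.pi * 5 * t))
```

In practice, such a feature dictionary would be flattened into a vector and computed per window of the streaming sensor signal.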
In certain circumstances, handcrafted features may exhibit remarkable efficacy. Handcrafted features may include the interval between events/peaks,52,59,218,220,221 slope,59 area under the curve (AUC),59,214,222 and many others. Fixed-length raw signals can also serve as a feature vector58,223 to be fed into models, but this may not be robust to noise or signal shifts.
Time series data derived from e-skin sensors can often exhibit complexity arising from either a high channel count or intricate signal shapes, leading to challenges in interpretation.43,218,219,223 Consequently, extracting multiple time domain features before the initial training process and selecting features with principal component analysis (PCA) is effective if there is uncertainty about the feature effectiveness.214,224
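A minimal sketch of such PCA-based reduction via the singular value decomposition, assuming a synthetic feature matrix in which two of six handcrafted features carry most of the variance:

```python
import numpy as np

def pca_reduce(features, n_components=2):
    """Project a (samples x features) matrix onto its top principal components."""
    X = features - features.mean(axis=0)       # center each feature
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    explained = (S ** 2) / np.sum(S ** 2)      # variance ratio per component
    return X @ Vt[:n_components].T, explained[:n_components]

rng = np.random.default_rng(0)
# 100 samples x 6 handcrafted features; the first two dominate the variance
F = rng.normal(size=(100, 6)) * np.array([5.0, 3.0, 0.5, 0.5, 0.5, 0.5])
reduced, ratio = pca_reduce(F, n_components=2)
```

Inspecting `ratio` indicates how much of the variance the retained components explain, which is one practical way to judge whether the handcrafted features are redundant.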
3.2.2. Frequency Domain Features
In contrast to the time domain, frequency domain features reflect the repetitive nature of e-skin signals. In higher-frequency applications such as sound and texture sensing, changes in time domain features can be subtle and hard to differentiate among signals. The Fourier transform, or more commonly the short-time Fourier transform (STFT), is often utilized for transformation from the time domain to the frequency domain. It is worth noting that this approach has been empirically demonstrated as effective in many e-skin applications.44,225,226
Frequency peak amplitude and frequency distribution: Among different events or sensing objects, the dominant frequencies and their corresponding amplitudes exhibit variations. For example, sound manifests dissimilarities across different individuals and different events, showing difference in dominant frequencies.45,227 Likewise, the comprehensive analysis of frequency distribution serves as a valuable tool to elucidate the underlying correlations between the sensing object and the corresponding sensor data.228
Full-frequency spectrum, often denoted as power spectral density (PSD), represents the distribution of power across different frequency components in time series data. In e-skins, it can reveal dominant frequency patterns associated with specific tactile events or vibrations during e-skin sensor interactions.44,45,229
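As a hedged illustration of estimating the PSD and the dominant frequency of a sliding-texture signal, the sketch below uses Welch's method from SciPy; the 1 kHz sampling rate, 50 Hz vibration, and window length are arbitrary assumptions:

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                              # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
# Synthetic texture-sliding signal: 50 Hz dominant vibration plus noise
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 50 * t) + 0.2 * rng.normal(size=t.size)

# Welch's method averages periodograms over overlapping segments -> PSD estimate
f, psd = welch(x, fs=fs, nperseg=256)
dominant = f[np.argmax(psd)]             # dominant frequency component
```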
3.2.3. Time-Frequency Mixed Features
Since tactile interactions can involve both force variations and shape changes, time-frequency mixed features are crucial for e-skins. Prominent methods for obtaining such features include the following:
(1) Spectrogram, which provides a time-varying representation of the frequency content of the time series data. In e-skins, it visually captures dynamic changes in force or shape over time and their associated frequency components.58,230
(2) Mel-frequency cepstral coefficients (MFCC), which are coefficients derived from the mel-frequency scale, capturing the relationship between frequency and human auditory perception. For some e-skin applications, MFCCs can provide audible frequency-related features for audio pattern recognition.230,231
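A minimal sketch of computing a spectrogram for a synthetic chirp-like tactile event (the sampling rate, chirp parameters, and window sizes are arbitrary assumptions; MFCCs would typically be computed with a dedicated audio library rather than by hand):

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0                              # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
# A chirp-like tactile event: instantaneous frequency rises from 20 to 200 Hz
x = np.sin(2 * np.pi * (20 * t + 90 * t ** 2))

# Time-varying frequency content: rows are frequencies, columns are time slices
f, seg_t, Sxx = spectrogram(x, fs=fs, nperseg=128, noverlap=64)
# Track the dominant frequency in each time slice (the spectrogram "ridge")
ridge = f[np.argmax(Sxx, axis=0)]
```

The rising ridge reflects the increasing vibration frequency over the course of the event, which is exactly the kind of time-frequency pattern a model can learn from.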
Generally, a few steps are required for implementing ML in an e-skin application, as shown in Figure 8. It is very important to identify legitimate and effective features. Sometimes feature engineering requires domain knowledge (e.g., QRS interval for ECG data)221 and requires several rounds of trials.
Figure 8.
Workflow of AI implementation in e-skin applications.
3.3. Models
When it comes to e-skin sensory data, the significance of AI models cannot be overstated. As e-skin sensors become complex, high-density,232 and multimodal,149,233,234 the data they produce become more sophisticated and challenging to process and analyze. AI models play a crucial role in uncovering patterns, correlations, and trends from intricate data sets to provide valuable insights.
Traditional machine learning models, while effective for simpler and smaller data sets, struggle to extract meaningful patterns and representations from the abundance of information present in multimodal data. Consequently, the quest for more sophisticated AI models, capable of handling multimodal data and large-scale data sets, has led to the rise of convolutional neural network, recurrent neural network, long short-term memory network, and finally, transformer and large language models.
3.3.1. Traditional Machine Learning Models
Traditional machine learning (ML) methods have been the basis of AI-based data analysis and pattern recognition for decades. These methods form the foundation of the broader field of ML. Prominent traditional ML algorithms include support vector machine (SVM),235 k-nearest neighbor (kNN),236 linear discriminant analysis (LDA),237 decision tree,238 and multilayer perceptron (MLP)/artificial neural network.239
3.3.1.1. Support Vector Machine (SVM)
SVM is a powerful and versatile classifier used for both binary and multiclass classification tasks. SVM works by finding an optimal hyperplane that best separates different classes in the feature space. The key idea behind SVM is to maximize the margin between the hyperplane and the nearest data points, known as support vectors. SVM is particularly effective in high-dimensional spaces and has been widely applied in e-skin applications, including object recognition,216,222,240 speech classification,228,241 and physiological measurements.217,225,230,242
3.3.1.2. k-Nearest Neighbor (kNN)
kNN is a simple, yet intuitive algorithm used for both classification and regression tasks.243 The idea behind kNN is to classify a new data point based on the majority class of its k-nearest neighbors in the feature space. The choice of k influences the decision boundary’s smoothness and affects the model’s bias–variance trade-off. kNN is a nonparametric algorithm and does not make any assumptions about the underlying data distribution, making it robust in handling complex and nonlinear relationships. Due to this robust nature, kNN is applied in many tasks, such as posture detection214,243 and gesture/motion recognition.83,224,240
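A minimal from-scratch sketch of kNN classification on two synthetic "gesture" clusters in a 2-D feature space (the cluster locations, features, and choice of k are arbitrary assumptions, not from the cited works):

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Classify each query point by majority vote among its k nearest neighbors."""
    preds = []
    for q in np.atleast_2d(X_query):
        d = np.linalg.norm(X_train - q, axis=1)   # Euclidean distances
        nearest = y_train[np.argsort(d)[:k]]      # labels of the k closest points
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])     # majority vote
    return np.array(preds)

# Two toy gesture classes in a 2-D feature space (e.g., RMS vs dominant frequency)
rng = np.random.default_rng(2)
X0 = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(20, 2))
X1 = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(20, 2))
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 20 + [1] * 20)
pred = knn_predict(X_train, y_train, [[1.9, 2.1], [0.1, -0.2]], k=3)
```

Because no training phase is needed, the entire "model" is the labeled data set itself, which is convenient for small, frequently updated e-skin data sets.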
3.3.1.3. Linear Discriminant Analysis (LDA)
LDA is a classic supervised dimensionality reduction and classification technique. LDA works by finding linear combinations of features that maximize the separability between different classes while minimizing the within-class scatter. It transforms the original feature space into a lower-dimensional space where data points of different classes are well-separated. LDA is often used for motion recognition218 and material recognition.52
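For the two-class case, the Fisher discriminant direction underlying LDA can be sketched in a few lines (synthetic data and an assumed class separation, for illustration only):

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Two-class Fisher LDA: direction maximizing between- vs within-class scatter."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) + np.cov(X1, rowvar=False) * (len(X1) - 1)
    w = np.linalg.solve(Sw, m1 - m0)     # w is proportional to Sw^{-1} (m1 - m0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(3)
X0 = rng.normal([0.0, 0.0], 0.5, size=(50, 2))
X1 = rng.normal([3.0, 0.5], 0.5, size=(50, 2))
w = fisher_lda_direction(X0, X1)
# Projected class means should be well separated along w
sep = abs(X1.mean(axis=0) @ w - X0.mean(axis=0) @ w)
```

Projecting samples onto `w` yields the one-dimensional space in which a simple threshold separates the classes.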
3.3.1.4. Decision Tree
Decision trees are interpretable and easy-to-understand models that recursively split the data based on different features to make decisions. Each internal node in the tree represents a decision based on a feature, and each leaf node corresponds to a class label or numerical value. Decision trees are versatile and can handle both classification and regression tasks. However, the decision tree may suffer from overfitting. Random forest can effectively reduce overfitting compared to a single decision tree by averaging predictions. Decision tree and random forest are proven to be effective in object identification224,240 and physiological abnormal detection.59,244
3.3.1.5. Multilayer Perceptron (MLP)
MLP, also known as an artificial neural network (ANN), is a class of flexible and powerful models inspired by the structure and function of the human brain. MLP consists of multiple layers of interconnected neurons, where each neuron processes input signals and applies nonlinear activation functions. MLP is capable of approximating complex and nonlinear mappings, making it well-suited for a wide range of tasks in e-skins, including gesture/motion classification,41,213,220,245−247 surface material/texture recognition,44,215,223,248,249 force estimation,51,229,250,251 and physiological monitoring.48,221 MLP is also capable of interpolating or generating “super resolution” output from a low number of e-skin inputs.50,203
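A minimal NumPy sketch of an MLP: one hidden tanh layer trained by plain gradient descent on the XOR problem, which a purely linear model cannot solve (the layer size, learning rate, and iteration count are arbitrary choices, not from the cited works):

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic nonlinearly separable problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 tanh units, sigmoid output
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))
    return h, out

lr, losses = 0.5, []
for _ in range(5000):
    h, out = forward(X)
    losses.append(np.mean((out - y) ** 2))
    # Backpropagation of the mean-squared-error loss
    d_out = 2 * (out - y) / len(X) * out * (1 - out)   # gradient at output layer
    dW2 = h.T @ d_out; db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)                # gradient through tanh
    dW1 = X.T @ d_h; db1 = d_h.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, pred = forward(X)
```

Real e-skin work would use a framework rather than hand-written backpropagation, but the training loop is structurally the same.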
It is worth noting that other ML models can be considered, such as boosted trees219,225,226,252 and the Gaussian mixture model.45 As indicated in Figure 8, users are advised to train several models to determine the most suitable one based on the e-skin signal characteristics, extracted features, and task difficulty, and to evaluate their models with a separate test set. To avoid misuse of AI, some applications require specific metrics, exemplified by the requirement for model sensitivity in health screening.253
3.3.2. Deep Learning Models
Early approaches to handling multimodal data revolved around hand-crafted feature extraction techniques and simplistic fusion strategies. However, as data set sizes grew and multimodal interactions became more intricate, the limitations of these approaches became evident. Consequently, convolutional neural network (CNN) emerged as a powerful tool for image analysis, learning hierarchical features, and enabling spatial understanding.254,255 Meanwhile, recurrent neural network (RNN) and long short-term memory (LSTM) network were adept at analyzing sequential data, providing advancement in natural language understanding and time series analysis.256
3.3.2.1. Convolutional Neural Network (CNN)
CNN is a type of specialized artificial intelligence model designed to recognize patterns and features in images, while also preventing overfitting (when a model memorizes data rather than learning from it).257−259 These networks consist of four main components:
(1) Learnable convolution filters: CNN uses filters with trainable weights that slide over the input. These filters automatically extract essential features from the input, such as edges and textures, and create feature maps. Since input data are typically one-dimensional (sequential data),260 two-dimensional (images),261 or three-dimensional (video),262 convolutional filters vary in dimension accordingly. Convolutional filters allow the CNN to disregard the position of features, making the network translation-invariant. Moreover, stacking multiple convolutional layers allows the network to recognize higher-level features and contributes to complex data analysis.263
(2) Nonlinear activations: CNN uses nonlinear activation functions to introduce complexity and capture intricate patterns in the data.264
(3) Spatial coarsening: Through pooling layers or stride techniques in convolution, the feature size can be reduced. This step helps to reduce computational complexity while preserving the relevant information.265
(4) Prediction layers: The final part of a CNN involves fully connected layers that analyze the global representation of the features and make predictions for a specific task.
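The first three components above can be sketched for the 1-D case in NumPy (the step-like "pressure" signal and the hand-picked edge-detecting kernel are illustrative assumptions, not a trained network):

```python
import numpy as np

def conv1d_valid(x, kernel):
    """'Valid' 1-D convolution (cross-correlation): the kernel slides over the input."""
    n = len(x) - len(kernel) + 1
    return np.array([np.dot(x[i:i + len(kernel)], kernel) for i in range(n)])

def relu(x):
    return np.maximum(0, x)               # nonlinear activation

def max_pool(x, size=2):
    """Spatial coarsening: keep the maximum of each non-overlapping window."""
    trimmed = x[: len(x) // size * size]
    return trimmed.reshape(-1, size).max(axis=1)

# An edge-detecting kernel applied to a step-like pressure profile
signal = np.array([0, 0, 0, 1, 1, 1, 0, 0], dtype=float)
edge_kernel = np.array([-1.0, 1.0])       # responds to rising edges
fmap = max_pool(relu(conv1d_valid(signal, edge_kernel)))
```

The feature map peaks where the rising edge occurs regardless of its position in the input, which is the translation-invariance property described above; in a real CNN, the kernel weights would be learned rather than fixed.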
CNN stands out due to the properties of automatic feature extraction. The usage of CNN in e-skins began to spike due to increased complexity of sensor design and signal interpretation.266 This approach has found widespread application due to their exceptional ability to discern intricate patterns in e-skin signals, offering a powerful framework for tasks such as surface recognition,45,68−75 physiological abnormality classification,58,60,267 speech recognition,230,231,241,267−269 and feature extraction with pretrained CNN models.267
Yet CNN requires a large amount of well-labeled data to properly train a robust model, depending on the depth of convolutional layers.259 In some e-skin applications, especially medical applications, this amount of data could be a burden to model development.270 Therefore, transfer learning with well-established models or extracting features with domain knowledge should be considered.271
3.3.2.2. Recurrent Neural Network (RNN)
Unlike other neural networks that require a fixed input length, RNN has a unique architecture of feedback connections that allows it to maintain an internal state or memory, enabling previous information to be retained and to influence the current state.256,272 This specific feature enables RNN to model temporal dependencies in sequential data.273,274 However, traditional RNN suffers from the vanishing gradient problem.275 As information propagates through many time steps, the gradients used for training can diminish or even “explode”, leading to difficulties in learning long-term dependencies; moreover, RNN is difficult to parallelize due to its sequential nature. Several RNN variants can effectively resolve the vanishing gradient problem:
(1) Gated recurrent unit (GRU) is a variant of RNN that addresses the vanishing gradient problem. It simplifies the architecture by combining the forget and input gates into a single update gate. A reset gate is introduced to control the flow of information from previous time steps.276
(2) LSTM network is a popular variant of RNN, which utilizes memory cells and gating mechanisms to control the flow of information. The input gate, forget gate, and output gate in LSTM cells allow the network to selectively store or discard information over time, thus preventing gradients from vanishing or exploding during backpropagation.277 LSTM network excels at capturing long-term dependencies in sequential data and has become the de facto replacement for many RNN applications.278
RNN and LSTM have been effective for e-skin data, for example, decoupling and restoring multiple signals from e-skin data of single channel,233 correlating lip motions with sound,279 audio recognition,280,281 and object classification.282−285
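The gating arithmetic of a single LSTM step can be sketched as follows (random weights and input/hidden sizes are for illustration only; a trained network would learn these parameters):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, params):
    """One LSTM time step: input, forget, and output gates regulate the cell state."""
    Wx, Wh, b = params                    # stacked weights for the four gates
    z = x @ Wx + h_prev @ Wh + b
    H = h_prev.shape[-1]
    i = sigmoid(z[..., :H])               # input gate
    f = sigmoid(z[..., H:2 * H])          # forget gate
    o = sigmoid(z[..., 2 * H:3 * H])      # output gate
    g = np.tanh(z[..., 3 * H:])           # candidate cell update
    c = f * c_prev + i * g                # gated cell-state update
    h = o * np.tanh(c)                    # new hidden state
    return h, c

rng = np.random.default_rng(4)
D, H = 3, 5                               # assumed input and hidden sizes
params = (rng.normal(size=(D, 4 * H)), rng.normal(size=(H, 4 * H)), np.zeros(4 * H))
h, c = np.zeros(H), np.zeros(H)
for t in range(10):                       # unroll over a short sensor sequence
    h, c = lstm_step(rng.normal(size=D), h, c, params)
```

The forget gate `f` is what lets gradients flow through `c` across many time steps, which is how the cell sidesteps the vanishing gradient problem.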
3.3.2.3. Spiking Neural Network (SNN)
Many e-skins try to mimic the fast adaptive (FA) and slow adaptive (SA) functionalities of mechanoreceptors by encoding event-based impulses from sensor tactile inputs.44,54,286,287 This approach could enable low latency, high temporal resolution, and low power consumption for time-critical tasks. Compared to CNN, SNN is inspired by biological neural networks and supports parallelized, asynchronous, event-driven computation.288 This biological emulation makes SNN ideally suited for real-time data streams and dynamic sensor inputs.289 Despite the challenges of encoding pulse signals, the adoption of SNN holds promise in e-skin applications, offering the potential for efficient, biologically inspired processing that aligns closely with the sensory capabilities of human skin, with use cases in medical screening and diagnosis.281,290
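A hedged sketch of the spike encoding step with a leaky integrate-and-fire neuron: a firmer (larger-amplitude) press yields a higher spike rate, loosely analogous to a mechanoreceptor (the time constant, threshold, and input levels are arbitrary assumptions):

```python
import numpy as np

def lif_encode(signal, tau=20.0, threshold=1.0, dt=1.0):
    """Leaky integrate-and-fire encoding: analog input -> asynchronous spike train."""
    v, spikes = 0.0, []
    for s in signal:
        v += dt * (-v / tau + s)      # leaky integration of the input current
        if v >= threshold:
            spikes.append(1)          # emit a spike...
            v = 0.0                   # ...and reset the membrane potential
        else:
            spikes.append(0)
    return np.array(spikes)

# A firmer press produces a denser spike train than a light touch
light = lif_encode(np.full(200, 0.08))
firm = lif_encode(np.full(200, 0.20))
```

Downstream SNN layers then operate directly on such spike trains, processing events only when they occur rather than at a fixed clock rate.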
3.3.3. Transformer and Large Language Models
The introduction of transformer marked a significant paradigm shift in AI models’ ability to handle multimodal data and process vast amounts of text information. Transformer, initially designed for natural language processing tasks,291 demonstrated remarkable success in capturing contextual relationships in sequential data, enabling their extension to other modalities from audio292 to video,293,294 and even protein structure prediction.295 Through the self-attention mechanism, transformer overcomes the limitations of RNN by effectively capturing long-range dependencies and parallelizing computations, allowing them to scale up efficiently.296 Transformer is based on an encoder–decoder architecture where the encoder takes an input sequence (e.g., audio signal) and processes it using self-attention layers to create a contextualized representation of each data point in the input.294 This representation is often called an “embedding”. The decoder then uses the encoder’s output to generate an output sequence (e.g., a translation or a continuation of the text) using another set of self-attention layers and additional feed-forward layers.297,298 Conventional deep neural networks employed in e-skins can perform relatively straightforward tasks such as regression or classification on a few channels of sensor data. Transformer shows its capability in addressing more intricate challenges in e-skins, for instance, reconstructing deformation with high resolution,56 or transforming single-channel sensor input into accurate interpretations of complex finger motions.57
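The core of the self-attention mechanism described above can be sketched in NumPy as scaled dot-product attention (random projection weights and a toy sequence, for illustration only):

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of embeddings X (T x D)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise similarity of all positions
    A = softmax(scores, axis=-1)              # each row: attention over the sequence
    return A @ V, A                           # contextualized embeddings + weights

rng = np.random.default_rng(5)
T, D = 6, 4                                   # assumed sequence length and embedding size
X = rng.normal(size=(T, D))
Wq, Wk, Wv = (rng.normal(size=(D, D)) for _ in range(3))
out, A = self_attention(X, Wq, Wk, Wv)
```

Because every position attends to every other position in one matrix product, long-range dependencies are captured without the step-by-step recurrence of an RNN, which is also why the computation parallelizes well.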
Large language models (LLMs) built on the transformer architecture, such as ChatGPT (GPT-4), have emerged as the epitome of multimodal AI with multilanguage and reasoning capabilities, owing to their impressive scale and architecture.299−301 Indeed, the latest LLMs exhibit abilities in multimodal fusion and in transferring knowledge across domains.302,303 This advancement holds the potential to significantly enhance the utility of e-skins requiring control with feedback. For instance, LLMs can mitigate the need for extensive data collection for similar items in object grasping (e.g., grasping apples versus pears) and can "explain" anomalies within e-skin sensor data through cognitive reasoning.
Despite the advancements achieved in multimodal AI models, several challenges persist for their usage in e-skin applications. Training LLMs and transformers from scratch requires vast computational resources and huge amounts of annotated data, and LLMs are often too large to deploy on resource-constrained hardware. Therefore, transfer learning and few-shot learning can be considered to alleviate these problems when applying transformer-based models to e-skins.
3.4. Summary
As summarized in Figure 9, the implementation of AI in e-skin applications has evolved from traditional ML to deep learning and is transitioning toward larger-scale AI models. This trend is mostly driven by the increasing complexity and modality of e-skin data. Presently, e-skins exhibit a relatively limited range of sensor channels and possess a lesser cognitive capacity compared to human skin. We foresee an increase in sensing capability within a single e-skin platform, aligning with the ongoing trend of more robust and potent large-scale AI models. We anticipate that the relationship between AI models and e-skins will further deepen, enabling e-skins to harness advances in machine intelligence and become more versatile and capable in broader scenarios.
Figure 9.

Summary of neural networks applied in recently reported AI-enabled e-skins (refs (36, 37, 41, 49, 51, 56, 58, 213, 247−250, 267, 279, 283−285, and 304)), human senses (taste,305 hearing,306 touch,307 smell,308 and sight309), the human brain,310 and landmark models.291,311−313 As data from e-skins grew and multimodal interactions became more intricate, the limitations of MLPs became evident. CNNs emerged as powerful tools for automated analysis. Attention-based models later demonstrated their versatility in capturing contextual relationships in sequential data, enabling more complex tasks for e-skin applications. However, there is still a noticeable gap between present AI-enabled e-skins and humans in both sensing and cognition. Images were reproduced with permission from ref (49), Copyright 2022 Springer Nature; ref (56), Copyright 2023 Springer Nature; ref (36), Copyright 2019 Springer Nature; and ref (267), Copyright 2023 Springer Nature.
4. AI-Driven E-skin Design
After the discussion of e-skins and AI models, we turn our attention to how AI enables e-skin design, more specifically, the design of sensors applied in e-skin systems. Typically, an e-skin sensor is composed of electrode layers and a sensing layer on a flexible/stretchable substrate, made from conductive and sensitive materials, respectively. Furthermore, functional materials can be incorporated into the electrode, sensing, and substrate layers to endow the sensor with functionality beyond sensing, such as self-healing, biodegradability, and breathability. Various materials and devices have been developed with different fabrication methods for diverse e-skin applications and have been characterized and reported under different standards. Drawing insights from the numerous reported materials and devices to advance e-skin design is a labor-intensive and time-consuming process. Besides, relying on the design intuition of human experts likely limits the design to the scope within human reach. Benefiting from dramatic improvements in computational/memory power and algorithms, AI can accelerate the optimization of materials and devices for a specific application and even assist in discovering new materials and devices for entirely new types of e-skin. AI can also guide the fabrication of e-skins by suggesting fabrication recipes, reducing the workload in design exploration and device prototyping. Moreover, AI is effective at handling large data sets and finding relationships in data, which can be used to explain complex phenomena and uncover underlying mechanisms. In this section, we discuss AI-driven materials discovery and device design for e-skin sensors. Because there are only a few reports on AI-driven e-skin design so far, we introduce each cited work in detail to show how AI is involved in the design workflow; the scarcity of such reports also implies great promise in this field.
4.1. Materials Discovery
AI-driven materials discovery can potentially reshape the design, fabrication, characterization, and fundamental study of materials in an efficient way.314−316 To this end, a feature space and a target space should be constructed to reflect a material design space: the feature space includes information on materials in terms of composition and structure, and the target space includes materials properties. For AI-driven materials fabrication, the feature space should additionally include fabrication recipes and environmental conditions. To make AI models run efficiently, suitable descriptors are needed to describe the features and targets; these should be machine-readable and map one-to-one to all points in a material design space.314,317,318 A long-standing goal is to design and fabricate materials with target properties, and active learning and inverse design have been proposed to achieve it.319
Current AI-driven materials discovery is mostly at the molecular level, where electrical, optical, and electrochemical properties of molecules are targeted. For e-skin materials design, besides these basic properties, mechanical properties are an important consideration for reliability under dynamic deformation. More specifically, conductive and sensitive materials, which are indispensable for e-skin sensors, should meet the requirements on electrical properties (e.g., conductor, semiconductor, insulator) and mechanical properties (e.g., flexibility and stretchability) for a specific application. These requirements often involve a trade-off among different properties.320 Thus, design at a higher level is usually called for, as design at the molecular level alone is not sufficient for practical problem-solving. Even so, design at the molecular level is significant, as it can contribute entirely new materials for e-skins. Below we introduce some examples of AI-driven materials discovery, divided into four parts: substrate materials, conductive materials, sensitive materials, and functional materials.
4.1.1. Substrate Materials
Substrates mechanically support e-skins, calling for flexibility and stretchability, chemical inertness, heat stability, and weather resistance as basic properties. Flexible/stretchable polymers are commonly used as substrate materials, such as polyimide, parylene, PET, silicone elastomers, polyurethane, and styrene ethylene butylene styrene block copolymer (SEBS). Innovations in polymer molecules and fabrication processes hold the promise of creating highly functional substrates, for instance, a self-delaminated, ultraconformal aramid nanodielectric substrate with a thickness of around 120 nm.321 ML can accelerate the discovery and design of polymers and polymer nanocomposites with desired optical, thermal, and mechanical properties,322,323 and robot-assisted experiments can further accelerate the process and alleviate the burden on researchers.323 These advances may accelerate the discovery of substrate materials for e-skins.
4.1.2. Conductive Materials
In e-skins, conducting materials are extensively utilized as electrodes, which require high electrical conductivity and reliability under dynamic deformation. Generally, flexible conductive materials are realized with intrinsically flexible conductive materials (e.g., graphene, conjugated polymer, liquid metal),324−326 by incorporating a conductive guest into a flexible host (e.g., metal polymer composite, ionic gel, metal-coated film),327−329 or by endowing conductive materials with flexible structures (e.g., wrinkle, serpentine, crumple).330−332 Besides electrical conductivity and mechanical reliability, other properties should also be taken into consideration in an application scenario, such as safety, biocompatibility, and stability. Appropriate conductive materials must be chosen for each specific application, since no single conductive material suits all applications, and AI can help to find appropriate candidates. To elucidate this, we analyze the AI-driven design of a flexible metal polymer composite film, a silver/poly(amic acid) (Ag/PAA) composite film, optimized for its electrical properties (Figure 10a).333 This composite was prepared by spin-coating a PAA solution to obtain a PAA film, which was then dipped into silver nitrate (AgNO3) solution for ion exchange and subsequently into sodium borohydride (NaBH4) solution to reduce Ag+ to Ag. Here, the PAA concentration, the AgNO3 ion exchange time, the NaBH4 concentration, and the NaBH4 reduction time directly affected the electrical properties of the Ag/PAA composites. Thus, these four preparation parameters were set as features, and the product of the sheet resistance and the processing time of the corresponding Ag/PAA film was set as the target to train a back-propagation (BP) neural network based on the differential evolution (DE) algorithm.
After training on 1077 samples and verification on 49 samples, a predictive AI model was acquired with relative prediction errors of <1.96%. Based on this model, optimized preparation parameters for Ag/PAA composites with sheet resistance less than 1 Ω were predicted and used to make the electrodes of a triboelectric nanogenerator (TENG) and a capacitive pressure sensor array. In addition to electrodes, flexible conductive materials can also function as sensitive materials (discussed in section 4.1.3) for e-skin sensors. Here, optimized preparation parameters for Ag/PAA composites with sheet resistance ranging from 10 to 100 Ω were predicted and used to make the sensing layer of a flexible resistive strain sensor, since in this case the sheet resistance is sensitive to the strain of the film.
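To illustrate the shape of this workflow, the following sketch trains a small one-hidden-layer feed-forward regressor on synthetic data standing in for the four preparation parameters and the resistance-time target; it uses plain gradient-descent back-propagation, whereas the cited work tuned the BP network with a differential evolution algorithm, and the data-generating function here is entirely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in data: four preparation parameters (PAA concentration,
# AgNO3 exchange time, NaBH4 concentration, NaBH4 reduction time), normalized
# to [0, 1], mapped to a synthetic scalar target (sheet resistance x time).
X = rng.uniform(size=(200, 4))
y = (X @ np.array([0.5, -1.0, 0.8, -0.3]) + 0.2 * X[:, 0] * X[:, 1])[:, None]

# One-hidden-layer BP network trained with full-batch gradient descent
# (the cited work used differential evolution to tune the network instead).
W1 = rng.normal(scale=0.5, size=(4, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)                 # forward pass
    pred = H @ W2 + b2
    err = pred - y                           # backpropagate squared error
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H**2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

Once trained, such a surrogate can be evaluated cheaply across the whole recipe space to pick preparation parameters that hit a resistance target, which is the role the predictive model played in the cited work.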
Figure 10.
AI-driven materials discovery. (a) Optimization of flexible Ag/PAA composite films for different applications (i.e., electrodes and resistive sensing materials). Image was reproduced with permission from ref (333). Copyright 2020 Royal Society of Chemistry. (b) Discovery and optimization of BaTiO3 compounds for large electrostrain via active learning in the material design space with around 605,000 compositions. Image was reproduced with permission from ref (336). Copyright 2018 John Wiley and Sons. (c) A microstructure design space of BaTiO3 nanofiller in the BaTiO3/PVA piezoelectric composite with 400 microstructures and schematics of 10 representative microstructures. This was for the theoretical analysis and optimization of nanofillers in the composites. Image was reproduced with permission from ref (337). Copyright 2022 John Wiley and Sons under CC BY-NC 4.0 license. (d) AI model combining dynamic EFM model and static CNN model to predict and understand the toughness evolution of an intrinsic self-healing polymer over time with an initial single-cut image as the input. Image was reproduced with permission from ref (340). Copyright 2022 American Chemical Society.
4.1.3. Sensitive Materials
Sensitive materials, the key to e-skin sensors, transduce stimuli into electrical response outputs once connected to electrodes.70 Besides piezoresistive materials, there are other sensitive materials such as piezocapacitive, piezoelectric, and thermosensitive materials.12,16 Each category comprises many materials in terms of chemical composition, and for each material with a fixed chemical composition, the structure can be further adjusted to enrich the material design space. Taking piezoelectric materials as an example, their mechanical–electrical energy conversion has been applied in sensors and in other devices, such as actuators and energy harvesters, which are needed in some e-skin systems.334 Barium titanate (BaTiO3)-based ceramics are a family of lead-free piezoelectrics with stable piezoelectric response, high electromechanical coupling efficiency, and good biocompatibility.335 Chemical substitution (i.e., doping) is a route to optimizing the electrostrain of BaTiO3-based piezoelectric materials, represented by the (Ba1.0–x–yCaxSry)(Ti1.0–u–vZruSnv)O3 formulation (Figure 10b).336 Here, the mole fractions x, y, u, and v were rationalized in the ranges of 1.0 – x – y > 0.6, x < 0.4, y < 0.3, 1.0 – u – v > 0.6, u < 0.3, and v < 0.3. This gave rise to a material design space of around 605,000 possible compounds, of which 61 had been synthesized to form a data set with 61 experimental data points. To discover the BaTiO3 compound with the largest electrostrain, it is ill-advised to search this vast materials space by trial and error alone. Active learning is a class of algorithms that can cut down the number of experimental data points needed to reach the targets by iteratively recommending data points to learn from (i.e., guiding the next experiments or calculations). Thus, active learning coupled with an optimization method was adopted to accelerate BaTiO3 discovery.
During each active learning loop, 1,000 models were generated via bootstrap sampling to predict the electrostrain of BaTiO3 piezoelectrics. A design strategy balancing exploitation and exploration guided the search for the best BaTiO3 candidate in the total materials space, recommending the next compound for physical synthesis to obtain a data point for the following loop: exploration targeted regions with the largest uncertainties, while exploitation targeted regions with small uncertainties but large average predicted electrostrain. This trade-off strategy outperformed the alternatives (pure exploitation, pure exploration, random selection) in every loop. At the third loop, the compound (Ba0.84Ca0.16)(Ti0.90Zr0.07Sn0.03)O3 was discovered, which possessed the largest electrostrain (0.23%) in the BaTiO3 family.
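A minimal sketch of such an acquisition step is given below: a bootstrapped ensemble of linear surrogates (far smaller than the 1,000 models in the cited work) scores unmeasured candidates by predicted electrostrain plus an uncertainty bonus; the toy 4-feature design space and linear ground truth are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def acquisition(candidates, X_train, y_train, n_models=50, kappa=1.0):
    """Pick the index of the next composition to synthesize.

    Bootstrap an ensemble of simple linear surrogates, then score each
    unmeasured candidate by mean prediction + kappa * ensemble spread --
    a trade-off between exploiting high predicted electrostrain and
    exploring where the models disagree most.
    """
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X_train), len(X_train))  # resample with replacement
        w, *_ = np.linalg.lstsq(X_train[idx], y_train[idx], rcond=None)
        preds.append(candidates @ w)
    preds = np.array(preds)                        # (n_models, n_candidates)
    score = preds.mean(0) + kappa * preds.std(0)   # exploit + explore
    return int(np.argmax(score))

# Hypothetical toy design space: 4 dopant fractions -> scalar electrostrain.
X_train = rng.uniform(size=(12, 4))
y_train = X_train @ np.array([0.1, 0.3, -0.2, 0.05])
candidates = rng.uniform(size=(500, 4))
next_idx = acquisition(candidates, X_train, y_train)
```

Setting `kappa` to 0 recovers pure exploitation, while a very large `kappa` approaches pure exploration; the cited work found the balanced strategy best in every loop.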
Above, we showed AI-driven discovery of sensitive materials with target properties at the molecular level. For e-skins, composites are often required to combine different desired properties. In this case, the category and microstructure of fillers in the matrix affect the composite properties and can thus serve as a direction in materials design. We again take piezoelectric materials as an example. Piezoelectric nanocomposites can combine the advantages of inorganic piezoelectric oxides (e.g., high piezoelectric coefficient, high electromechanical coupling efficiency) with those of organic polymers (e.g., high flexibility, good biocompatibility); one example is the BaTiO3/poly(vinyl alcohol) (PVA) composite (Figure 10c).337 Using high-throughput phase-field simulation, the morphology and spatial orientation of BaTiO3 nanofillers in the PVA matrix were simulated, followed by calculation of the piezoelectric, mechanical, and dielectric properties of the corresponding composites. The simulation results of 400 microstructures were then used as a data set to train a regression-based ML model and establish an analytic expression. Here, two geometric ratios (ax/az and ay/az) were set as the input features, and the composite properties (piezoelectric coefficient, dielectric permittivity, and mechanical stiffness) were set as the output targets. After optimization, the model predicted that a nanopillar filler perpendicular to the composite film plane (i.e., ax/az = 0.1 and ay/az = 0.1) produced the highest values, agreeing well with the simulation results. Furthermore, 10 categories of oxide nanofillers in the PVA matrix were simulated, whose results were used as a data set to train and verify a new regression model.
This model finally contributed an analytic expression that predicts composite properties from the materials constants of oxide nanofillers, which could facilitate the design and optimization of piezoelectric nanocomposites. This work also exemplifies the role of ML in the fundamental theoretical study of materials.
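The idea of fitting an analytic surrogate over a microstructure design space can be sketched as follows, with a quadratic polynomial in the two geometric ratios fitted by least squares; the ground-truth relationship, in which small ratios (slender vertical pillars) maximize the property, is a toy stand-in for the phase-field results, not the published expression.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "simulation" data set: 400 microstructures described by the two
# geometric ratios (ax/az, ay/az) of the nanofiller, mapped to one scalar
# composite property. The relationship is a hypothetical stand-in in which
# slender vertical pillars (small ratios) maximize the property.
r = rng.uniform(0.1, 10.0, size=(400, 2))
prop = 2.0 - 0.12 * r[:, 0] - 0.12 * r[:, 1] + 0.01 * r[:, 0] * r[:, 1]

# Least-squares fit of a quadratic polynomial in the two ratios,
# standing in for the regression-based ML model of the cited work.
A = np.column_stack([np.ones(len(r)), r[:, 0], r[:, 1],
                     r[:, 0]**2, r[:, 1]**2, r[:, 0] * r[:, 1]])
coef, *_ = np.linalg.lstsq(A, prop, rcond=None)

def predict(ax_az, ay_az):
    """Evaluate the fitted analytic expression at one microstructure."""
    x = np.array([1.0, ax_az, ay_az, ax_az**2, ay_az**2, ax_az * ay_az])
    return float(x @ coef)
```

Because the fitted expression is a closed-form polynomial, the whole design space can then be screened analytically rather than by rerunning simulations.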
4.1.4. Functional Materials
To mimic the features of human skin, functional materials like self-healable and biodegradable materials have been developed and added into substrate, electrode, and sensing layers to endow e-skins with capabilities of self-healing and biodegradability.187,199,338 Self-healable and biodegradable materials also hold the promise of meeting increasing sustainability demands, via self-healing for extended lifetimes in case of device damage and via biodegradation during disposal after device decommissioning, respectively. They can be applied as pure materials or composites, such as a self-healing sensor whose electrodes are composed of conducting carbon black and self-healing polymer203 and a biodegradable sensor with biodegradable doped silicon nanomembrane electrodes on a biodegradable polymer substrate.339 During the use of self-healing and biodegradable materials, the temporal evolution of mechanical and electrical properties should be considered and designed so that devices operate as intended. Here, we introduce a work showing how ML helps predict and understand the temporal evolution of the macroscopic mechanical properties of a self-healing polymer (Figure 10d).340 A dynamic toughness predictive model was proposed by combining a dynamic energy functional minimization (EFM) model with a static CNN model. The EFM model was an ML model based on the combination of image data learning and the physical constraint of energy minimization. The EFM model was trained using a time series of images of cuts made in polymer samples during the self-healing process, predicting the changes in cut shape over time and generating a time series of cut images accordingly. The CNN model predicted toughness by correlating a toughness value with a cut image.
Around 100,000 frames of cut images were used to build the EFM model, and 40 cut images (34 cuts for training and 6 for testing) were used to build the CNN model, which was trained for 1,000 epochs. This finally yielded an AI model able to predict the entire toughness trajectory during polymer self-healing from an initial single-cut image. The model could also predict other macroscopic mechanical properties such as Young's modulus and ultimate tensile strength. As claimed in the paper, ML could capture the temporal evolution of macroscopic materials properties during self-healing, which is difficult for pure experiments or computational modeling because toughness measurements rely on destructive testing. This is important not only for the design and optimization of self-healing materials and corresponding e-skin systems but also for understanding healing dynamics.
4.2. Device Design
To design an e-skin sensor, the first stage is validation of the working principle, followed by sensor optimization for specific applications. Typically, both involve heuristic approaches that require significant effort to achieve the desired results.341 Sensor optimization in particular entails repetitive work such as materials formulation, structure adjustment, device fabrication, electrical wiring, and data collection and analysis, incurring high costs in resources, labor, and time. Even so, only limited design data can be obtained. AI can lighten the burden on researchers and accelerate sensor development. AI-driven sensor design demands large amounts of high-quality data on device fabrication and performance for model training. Depending on the type of data used to train the AI models at the core of the design workflow, AI-driven sensor design can be divided into design driven by virtual data and design driven by real data. It should be noted that both aim to assess a target design before fabrication.
4.2.1. Sensor Design Driven by Virtual Data
E-skin sensor design driven by AI and virtual data has distinct advantages: it not only saves time and money on experiments but can also be driven by large-scale data sets. Before sensor design, a virtual data set should be generated to reflect the exerted stimuli and the sensor response; for instance, simulation or computational modeling can be used to generate the stimuli and response data.342 In this case, the sensor performance should be quantifiable, and the factors affecting it should be extractable as features. We show an example to illustrate this. A stretchable capacitive e-skin sensor array was designed virtually using three-dimensional solid mechanics and electrostatics coupled-field simulation, aiming to reconstruct the high-resolution three-dimensional deformation of a square soft robot arm (Figure 11a).56 The sensor array consisted of the four surfaces of the robot (dielectric silicone) topped with 64 in-plane electrodes, with 16 electrodes arranged in two lines on each surface (Figure 11a(i)). The capacitance values of the capacitors formed by electrode pairs in the same layer and between two adjacent layers were chosen to represent the response of the sensor array when the robot deformed. This corresponded to 392 independent capacitance readouts to reconstruct one robot gesture (i.e., a measurement frame). Given the constant permittivity of the silicone forming the capacitors, the capacitance change was primarily determined by geometric variations of the capacitors related to robot deformations. By virtue of this simulation, capacitance data and deformation data were generated simultaneously for each frame, and 39,334 frames in total were utilized to train a transformer deep network for robot deformation reconstruction.
Sensor array design was further carried out virtually to balance the reconstruction performance (the distances from the ground-truth points) against practical fabrication complexity, optimizing the number of readouts (determined by the electrode layout). Based on the analysis of 7,096 testing samples, the reconstruction performance improved only minimally once the number of readouts exceeded 76 (output by 32 electrodes) (Figure 11a(ii)). Therefore, a physical sensor array with 32 electrodes outputting 76 independent capacitance readouts per frame was finally fabricated (Figure 11a(iii)) and adopted together with the trained transformer deep network to achieve high-resolution deformation reconstruction represented as a point cloud (Figure 11a(iv)).
Figure 11.
AI-driven sensor design. (a) Sensor design using virtual data for training. (i) Electrode layout of a simulated capacitive e-skin sensor array with 64 electrodes. (ii) The reconstruction performance of the AI model under four electrode layouts. (iii) The schematic of a physical capacitive e-skin sensor array with 32 electrodes. (iv) Deformation reconstruction process from data collection to point cloud representation. Image was reproduced with permission from ref (56). Copyright 2023 Springer Nature. (b) Sensor design using real data from experiments for training. (i) Schematics and resistance–strain profiles of strain sensors with different film composition (left) and film microstructure (right). (ii) Schematic of a navigation model to progressively explore a strain sensor design space through 12 active learning loops where 125 strain sensors were stepwise fabricated to input data for training. Image was reproduced with permission from ref (53). Copyright 2022 Springer Nature.
4.2.2. Sensor Design Driven by Real Data
E-skin sensor design driven by AI and real data is adopted when manifold design factors affect a sensor (e.g., materials composition, materials structure, and device structure). In this case, it is difficult to simulate the sensor response with all these factors accounted for in a virtual environment. However, it is also challenging to acquire massive amounts of high-quality real data because of the high cost in time and money of device fabrication and characterization, so a sensor design approach based on a small real data set is desirable. In addition, data scarcity can result in less diversity, decreasing the accuracy of AI models; it is therefore attractive to increase the number of data points at low cost, for instance, by generating virtual data based on real data. Here we introduce an example of sensor design driven by real data followed by augmentation to hybrid (virtual and real) data (Figure 11b).53 The example concerns a class of resistive strain sensors for soft machines made from composite films with tunable composition, thickness, and microstructure placed on a VHB substrate (Figure 11b(i)). Two kinds of nanomaterials (single-wall carbon nanotubes (SWNT) and MXene nanosheets) and one kind of polymer (PVA) were used to fabricate the composite films, where the loading of each component was tuned to realize different sensing characteristics. The film thickness (from hundreds of nanometers to several micrometers) and microstructure (planar, wrinkled, crumpled) were also tunable. For the sensing characteristics, 4 strain labels reflecting device performance were used to represent the sensor response: initial strain (ε0), strain at a gauge factor (GF) of 10 (ε10), strain at a GF of 100 (ε100), and ultimate strain (εmax). These 4 strain labels together with 4 recipe labels reflecting device fabrication (SWNT loading, PVA loading, film thickness, and film microstructure) constituted one input–output data point.
Here, 125 strain sensors were fabricated, and the corresponding 125 data points were fed into an AI model stepwise, as suggested by the model itself, over 12 active learning loops (Figure 11b(ii)). This trained model served as the basis of AI-enabled strain sensor prediction, yielding a preliminary prediction model at first. Then around 10,000 data points were generated virtually from the 125 real data points using a user input principle (UIP), which were adopted together with genetic algorithm selection to construct and optimize the ultimate sensor prediction model. It should be noted that the UIP method was implemented according to physical principles suggested by expert users, signifying that expert experience and intelligence played an important role in realizing the proposed prediction model. The model was targeted at predicting device performance for a given fabrication recipe and at recommending a fabrication recipe for a customized device performance request, which the authors termed two-way automatic strain sensor design.
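The two-way use of a trained prediction model can be sketched as follows: a forward model maps a normalized recipe to one performance label, and the inverse direction is served by searching candidate recipes for the one whose prediction best matches a requested target (the cited work used a genetic algorithm for this search). The linear forward model and all numbers here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical forward model: recipe (SWNT loading, PVA loading, thickness,
# microstructure code), all normalized to [0, 1], -> one performance label
# (e.g., ultimate strain). Stands in for the trained prediction model.
def forward_model(recipe):
    w = np.array([0.2, 0.5, -0.1, 0.3])
    return float(np.asarray(recipe) @ w)

def recommend_recipe(target, n_candidates=5000):
    """Inverse design by random search: sample candidate recipes in the
    normalized design space and return the one whose predicted performance
    is closest to the requested target."""
    candidates = rng.uniform(size=(n_candidates, 4))
    preds = np.array([forward_model(c) for c in candidates])
    best = np.argmin(np.abs(preds - target))
    return candidates[best], preds[best]

recipe, achieved = recommend_recipe(target=0.4)
```

The same forward model thus serves both directions: performance prediction for a given recipe, and recipe recommendation for a requested performance.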
5. AI-Empowered E-skin Applications
Essentially, e-skins serve as sensors that capture diverse external stimuli and convert them into electronic signals.6 Their mechanical flexibility, akin to that of human skin, allows for versatile deployment in various application scenarios, offering soft or flexible interfaces for digitized "interaction" and "monitoring". E-skins have demonstrated remarkable versatility, finding widespread applications in fields such as health monitoring, robotics, human–machine interfaces, virtual reality/augmented reality, and bionic prosthetics. For instance, pressure-sensitive e-skins can monitor human body pulses and pressure distribution, strain-sensitive e-skins can track gestures and facial expressions, and nanogenerator-based e-skins can capture sound signals and discern textures.
However, a closer examination of these applications reveals that their functions are largely limited to recording and monitoring, with little intelligence involved. To understand the physical significance of signals obtained by e-skins and their practical implications in various application scenarios, thorough data processing and analysis are essential. This may involve various essential tasks, including calibration, denoising, resampling, signal filtering, temporal- or frequency-domain analysis, and more. While specialized data processing and analysis software can aid in performing these tasks based on fixed predefined conditions, it still encounters challenges such as being time-consuming, complex, and even subjective in certain cases. Moreover, for highly complex sensory data, manual processing and analysis prove unreliable. For instance, dealing with a large number of measured pulse waveforms, manually calibrating them, and identifying features for disease diagnosis pose significant challenges and inefficiencies. Additionally, for "intelligent" tasks such as recognizing handwriting through strain signals acquired by e-skins attached to fingers or wrists, manual accomplishment becomes entirely infeasible.
In contrast, tasks performed by human skin do not require such slow data processing and analysis. For instance, humans can swiftly discern the texture of an object by touching it with their hands. In this process, receptors beneath the skin collect mechanical and thermal signals from the object’s surface, which are then transmitted to the cerebral cortex through neural fibers. The brain processes, integrates, and comprehends these signals, leveraging experiences and knowledge to rapidly recognize and distinguish the distinctive features of the object. In this process, the human brain plays a pivotal role in converting seemingly meaningless tactile signals into meaningful information about the texture of an object, demonstrating what we refer to as “intelligence”. Clearly, conventional e-skin data processing and analyzing methods are inadequate and constrained in handling higher-order complex tasks. There exists an urgent demand for an “artificial brain” capable of performing such functions, and AI serves as an ideal candidate for fulfilling this role.
Recently, researchers have increasingly directed their efforts toward leveraging AI technology to enhance the potential functionalities of e-skins, aiming to achieve intelligent data processing and analyzing mechanisms akin to those exhibited by humans. The sensory data collected by e-skins in real-world applications contain rich information. AI algorithms can efficiently analyze large amounts of data to identify both direct and hidden correlations between raw data and meaningful information. These algorithms are capable of detecting patterns that would be difficult or impossible for humans to find. This ability to find correlations can be applied to a variety of tasks, such as classification, assessment, prediction, etc. The integration of AI with e-skins facilitates object recognition, environmental sensing, decision-making support, exercise analysis, health status assessment, and other functionalities, significantly advancing the capabilities and expanding the application scenarios of e-skins.
This section explores the role of AI in empowering e-skin applications, particularly concerning different types of e-skin sensors, including pressure sensors, strain sensors, temperature sensors, nanogenerator sensors, multimodal sensors, and others. We will delve into how AI algorithms can be integrated with e-skins and how this integration can propel the capabilities of e-skins to new heights.
5.1. AI-Empowered Pressure-Sensitive E-skins
Pressure perception is at the core of the human tactile sense, and consequently, pressure sensing is a fundamental functionality of e-skins. Since the concept of e-skins was introduced, a significant portion of researchers' efforts has been focused on creating flexible or stretchable pressure sensors to establish tactile senses for e-skins.
Researchers have extensively demonstrated the use of e-skin pressure sensors in various applications, such as pulse monitoring, pressure distribution mapping, texture recognition, gesture recognition, etc. As mentioned earlier, these applications often require complex manual data processing and analysis, and a significant amount of valuable information is lost during this process, thus limiting the application of e-skins to simple tasks and small data volumes. However, AI algorithms can effectively compensate for these shortcomings, thereby significantly unleashing the application potential of e-skins and driving the development of numerous novel and powerful applications.
5.1.1. Texture Recognition
Pressure-sensitive e-skin is widely used for texture recognition through the analysis of the frequency of frictional vibrations generated when the sensor slides over the texture.343,344 While this approach proves effective, it has certain limitations when the data are analyzed manually: (i) It requires a fixed friction speed. (ii) It performs well only on periodic surface textures. (iii) The resolution of vibration frequency is limited, making it challenging to differentiate similar textures. (iv) Signal analysis becomes exceptionally difficult for complex nonperiodic textures. (v) Friction may potentially damage the surfaces of both the object being detected and the e-skin.
By harnessing AI, these limitations can be significantly alleviated, elevating texture recognition capabilities to a higher level.234,252,345−347 Yao et al. developed a microstructured pressure sensor based on an environmentally stable graphene/graphene oxide composite. The sensor exhibited a fast and reliable vibration response for texture detection.112 Using the kNN algorithm to classify the frequency composition of textures, they achieved an impressive 97% accuracy under fixed-speed conditions. Notably, the algorithm also enabled a high accuracy of 85% in recognizing 5 samples through casual human finger sliding with the sensor attached to the fingertip. Traditional analysis methods struggle in this unconstrained scenario due to a lack of clues, whereas AI can uncover the hidden connection between human sliding habits (speed) and vibration frequency, thereby achieving high-accuracy texture recognition.
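The frequency-composition-plus-kNN pipeline described above can be sketched as follows. This is a minimal illustration with synthetic vibration signals and a hand-rolled kNN classifier; all signal parameters and the band-energy feature are assumptions for the toy example, not the implementation of ref (112):

```python
import numpy as np

def fft_features(signal, n_bins=8):
    """Coarse spectral signature: relative energy in n_bins frequency bands."""
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, n_bins)
    feats = np.array([b.sum() for b in bands])
    return feats / (feats.sum() + 1e-12)  # normalize out amplitude (sliding-force) variation

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbors majority vote."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]

# Synthetic "textures": each texture excites a characteristic vibration frequency.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512, endpoint=False)
def vibration(freq):
    return np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(t.size)

train_X = np.array([fft_features(vibration(f)) for f in (20, 20, 60, 60, 120, 120)])
train_y = np.array([0, 0, 1, 1, 2, 2])

# A test slide at a slightly shifted frequency still lands in the right class.
pred = knn_predict(train_X, train_y, fft_features(vibration(62)))
```

Normalizing the band energies is what lets the classifier tolerate amplitude variation between slides; a real system would additionally need features robust to speed variation.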
Alternative mechanisms for texture recognition40,348 are also enabled by e-skin. For example, Yao et al. further developed a novel texture recognition method utilizing ultralow hysteresis and high-density dynamic tactile mapping, with the aid of CNN (Figure 12a).40 They employed a 10 × 10 sensing matrix within a 1 cm2 area (TRACE-FA) to interact with the surfaces of various textures through single-contact (one-touch) action. During the touch–detouch process, pressure mapping images of the contact profiles were dynamically collected using an event-based sampling system. The combination of low hysteresis, high density, and high sensitivity allows for precise recording of tactile image details throughout the process. As a result, this method achieved an impressive classification accuracy of 94.3% with a low standard deviation of 5.3%, approximately 26% higher than that of a sensor matrix with high hysteresis (PEDOT:PSS-FA).
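As a schematic of how such one-touch pressure-map sequences feed a CNN, the sketch below runs a single convolution–ReLU–pooling stage (the core CNN operation) over a toy 10 × 10 touch–detouch sequence. The kernels and frame data are invented for illustration and bear no relation to the trained network of ref (40):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (single channel), the core CNN operation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def feature_vector(frames, kernels):
    """Convolve every tactile frame with every kernel; ReLU + global max pool."""
    feats = []
    for frame in frames:                 # frames: touch-to-detouch pressure maps
        for k in kernels:
            feats.append(np.maximum(conv2d(frame, k), 0).max())
    return np.array(feats)

# A toy "one-touch" sequence on a 10x10 array: no contact, peak contact, release.
frames = [np.zeros((10, 10)) for _ in range(3)]
frames[1][4:7, 4:7] = 1.0                       # peak-contact pressure map
edge_k = np.array([[1., -1.], [1., -1.]])       # responds to vertical contact edges
blob_k = np.ones((2, 2)) / 4.0                  # responds to contact area
v = feature_vector(frames, [edge_k, blob_k])    # 3 frames x 2 kernels = 6 features
```

In a trained CNN, many such learned kernels are stacked in depth; the resulting feature vector is what the final classification layers act on.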
Figure 12.
AI-empowered pressure-sensitive e-skin applications. (a) High-accuracy texture recognition using CNN through simple one-touch with a crack-based high-density, low-hysteresis pressure sensor array. Image was reproduced with permission from ref (40). Copyright 2020. Published under the PNAS license. (b) Intelligent disease diagnostic system based on multidimensional feature extraction of pulse pressure signals using random forest classifier. Image was reproduced with permission from ref (59). Copyright 2023 The Authors. Published by American Chemical Society. (c) Object recognition using CNN with a glove equipped with 548 pressure sensors covering the entire hand. Image was reproduced with permission from ref (36). Copyright 2019 Springer Nature. (d) Super-resolution tactile e-skin assisted by MLP. Image was reproduced with permission from ref (50). Copyright 2022 The American Association for the Advancement of Science.
5.1.2. Health Monitoring
Pulse monitoring is a common application of pressure-sensitive e-skin, resembling pulse diagnosis in traditional Chinese medicine. However, the conventional use of pulse pressure sensing is limited to monitoring, and further analysis faces challenges such as the complexity of the data, inconsistencies between sensors, and the lack of standardized protocols. These limitations hinder comprehensive analysis and leave the rich information embedded in the pulse waveform underutilized. The use of AI can partially address such limitations. For instance, Liu et al. introduced a 27-channel (3 × 9) high-density sensor array designed for pulse monitoring and developed two intelligent algorithms to extract spatial and temporal distribution information from pulses for disease diagnosis (Figure 12b).59 Traditional Chinese medicine research has revealed that the pulse profiles at different locations correlate with the health of the human body.349 To capture comprehensive pulse information, the authors employed three groups of nine channels to monitor pulses at the positions of Cun, Guan, and Chi. Direct analysis of such a large amount of data from the pressure-sensing array for health status estimation would be challenging. However, with the assistance of AI algorithms, they efficiently extracted six-dimensional features of pulse length, width, depth, intensity, rate, and rhythm. Utilizing these extracted features, they successfully recognized pulses from 9 volunteers with a high accuracy of 97.8%, indicating the potential for disease diagnosis. This advancement can significantly benefit patients who lack the professional knowledge to evaluate their own health status. However, it is important to note that the study did not proceed further to demonstrate the actual use of AI in interpreting pulses for real disease diagnosis, which necessitates additional clinical research.
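Two of the six pulse features, rate and rhythm, can be extracted with simple peak detection, as in the minimal sketch below. The waveform is synthetic and the thresholding heuristic is our own illustration, not the feature extractor of ref (59):

```python
import numpy as np

def pulse_rate_and_rhythm(signal, fs):
    """Rate (beats/min) and rhythm regularity from a single pulse channel.
    Peaks are located as local maxima above a simple amplitude threshold."""
    thresh = signal.mean() + 0.5 * signal.std()
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > thresh
             and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]]
    intervals = np.diff(peaks) / fs                  # seconds between beats
    rate = 60.0 / intervals.mean()
    rhythm = intervals.std() / intervals.mean()      # coefficient of variation
    return rate, rhythm

# Synthetic 75 bpm pulse waveform sampled at 100 Hz (sharp periodic peaks).
fs = 100
t = np.arange(0, 10, 1 / fs)
pulse = np.sin(2 * np.pi * 1.25 * t) ** 21
rate, rhythm = pulse_rate_and_rhythm(pulse, fs)
```

A rhythm value near zero indicates a perfectly regular beat; irregular interbeat intervals would push it up, which is exactly the kind of feature a downstream classifier can exploit.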
Moreover, a pulse-monitoring pressure-sensitive e-skin array integrated with AI can be used to estimate systolic and diastolic blood pressure using models of random forest regression, gradient boosting regression, and adaptive boosting regression.350 Beyond pulse monitoring, flexible pressure sensors can also be applied to the objective assessment of motor disorders (e.g., spasticity assessment),351,352 where ML (specifically a multitask neural network) was used to mitigate noise artifacts.352
5.1.3. Pressure Matrix Interpretation
Pressure matrices play a pivotal role in human–environment interactions, offering crucial insights into the distribution, location, magnitude, and timing of pressure. These insights hold significant potential for applications such as object recognition and human activity detection. While past research has often focused on highlighting response signals from a limited set of typical objects and actions, such an approach lacks practical utility. The integration of AI can systematically address these limitations.
In two-dimensional distributed pressure sensor arrays, AI demonstrates its proficiency in converting pressure mappings into meaningful data, encompassing tasks such as pattern reading, handwriting recognition, and object recognition. The basic approach is to train AI models with labeled data sets of characteristic pressure mappings, handwriting traces, and contact pressure profiles. The models learn to identify correlations between these input patterns and their corresponding labels. An illustrative example is the work by Bae et al., who introduced a crosstalk-free mesh-based tactile sensor array utilizing CNT-PDMS for braille reading.353 Through the application of SVM classification, this sensor array achieved braille recognition with an accuracy above 80%. The marginally suboptimal accuracy could be attributed to the sensor array’s limited spatial resolution (cell size: 1.3 mm × 1.3 mm) and the lack of tactile image preprocessing. Further strides in accuracy were realized by Pang et al., who devised a high-resolution hydroplastic foaming technique, resulting in a graphene aerogel tactile microarray with individual pixels of approximately 300 μm.354 Employing a trained CNN model, this microarray attained ∼80% accuracy in recognizing small letters, surpassing human recognition rates (averaging around 30% across 80 individuals). Notably, the application of stride-enlarged techniques for tactile image preprocessing (64 times for 26 letters) elevated recognition accuracy to nearly 100%, with spatial precision reaching approximately 100 μm. Similarly, object recognition can be accomplished even with a basic pressure sensor array when coupled with AI techniques. Li et al. introduced MXene-based organohydrogels to engineer a pressure sensor array capable of distinguishing among five different objects using a DNN model, yielding an impressive accuracy of 97.54%.355
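The train-on-labeled-pressure-maps recipe can be illustrated with a deliberately minimal stand-in classifier. Here a nearest-centroid rule replaces the SVM/CNN models used in the cited works, and the 4 × 4 braille-like dot patterns are synthetic:

```python
import numpy as np

# Toy tactile "images" of two braille-like dot patterns on a 4x4 taxel array.
def pattern_a(noise, rng):
    img = np.zeros((4, 4)); img[0, 0] = img[2, 2] = 1.0
    return img + noise * rng.standard_normal((4, 4))

def pattern_b(noise, rng):
    img = np.zeros((4, 4)); img[0, 3] = img[3, 0] = 1.0
    return img + noise * rng.standard_normal((4, 4))

rng = np.random.default_rng(1)
X = np.array([p(0.05, rng).ravel() for p in [pattern_a] * 5 + [pattern_b] * 5])
y = np.array([0] * 5 + [1] * 5)

# Nearest-centroid classifier: one learned template per class in taxel space.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(img):
    d = np.linalg.norm(centroids - img.ravel(), axis=1)
    return int(np.argmin(d))

pred = classify(pattern_a(0.05, rng))    # a fresh noisy sample of pattern A
```

With overlapping or low-resolution patterns, the class templates blur together, which is why the cited works move to SVMs and CNNs plus tactile-image preprocessing.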
The inherent flexibility and stretchability of e-skins bestow them with the unique capability to be seamlessly deployed on three-dimensional curved surfaces, such as gloves, grippers, chairs, and various other surfaces that engage in human interaction. This allows AI to greatly extend the application capabilities of e-skins in many ways.356,216,47,357−359,248,36,360 Bai et al. proposed a graded interlocks-based iontronic pressure sensor with high sensitivity and a broad sensing range.216 Applied to a pneumatic gripper’s jaw, the sensor records signals associated with grasping objects of varying weights. Leveraging ML algorithms, the sensor remarkably attained an accuracy of over 88% in recognizing object weight. Sinha et al. demonstrated crosstalk-free multiplexing touch-sensitive fingertip sensors using piezoresistive films.357 The sensors were integrated into a smart glove and combined with CNN and LSTM networks, enabling real-time recognition of objects (accuracy, 95.9%) and the direction of touch (accuracy, 97.8%). Cicek et al. harnessed iontronic pressure sensors combined with ML algorithms for object recognition in a smart glove.358 A classification accuracy of around 90% was achieved using Random Forest, Decision Tree, SVM, and k-NN classifiers with a data set of 11 objects and a limited number of sensors (8). Liu et al. developed a bimodal tactile–olfactory sensing array inspired by the star-nosed mole and combined it with a bioinspired ML model called BOT.248 Olfactory sensing is involved because odor is a unique characteristic of objects that carries valuable information. The model consisted of three neural networks that mimicked the tactile and olfactory signal-fusion hierarchy in the mole brain. The sensing system achieved robust object recognition in nonvisual environments, with an accuracy of 96.9% for 11 objects in a simulated rescue scenario.
Moreover, AI can make high-complexity tasks feasible. Sundaram et al. developed a tactile glove with 548 piezoresistive pressure sensors distributed over the hand.36 This glove facilitated the study of the mechanics of how humans grasp objects and the role of tactile information in the human grasp (Figure 12c). They built a large-scale tactile data set with 135,000 frames of pressure mapping of a hand interacting with 26 different objects. A deep CNN with a ResNet-18-based architecture was then used to find the key correspondences, i.e., tactile signatures, between different regions of a human hand while manipulating objects, enabling the identification of individual objects and the estimation of their weight. Further, the glove was used to predict the three-dimensional locations of both the hand and the object purely from touch data through a dynamic prediction framework combining a predictive model with a contrastive learning module.360 The method can estimate the uncertainty of its predictions and generalize to unseen objects.
AI also holds the capability to extract a wealth of valuable information from the data collected by e-skins strategically positioned at various locations on the human body, leading to advancements in human–machine interfaces. For instance, Chen et al. presented a vibration-sensitive pressure sensor based on a molybdenum disulfide/hydroxyethyl cellulose/polyurethane sponge.228 When attached to the throat, the sensor was able to detect muscle movement signals in the neck as different words were spoken. With an SVM model, the sensor could predict the spoken words with an accuracy of 97.14% in recognizing seven words; in this way, it has the potential to help individuals with aphonia due to laryngeal disorders. Chen et al. developed a braided electronic cord-based capacitive pressure sensing strategy combined with a multifeature fusion algorithm for gesture recognition.213 The cord can be flexibly and imperceptibly deployed at different locations on the body, such as braided into hair, integrated into clothes, or used as a smart hand catenary. They used an LSTM network as the AI model and trained it on a data set of interactive action data collected under various temperature and humidity conditions. The model achieved a recognition accuracy of 96%. Xie et al. showcased a skin-inspired full-textile pressure sensor integrated with ML for complex recognition tasks.361 The sensor utilized a dip-and-dry approach to fabricate a graphene-coated textile with high sensitivity and a wide detection range. A CNN model was used to extract differentiated features for the recognition of human activity and handwritten digits, resulting in high accuracies of around 95%.
5.1.4. Intelligent Signal Processing
AI-driven signal processing significantly enhances the functionality of pressure-sensitive e-skins. For example, Xu et al. developed a wireless and flexible tactile array using multiwalled carbon nanotube-doped PDMS, based on radio frequency (RF)-resonator technology.240 This passive RF resonator array, being compact, allows the reader antenna to simultaneously capture resonant signals from all individual sensors. However, this simultaneity can lead to signal overlap, posing challenges in manually discerning distinct signals from each unit. By leveraging multiple ML algorithms, the authors successfully decoupled the signals from a 2 × 2 array with an accuracy of 98.5% on completely unseen data. Similarly, Lee et al. realized real-time wireless pressure sensor signal decoupling for 3 sensors by adopting a CNN model.362 While the current literature showcases such techniques primarily on limited sample volumes, there remains a need to demonstrate their efficacy at a larger scale. AI can also aid signal processing by locating the position of applied pressure without cross-bar wiring for each individual sensing unit.363,364 This capability can be used to build a keypad from a crude piezoresistive pad by connecting wires only at the edges.
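When each resonator contributes a known line shape, decoupling an overlapped spectrum reduces to a linear unmixing problem, as the sketch below shows. The Lorentzian model and all parameters are assumptions for illustration, not the ML pipeline of ref (240):

```python
import numpy as np

freqs = np.linspace(0.8, 1.6, 200)          # reader frequency sweep, arbitrary units

def resonance(f0, width=0.03):
    """Lorentzian line shape of one passive RF resonator."""
    return 1.0 / (1.0 + ((freqs - f0) / width) ** 2)

# 2x2 array: each unit resonates at its own frequency; pressure scales amplitude.
f0s = [0.9, 1.1, 1.3, 1.5]
basis = np.stack([resonance(f) for f in f0s], axis=1)    # (200, 4) design matrix

true_amps = np.array([0.2, 0.8, 0.5, 1.0])               # pressure-dependent amplitudes
rng = np.random.default_rng(2)
measured = basis @ true_amps + 0.01 * rng.standard_normal(freqs.size)

# Decoupling: recover the per-sensor amplitudes from the overlapped spectrum.
est, *_ = np.linalg.lstsq(basis, measured, rcond=None)
```

Learned models become necessary when the line shapes themselves shift or deform with pressure, so that no fixed basis matrix exists; this sketch covers only the linear limiting case.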
Notably, AI can be used to perform super-resolution pressure sensing. Training is typically performed using a large data set in which low-resolution input data are paired with corresponding high-resolution ground truth data. For example, Yan et al. introduced a soft tactile sensor combined with ML to achieve super resolution. The sensor utilizes a flexible magnetic film and a Hall sensor to measure the normal and shear forces. A deep neural network and a large data set of 40,500 samples were used, resulting in an impressive 60-fold super-resolved accuracy, from 6 to 0.1 mm. Sun et al. proposed a theory based on taxel-value-isolines (TVIs) for geometric super resolution in tactile sensors and demonstrated its effectiveness by combining it with ML (Figure 12d).50 A data set of 95,000 samples was built for training, validation, and testing using an MLP with 10 fully connected hidden layers (100 neurons in each layer). The method achieved a 106-fold super resolution using an ML model for one-dimensional sensors and an astonishing 1260-fold super resolution for two-dimensional sensors. This study stands out for its theoretical framework, practical implementation, and impressive levels of super-resolution accuracy.
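The principle of learned super-resolution, mapping coarse taxel readings to a contact position far finer than the taxel pitch, can be shown in one dimension. The Gaussian sensitivity profile and log-ratio feature below are illustrative assumptions, not the TVI theory of ref (50):

```python
import numpy as np

# Two taxels 6 mm apart, each with a smooth Gaussian-like sensitivity profile.
taxel_pos = np.array([0.0, 6.0])            # mm
sigma = 3.0

def taxel_response(x):
    """Readings of both taxels for a point contact at position x."""
    return np.exp(-((x - taxel_pos) ** 2) / (2 * sigma ** 2))

# Training set: known contact positions paired with coarse 2-taxel readings.
train_x = np.linspace(0.5, 5.5, 51)
feat = np.array([np.log(taxel_response(x)[0] / taxel_response(x)[1]) for x in train_x])

# "Model": regress position on the log-ratio feature (exactly linear here).
slope, intercept = np.polyfit(feat, train_x, 1)

def super_resolve(x_true):
    r = taxel_response(x_true)              # only the 2 coarse readings are used
    return slope * np.log(r[0] / r[1]) + intercept

err = abs(super_resolve(2.34) - 2.34)       # localization error far below 6 mm pitch
```

The key requirement is that the taxel response varies smoothly and monotonically with position between taxels; the ML model then inverts that mapping from data rather than from an explicit formula.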
5.2. AI-Empowered Strain-Sensitive E-skins
Similar to the sensation of pressure, strain sensing is also fundamental to e-skins. Strain-sensitive e-skins can be used to monitor strain at various surfaces or joints, enabling a wide range of applications. Most studies focus on strain monitoring in the human body. Stretching or compression, ranging from micrometers to centimeters, occurs continually across the surface of the human skin.365 These strains are inherently connected to physiological activities and bodily movements. By incorporating AI techniques, strain-sensitive e-skins deployed at different locations on the human body can capture, interpret, and translate a wide range of human behaviors and physiological characteristics.
5.2.1. Hand
For instance, changes in hand gestures induce significant strain on the finger joints and even the lower arms. Conversely, it is therefore feasible to infer gestures by interpreting these strain signals. Zhou et al. proposed a wearable system with yarn-based stretchable sensor arrays (YSSAs) to translate hand gestures using ML, thereby enabling sign-to-speech translation (Figure 13a).42 The YSSAs, made of conductive yarns, are deployed on the back of fingers (or glove fingers) to capture hand gestures. The system employs an SVM model for classification and achieves a high recognition rate of 98.63% for 660 sign language hand gestures, which could greatly improve communication efficiency between signers and nonsigners. Similar research has been conducted on the use of strain-sensitive e-skins for hand gesture recognition, including a KNN model empowered crack-based strain sensor attached to fingers224 (accuracy, 98.9%) and a backpropagation (BP) neural network empowered optical fiber-based strain sensor attached to lower arms366 (accuracy, 96.36%). Moreover, with the fusion of visual and strain data, hand gesture recognition can reach an accuracy of 100%,41 and finger motion recognition is also feasible by attaching strain-sensitive sensors on the wrist.367
Figure 13.
AI-empowered strain-sensitive e-skin applications. (a) Sign-to-speech translation through finger-attached strain sensors and SVM. Image was reproduced with permission from ref (42). Copyright 2020 Springer Nature. (b) Unsupervised rapid hand task recognition through substrate-less nanomesh strain sensor on fingers and TD-C learning. Image was reproduced with permission from ref (49). Copyright 2022 Springer Nature. (c) Classification and recognition of throat activities through hierarchically resistive strain sensor attached on throat and algorithms of MLP and CNN. Image was reproduced with permission from ref (58). Copyright 2023 Springer Nature.
Complex hand tasks can also be recognized by monitoring hand movements using strain sensors. Kim et al. presented the development of a substrate-less nanomesh receptor coupled with an unsupervised meta-learning framework for user-independent recognition of different hand tasks (Figure 13b).49 The nanomesh receptor, directly printed on the skin, mimics human cutaneous receptors by translating electrical resistance changes from skin stretches into proprioception signals. It can capture finger movements from multiple joints with high conformability and low motion artifact noise. The unsupervised meta-learning framework, called time-dependent contrastive (TD-C) learning, generates a motion feature space (MFS) from unlabeled random hand motions by considering temporal continuity. This MFS is then used to adapt the model to various users and tasks, including command recognition, keyboard typing, and object recognition, with only a few hand signals. The system demonstrates high efficacy, rapid adaptation, and potential applications in human–computer interaction and prosthetics.
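The core idea of time-dependent contrastive learning, treating temporally adjacent windows as positive pairs and distant ones as negatives, can be sketched with an InfoNCE-style loss. The toy circular "motion embedding" trajectory below is invented for illustration and is unrelated to the actual TD-C framework of ref (49):

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.5):
    """InfoNCE loss for one anchor: pull the positive close, push negatives away."""
    def sim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([sim(anchor, positive)] + [sim(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                   # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

# Toy embeddings of 50 strain windows drifting smoothly along a circle:
# temporal continuity means adjacent windows have similar embeddings.
theta = 0.1 * np.arange(50)
traj = np.stack([np.cos(theta), np.sin(theta)], axis=1)

anchor, positive = traj[10], traj[11]        # adjacent in time -> positive pair
negatives = [traj[40], traj[45], traj[49]]   # distant in time -> negatives

loss_good = info_nce(anchor, positive, negatives)               # correct pairing
loss_bad = info_nce(anchor, traj[49], [traj[11], traj[40], traj[45]])  # wrong pairing
```

Minimizing this loss over unlabeled motion data is what shapes a motion feature space without any task labels; labels are only needed later, for the few-shot adaptation step.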
5.2.2. Head
When strain sensors are deployed on the face, they can be utilized to recognize facial expressions and speech patterns.241,368−370 For instance, Kim et al. proposed a novel silent speech interface (SSI) by utilizing a strain sensor based on single-crystalline silicon combined with a 3D convolutional deep learning algorithm (CNN).241 The strain sensor consisted of two biaxial strain gauges, and four of them were attached near the mouth to monitor skin deformation during silent speech. The system achieved high accuracy (87.53%) in classifying a large number of words, outperforming other existing SSI methods such as sEMG.
When deploying strain sensors on the throat or neck, they can monitor head motions, swallowing activities, and throat activities.58,225,226,371−373 For instance, Gong et al. introduced a hierarchically resistive (HR) skin sensor for throat activity recognition (Figure 13c).58 The sensor was composed of three layers with different stretchability and resistivity, which enabled it to detect different strain regimes associated with throat speech, heartbeats, breathing, touch, and neck movement. The authors developed a CNN-based Deep Hybrid-Spectro algorithm, achieving an accuracy of 92.73% in classifying 11 different activities, including neck motion, speech, touch, and combinations of them. Ramírez et al. presented a wearable strain sensor consisting of palladium nanoislands on graphene that could detect swallowing activity in head and neck cancer patients.225 The sensor was placed on the submental region, and strain signals were recorded during swallowing. These signals, combined with sEMG readings, were used to develop an ML model that differentiated between swallowing and nonswallowing events, as well as classified different bolus consistencies. The algorithm achieved an accuracy of 94.7% in differentiating between healthy and dysphagic swallows, indicating the potential to remotely monitor dysphagia and provide real-time data for clinicians.
5.2.3. Other Body Parts
Additional functionalities can be unlocked by deploying strain sensors on other parts of the body, such as the chest, elbow, knee, and foot.367,227,374−376 For instance, Yang et al. developed a sensitive strain sensor to reliably detect coughing. The sensor was directly attached to the chest to avoid speech and motion interferences.227 The cough signals were analyzed using spectrum analysis in combination with ML, achieving an accuracy of 99.75% for cough identification. Huang et al. developed a soft strain sensor based on a TPU hybrid that was deployed on human joints to monitor strain changes and classify different types of movements.374 The sensor provided high-resolution data with a high signal-to-noise ratio, and the ML model trained on the data achieved high accuracy in movement classification. Wang et al. introduced SpiderWalk, a system that detected transportation activities and their circumstances using a contact strain sensor attached to the soles of the feet.375 The sensor captured vibration signals related to different transportation modes, which were then classified by trained random forest classifiers to accurately detect activities, road surfaces, shoe types, and vehicles with an average accuracy of 93.8%.
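One simple spectral cue that separates broadband cough bursts from tonal vocalizations is spectral flatness. The sketch below is a generic illustration of spectrum-based cough features with synthetic signals, not the pipeline of ref (227):

```python
import numpy as np

def spectral_flatness(signal):
    """Geometric mean / arithmetic mean of the power spectrum.
    Broadband bursts (coughs) -> flatness near 1; tonal signals -> near 0."""
    psd = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12
    return np.exp(np.mean(np.log(psd))) / np.mean(psd)

rng = np.random.default_rng(4)
t = np.linspace(0, 0.5, 500, endpoint=False)     # 0.5 s window at 1 kHz

cough = rng.standard_normal(t.size) * np.exp(-t * 10)   # decaying broadband burst
hum = np.sin(2 * np.pi * 150 * t)                       # tonal vocalization

flat_cough = spectral_flatness(cough)
flat_hum = spectral_flatness(hum)
```

In practice such scalar spectral features (flatness, band energies, roll-off) are stacked into a feature vector and fed to the classifier, rather than thresholded individually.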
5.2.4. Beyond Body Strain Sensing
Strain monitoring with AI assistance also shows great potential beyond body monitoring.251,377−380 For example, Maurya et al. proposed a 3D printed graphene-based strain sensor for wirelessly monitoring tire health using ML.377 A two-layer feed-forward neural network (NN) was trained using a data set collected from the strain sensor, capturing tire deformation and tire pressure. The neural network achieved a high correlation coefficient (>0.95) when compared to the reference tire pressure values, thus allowing tire pressure monitoring without traditional pressure sensors. Kim et al. introduced a durable crack-based strain sensor for terrain sensing in legged robots.378 The sensor utilized a silver nanowire (Ag NW) mesh as a crack-stop layer to control crack propagation. The sensor was attached to the feet of a bioinspired legged robot, allowing it to detect ground interactions and collect data on terrain type, road inclines, and ground vibrations. A one-dimensional CNN model was adopted to predict terrain type (accuracy, 98.86%) using a relatively small data set, demonstrating the feasibility of real-time terrain recognition and autonomous control in legged robots.
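A two-layer feed-forward regression network of the kind used for tire-pressure estimation can be sketched compactly by fixing random hidden weights and solving the output layer by least squares (an extreme-learning-machine shortcut, not the authors' training procedure; the strain–pressure relation is synthetic):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic relation: strain-sensor feature vs tire pressure (monotone, nonlinear).
pressure = np.linspace(28, 36, 120)                    # psi
strain = 0.1 * np.sqrt(pressure - 27) + 0.002 * rng.standard_normal(pressure.size)

# Two-layer feed-forward net: random fixed hidden layer (tanh), output layer
# solved in closed form by least squares instead of backpropagation.
W = rng.standard_normal((1, 16))
b = rng.standard_normal(16)
H = np.tanh(strain[:, None] @ W + b)                   # hidden-layer activations
Hb = np.column_stack([H, np.ones(len(strain))])        # append bias column
beta, *_ = np.linalg.lstsq(Hb, pressure, rcond=None)

pred = Hb @ beta
corr = np.corrcoef(pred, pressure)[0, 1]               # mirrors the paper's metric
```

A backpropagation-trained network would fit the same mapping; the closed-form output layer is used here only to keep the sketch short and deterministic.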
5.3. AI-Empowered Thermosensitive E-skins
Thermosensitive e-skin mimics the temperature sensing ability of human skin, enabling precise monitoring of subtle temperature changes. This capability facilitates the assessment of temperature conditions on the monitored surface and the observation of heat flow variations during interactions. The applications for thermosensitive e-skin encompass a wide range, including health monitoring, prosthetics, robotics, and wearable devices. Despite the limited existing reports on the intersection of thermosensitive e-skin and AI technology, it is worth noting that AI exhibits substantial promise in enhancing and diversifying the utility of thermosensitive e-skin.
In health monitoring, AI has the potential to revolutionize temperature monitoring, simplifying the process while ensuring accuracy, especially concerning the monitoring of the core body temperature. Traditionally, measuring core body temperature, which refers to the temperature within the body, requires invasive methods and is typically restricted to operating rooms. Noninvasive approaches, on the other hand, often rely on heat flux measurements and present limitations in terms of either convenience or accuracy. Zavareh et al. presented a soft wearable thermal device integrated with ML to address such limitations (Figure 14a).381 The device comprised multiple temperature sensors separated by insulating materials of different thermal conductivities, which created a well-defined thermal gradient for characterizing the heat flux across the device. Since the sensor is attached to the skin surface, inferring the core body temperature requires establishing a relationship between the measured temperatures and the core body temperature. This relationship depends on individual parameters that are difficult to measure, such as tissue thermal conductivity and heat generation rate. ML algorithms can account for such interindividual parameters using data collected from the proposed sensor and a reference core body temperature device, even in the absence of a well-defined physical relationship. The results showed that the proposed sensors with ML could measure core body temperature with an accuracy within 0.01 °C of the reference device. Moreover, the device had low power consumption and could be used wirelessly, making it suitable for long-term monitoring in resource-limited settings.
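The calibration idea, regressing core temperature on the stack's temperature readings against a reference device, can be sketched with a toy one-dimensional heat-flow model. All thermal parameters and the linear regression stand-in below are assumptions for illustration, not the model of ref (381):

```python
import numpy as np

# Toy 1-D heat-flow model: a sensor stack reads temperatures T1 (skin side) and
# T2 (ambient side) across an insulator of known thermal resistance R_ins.
# Core temperature relates to T1 via an unknown per-person tissue resistance:
#   flux = (T1 - T2) / R_ins,   T_core = T1 + flux * R_tissue
rng = np.random.default_rng(5)
n = 200
R_ins = 1.0
R_tissue = rng.uniform(0.4, 0.6)            # "unknown" individual parameter
T_core = 36.5 + 0.5 * rng.standard_normal(n)
flux = rng.uniform(5, 15, n)                # varying heat loss conditions
T1 = T_core - flux * R_tissue
T2 = T1 - flux * R_ins
T1 += 0.01 * rng.standard_normal(n)         # sensor noise
T2 += 0.01 * rng.standard_normal(n)

# Calibration: learn T_core from (T1, T2) pairs against a reference device.
X = np.column_stack([T1, T2, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, T_core, rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((pred - T_core) ** 2))
```

The learned coefficients implicitly absorb the individual tissue resistance; in the real device the mapping is nonlinear and person-dependent, which is where a trained ML model replaces this closed-form fit.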
Figure 14.
AI-empowered thermosensitive e-skin applications. (a) Core body temperature quantification through wearable temperature sensors and regression models. Image was reproduced with permission from ref (381). Copyright 2023 John Wiley and Sons under CC BY-NC 4.0 license. (b) Object recognition through thermal based pressure sensing and thermal conductivity sensing as well as MLP. Image was reproduced with permission from ref (37). Copyright 2020 The American Association for the Advancement of Science.
Impressively, the integration of thermosensitive e-skin with AI can be employed for object and material recognition. Li et al. proposed a quadruple tactile sensor based on thermal effects (Figure 14b).37 They utilized the pressure-dependent thermal conductivity of a porous material to detect pressure, employed a temperature-sensitive metal film to sense object and environment temperatures, and measured the surface heat flux effect to determine the material thermal conductivities of objects. When the human hand comes into contact with objects in the environment, apart from pressure sensation, the senses of cold and heat also constitute crucial dimensions of information that aid humans in assessing objects. The human skin is capable of perceiving the temperature of object surfaces and ambient temperatures, as well as the flow of heat during hand–object contact. By integrating this information, the human brain rapidly deduces the material (thermal conductivity and hardness), shape, and size of objects. It is almost certain that there exists a physical connection between the mechanical and thermal information during interactions and the properties of objects. However, providing an exact mathematical description of this connection is challenging. AI, on the other hand, can uncover these relationships, even in the absence of mathematical descriptions. The researchers integrated these sensors into a robotic hand and implemented object recognition using an MLP network within the PyTorch framework for the humanoid robotic system. This implementation yielded a noteworthy total classification accuracy of approximately 96% when tasked with recognizing diverse objects characterized by varying sizes, shapes, and compositions. Furthermore, the quadruple tactile sensors integrated on a robot hand could be used for practical applications, such as garbage sorting, where they were able to accurately identify and classify different types of garbage.
5.4. AI-Empowered Nanogenerator Based E-skins
In recent years, there has been an emergence of research on nanogenerator-based e-skins. Unlike pressure, strain, and temperature sensors, e-skins based on nanogenerators convert mechanical energy into electrical energy, thereby transducing dynamic stimuli. Among them, nanogenerator e-skins based on piezoelectric382 and triboelectric383 effects offer a wide range of material choices and exceptional deployment adaptability. This adaptability allows dynamic pressure or strain signals to be converted into electrical signals virtually anywhere. As a result, they have shown great potential across diverse application scenarios, such as health monitoring, robotics, and human–machine interaction. However, the dynamic nature of the signals results in data that are challenging to work with due to their large volume and high complexity. It is difficult to straightforwardly identify features in the data to obtain meaningful information. AI offers a solution to this difficulty by automatically extracting features, discovering patterns and trends, and learning from the signals. This capability enables the development of advanced applications with enhanced functionality based on nanogenerator e-skins.
5.4.1. Vibration Sensing
Nanogenerators are excellent vibration sensors for frequencies ranging from a few hertz to a few kilohertz owing to their self-powered, dynamic response. For instance, they are ideal for friction-based texture recognition,384−386,100 which can bypass the poor-lighting limitation of traditional visual texture recognition methods. Yi et al. developed an artificial fingertip with a piezoelectric PVDF film sensor for tactile surface roughness discrimination.384 The PVDF film sensor was used to emulate the Meissner corpuscle, which is responsible for sensing vibration. The sensor achieved a classification accuracy of 82.6% using one PVDF film sensor and a kNN classifier. Xing et al. proposed a wearable tactile sensor based on a TENG for texture recognition.385 The sensor adopted a single-channel triboelectric design with patterned flower-shaped holes and dual-layer shielding to avoid interference from the body’s electrical potential. Recognition accuracies of 96.03% for distinguishing 7 different surface textures and 92.58% for recognizing 5 kinds of fruits and vegetables were achieved with the assistance of ML. Furthermore, Zhao et al. presented a fingerprint-inspired TENG for fine texture recognition with an artificial neural network.386 The sensor was able to recognize the disordered textures of 9 sandpapers with an accuracy of 93.33% and 10 Braille characters with an accuracy of 92.5%.
Another application is acoustic sensing. Piezoelectric or triboelectric sensors attached to the throat have a natural advantage in that they can avoid environmental noise interference. Wang et al. developed a highly sensitive and miniaturized piezoelectric mobile acoustic sensor (PMAS) consisting of a stress-controlled piezoelectric membrane for voice recognition and biometric authentication (Figure 15a).45 The PMAS demonstrated outstanding frequency sensitivity and superior signal-to-noise ratio compared to that of a commercial condenser microphone. ML algorithms were employed to achieve biometric authentication based on speaker identification. Impressively, the PMAS module achieved a 56% reduction in error rate compared with the commercial microphone.
Figure 15.
AI-empowered nanogenerator-based e-skin applications. (a) Mobile acoustic sensing using piezoelectric sensors for voice recognition and biometric authentication through ML. From ref (45). Copyright The Authors, some rights reserved; exclusive licensee AAAS. Distributed under a CC BY-NC 4.0 license http://creativecommons.org/licenses/by-nc/4.0/. Reprinted with permission from AAAS. (b) Human motion prediction based on triboelectric sensor and PCA-assisted ML. Image was reproduced with permission from ref (214). Copyright 2023 Springer Nature under CC BY 4.0 license. (c) Blood pressure estimation based on a textile triboelectric sensor and a specially developed regression model. Image was reproduced with permission from ref (48). Copyright 2021 John Wiley and Sons. (d) Material identification based on triboelectric sensing and LDA. From ref (52). Copyright The Authors, some rights reserved; exclusive licensee AAAS. Distributed under a CC BY-NC 4.0 license http://creativecommons.org/licenses/by-nc/4.0/. Reprinted with permission from AAAS.
5.4.2. Human Activity Recognition
Human activities generate strain and pressure signals; these signals, characterized by their low-frequency, slow-varying nature, fall within the operational range of nanogenerators. Integrated with wearable technologies, nanogenerators can be deployed across various body parts to collect these signals, thereby enabling comprehensive monitoring of human activities, such as body motions, gestures, and facial movements, which is invaluable for applications in human–machine interaction.387−389 Moreover, unlike conventional pressure and strain sensors, nanogenerators do not need an additional power supply, a unique advantage in wearable applications.
Hand gesture recognition stands out as a compelling and easily demonstrable application, showcasing the integration potential of nanogenerator-based e-skins with AI.218,269,390−394 For instance, Tan et al. reported a wristband sensing system to capture the mechanical information from the surface muscles of the hand for sign language gesture recognition in real time.218 The wristband contained 8 hybrid generators; each hybrid generator had a TENG and a piezoelectric nanogenerator (PENG) to capture mechanical information from motions with slight contact and large force simultaneously. An LDA model was used for feature matching and classification, allowing the system to recognize and classify different sign language gestures based on the feature matrix and gesture labels. The model achieved a maximum prediction accuracy of 92.6% in recognizing sign gestures of 26 letters, offering an efficient, portable, and comfortable solution for the translation of gestures.
AI-enhanced nanogenerators are also suitable for human motion prediction,214,395−397 a promising technology for applications in human–machine interaction, rehabilitation training, and healthcare. For example, Liu et al. developed a flexible, flame-retardant, and eco-friendly triboelectric sensor using a composite paper made from alginate fibers and vermiculite nanosheets for human motion prediction (Figure 15b).214 The sensor could detect slight motion signals from various joints of the human body due to its superior sensitivity. Principal component analysis (PCA) was performed to reduce the high-dimensional data to two meaningful principal components for the ML data set. A kNN algorithm was then used to identify different joint movements with an accuracy of 96.2%, and a linear regression model was used to predict the bending angle of joints with an accuracy of 99.8%. Human motion detection can also be achieved by deploying nanogenerators in various wearable forms on the human body, thanks to the flexibility in material and device structure choices that nanogenerators offer. For instance, Xiong et al. introduced a graphene textile-based TENG that can be worn as yarn at the elbow and knee joints.397 This system, combined with a 1D CNN, was able to recognize four different motion activities with an accuracy of 98.12%.
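The PCA-then-regression pattern described above can be sketched in a few lines: project high-dimensional sensor features onto two principal components, then fit a least-squares linear model that maps the components to a bending angle. The synthetic data and all parameter values below are illustrative assumptions, not the data or model of the cited work.

```python
import numpy as np

def pca_fit(X, n_components=2):
    """Fit PCA via SVD; return the mean and the projection matrix."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components].T

def pca_transform(X, mu, W):
    return (X - mu) @ W

def fit_linear(Z, y):
    """Least-squares linear model y ~ Z w + b (bias column folded in)."""
    A = np.hstack([Z, np.ones((len(Z), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_linear(coef, Z):
    return np.hstack([Z, np.ones((len(Z), 1))]) @ coef
```

Because the joint-angle information is essentially one-dimensional, two principal components are usually enough to retain it while discarding most of the sensor noise.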
With the help of AI, nanogenerator-based e-skins are able to recognize human handwriting.76,398 Xiang et al. developed a TENG e-skin with dynamic thermoregulating ability, which also showed great functionality in human motion detection and handwriting recognition.76 A CNN was trained on the voltage signals collected from the e-skin as a person wrote different letters, achieving a recognition accuracy of 98.13% and demonstrating a simple and portable solution for handwriting recognition.
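The core operation such a CNN applies to a voltage trace is a learned 1-D convolution. The minimal sketch below shows how a hand-picked difference kernel, the simplest possible "filter", highlights stroke onsets and offsets in a step-like voltage signal; a trained network learns many such kernels instead of using a fixed one, and the signal here is a toy example, not data from the cited work.

```python
import numpy as np

def conv1d(signal, kernel):
    """'Valid' 1-D cross-correlation, the core operation of a 1D CNN layer."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])

def relu(x):
    """Standard rectified-linear activation, keeping only positive responses."""
    return np.maximum(x, 0.0)

# The kernel [1, -1] computes v[i] - v[i+1]: a negative response marks a stroke
# onset (voltage rising), a positive response marks a stroke offset.
```

Stacking such convolutions with nonlinearities and a final classifier is what turns raw writing-induced voltage traces into letter predictions.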
Lip language recognition is also enabled by AI-empowered nanogenerators. Lu et al. proposed a novel lip language interpretation system using triboelectric sensors to capture lip motion and translate it into language for communication.279 The study analyzed the mechanical and electrical properties of the sensors, collected signal patterns of vowels and words, and compared lip and sound signals. A dilated RNN model based on prototype learning was used for lip language recognition, achieving a high accuracy of 94.5%. This highlighted its value in improving communication for people with speech disorders. Similarly, AI-assisted voice recognition was demonstrated by attaching triboelectric sensors to a facemask.399
With the aid of AI, meaningful information can be extracted from the diverse nanogenerator signals generated by human activities for further applications, such as user authentication,220 smart floors,400 tactile recognition,401 intelligent toilets,402 etc.
5.4.3. Health Monitoring
By combining specialized knowledge from the medical domain, nanogenerators integrated with AI could create an innovative approach to help identify human health conditions. For instance, cardiovascular diseases are among the leading causes of death worldwide; hence, there is a high demand for early detection and intervention. Arterial pulse monitoring is one of the most effective methods. Babu et al. reported a flexible, piezoelectric sensor that could be used for early detection of cardiovascular diseases.403 The sensor was tested on individuals of different age groups and varying weights to assess its ability to detect physiological changes associated with aging and obesity. The study utilized the k-means clustering algorithm to extract features and various ML algorithms to predict cardiovascular risk factors such as age (93.5% accuracy), body mass index (95.3% accuracy), and cardiovascular parameters like reflection index and augmentation index (94.1% accuracy).
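The k-means step used in such pulse-analysis pipelines is simple enough to sketch directly: Lloyd's algorithm alternates between assigning each pulse feature vector to its nearest centroid and moving each centroid to the mean of its members. The 2-D toy data below are an illustrative assumption (two well-separated feature clusters), not the pulse data of the cited work.

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Plain k-means (Lloyd's algorithm), initialized from the first k points."""
    centroids = X[:k].astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign every sample to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels
```

In practice the resulting cluster assignments (or distances to the centroids) serve as compact features for the downstream supervised models that predict age, BMI, or cardiovascular indices.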
Blood pressure estimation is also enabled by AI-empowered nanogenerators. For example, Fang et al. presented a wearable textile triboelectric sensor for continuous and reliable pulse wave monitoring (Figure 15c).48 The sensor was designed to convert the subtle skin deformation caused by arterial pulsatility into electrical signals and demonstrated an impressively high signal-to-noise ratio of 23.3 dB. Four features, heart rate, augmentation index, reflected wave transit time, and K value, were extracted from a published cuffless blood pressure estimation data set404 as inputs for supervised neural network training, generating systolic and diastolic blood pressure estimates. Additionally, a customized mobile application was developed for easy health data sharing and data-driven cardiovascular diagnosis.
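The feature-to-blood-pressure mapping can be illustrated with a deliberately simplified stand-in for the paper's neural network: a multi-output least-squares model from the four named features to [SBP, DBP]. The feature ranges, coefficients, and data below are synthetic assumptions for demonstration only.

```python
import numpy as np

def fit_bp_model(F, bp):
    """Least-squares map from pulse-wave features [HR, AIx, RWTT, K]
    to blood pressure targets [SBP, DBP]; bias column folded in."""
    A = np.hstack([F, np.ones((len(F), 1))])
    W, *_ = np.linalg.lstsq(A, bp, rcond=None)
    return W

def predict_bp(W, features):
    """Predict [SBP, DBP] for one feature vector."""
    return np.append(features, 1.0) @ W
```

A real system would replace the linear map with a trained neural network and validate against cuff measurements, but the input/output structure is the same.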
5.4.4. Material Perception
Triboelectric nanogenerators offer a distinctive method for material identification.52,405,284,406,215,407,408 This uniqueness stems from the inherent connection between the triboelectric signal and the electronegativity of the material involved, an intrinsic property of each material. The triboelectric effect is the transfer of electric charge when two materials are brought into contact and then separated. The amount of charge generated depends on the materials’ relative electronegativity, a measure of how easily they gain or lose electrons: materials with high electronegativity tend to gain electrons, while materials with low electronegativity tend to lose them. The amplitude and profile of the triboelectric output therefore vary with the electronegativities of the materials in contact. Thus, through carefully designed devices and the implementation of AI algorithms, triboelectric signals can be used to identify materials by the differences in their intrinsic electronegativities. This is a promising new method for material identification, as it is nondestructive and applicable to a wide variety of materials.
Qu et al. demonstrated the effectiveness of material identification based on the triboelectric effect and explained the mechanism in detail. They proposed a tactile perception smart finger for material identification consisting of four different dielectric materials, i.e., polyamide (PA66), polyethylene terephthalate (PET), polystyrene (PS), and polytetrafluoroethylene (PTFE), with aluminum (Al) as the back electrode (Figure 15d).52 The selected dielectric materials possessed different electronegativities and thus produced distinct triboelectric outputs for each tested material, facilitating precise material identification. Additionally, such a multimaterial matrix can effectively mitigate the impact of humidity on the electrical output, because humidity does not alter the relative amplitude of the output signals from each contact material. In addition, the sensor also showed specific responses to different levels of roughness. The output signal features of maximum value, minimum–maximum, peak–valley interval, number of zero-crossing points, number of inflection points, and absolute square value were selected for LDA model training. As a result, the sensor showed a high accuracy of 96.8% for the identification of 12 materials and 96.5% for the identification of 4 kinds of roughness. Although roughness identification was demonstrated in this work, how it was decoupled from the influences of temperature and humidity remains unknown, and the identification resolution and the types of rough surfaces are limited. The same group also reported a study that showed how an array of triboelectric sensors integrated with a CNN algorithm helped to eliminate the effects of open-environment conditions (such as size, location, pressure, temperature, and humidity) on material identification.405
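The six named waveform features are straightforward to compute, and the sketch below does so for a synthetic triboelectric pulse, then classifies with a nearest-centroid rule as a simplified stand-in for the paper's LDA model. The damped-sine waveforms and the two material labels are illustrative assumptions, not measured outputs.

```python
import numpy as np

def tribo_features(v, fs):
    """Six waveform features following the set named in the text."""
    i_max, i_min = int(np.argmax(v)), int(np.argmin(v))
    return np.array([
        v.max(),                                                  # maximum value
        v.max() - v.min(),                                        # minimum-maximum
        abs(i_max - i_min) / fs,                                  # peak-valley interval (s)
        float(np.sum(np.diff(np.signbit(v)) != 0)),               # zero-crossing points
        float(np.sum(np.diff(np.signbit(np.diff(v, 2))) != 0)),   # inflection points
        float(np.sum(v ** 2)),                                    # absolute square value
    ])

def nearest_centroid(centroids, labels, f):
    """Assign a feature vector to the class with the closest centroid."""
    return labels[int(np.argmin(np.linalg.norm(centroids - f, axis=1)))]
```

LDA additionally learns a projection that maximizes between-class separation before classifying, which matters when the raw features overlap more than in this toy case.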
Object recognition223,409,410 and environment awareness219 applications for grippers and robots are also available for nanogenerator-based e-skins using strategies similar to those mentioned earlier.
5.5. AI-Empowered Multimodal E-skins
Information generated from interactions is inherently multidimensional, encompassing signals such as pressure, strain, temperature, vibration, chemical reactions, and more. E-skin, essentially a sensor, serves the ultimate purpose of comprehensively representing the status of the subject being sensed. By incorporating diverse sensor types, we can expand the e-skin’s ability to gather multidimensional data, enabling a more precise depiction of the subject’s status. This is termed multimodal sensing. Multimodal sensing can effectively bypass many of the limitations inherent to single-modal sensing, consequently amplifying the capabilities of e-skin applications. Furthermore, through the integration of AI algorithms, multimodal signals can be efficiently processed and analyzed, helping multimodal e-skins achieve high accuracy and robustness in applications such as health monitoring,60,411 activity/object recognition,234,412−416 human–machine interaction,83,212,417 etc. In this section, we present several examples that highlight the typical advantages of AI in multimodal sensing, avoiding applications similar to those mentioned earlier.
5.5.1. Decoupling
E-skin sensors are often multiresponsive, meaning that they can respond to various stimuli. This may lead to unreliability in the sensing data as it is difficult to distinguish between the different stimuli that are being detected. For example, resistive-type e-skins are commonly sensitive to pressure, strain, temperature, and humidity. However, the output of the device is typically a fused response in resistance or voltage/current, which is challenging to decouple into the individual stimuli. Despite this challenge, there are still some cues that can be used to extract the response of a specific stimulus from the fused signals by using AI.43,418
For example, Lee et al. proposed a highly stretchable cross-reactive sensor matrix combined with an ML model for the detection, classification, and discrimination of various intermixed tactile and thermal stimuli (Figure 16a).43 The sensor matrix exhibited high sensitivity and fast response to stimuli such as strain, pressure, flexion, and temperature. The sensor operated as a 2-dimensional array: when a node was activated, it triggered a response not only from itself but also from its adjacent nodes. As a result, specific areal patterns and areal gradations of the output signals were generated for specific stimuli. Such a geometry or strength profile physically provided a cue for discriminating the various stimuli applied to the e-skin. The authors carefully examined the specific behavior of each stimulus and processed the sensing data as 2-dimensional images. Image-processing algorithms such as edge detection with convolution masks were adopted to generate gradient matrices and, further, binary matrices. By comparing the gradient matrices, their specific distribution patterns, and their gradient polarities, each tactile stimulus could be discriminated and isolated from the intermixed stimuli. To efficiently implement this method and predict unseen stimulation patterns, an ML model based on an image categorization and pattern recognition algorithm, the bag-of-words (BoW) model, was utilized to discriminate and identify each stimulus by recognizing the distinct image patterns generated by the intermixed stimuli. The accuracy of the discrimination process was over 90%. This approach presented a pathway toward achieving multimodal sensing through a single type of e-skin sensor. However, for more complex environments and larger-scale e-skin applications, further exploration of decoupling methods is warranted.
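The gradient-matrix and binary-matrix preprocessing described above amounts to applying difference masks to the sensor image and thresholding the gradient magnitude. The sketch below uses the simplest possible masks on a toy 5 × 5 "pressed-region" image; the mask choice and threshold are illustrative assumptions rather than the published pipeline.

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2-D cross-correlation, as used for simple edge-detection masks."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def gradient_and_binary(img, thresh):
    """Gradient magnitude from horizontal/vertical difference masks, then binarize."""
    kx = np.array([[1.0, -1.0]])
    ky = kx.T
    gx = conv2d(img, kx)[: img.shape[0] - 1, :]   # crop both maps to a common shape
    gy = conv2d(img, ky)[:, : img.shape[1] - 1]
    g = np.hypot(gx, gy)
    return g, (g > thresh).astype(int)
```

The binary matrix marks the outline of each stimulated region, and these outline patterns are what the BoW classifier ultimately learns to distinguish.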
Figure 16.
AI-empowered multimodal e-skin applications. (a) Detect, classify, and discriminate multimodal stimuli of strain, flexion, pressure, and temperature based on a stretchable cross-reactive sensor matrix and BoW model. Image was reproduced with permission from ref (43). Copyright 2020 John Wiley and Sons. (b) Wound monitoring enabled by a multiplexed sensor patch and CNN. From ref (60). Copyright The Authors, some rights reserved; exclusive licensee AAAS. Distributed under a CC BY-NC 4.0 license http://creativecommons.org/licenses/by-nc/4.0/. Reprinted with permission from AAAS. (c) Virtual shop applications enabled by multimodal sensing and multiple ML models. Image was reproduced with permission from ref (417). Copyright 2021 John Wiley and Sons under CC BY 4.0 license.
5.5.2. Health Monitoring
Multimodal e-skin is useful for a wide range of applications in health monitoring, thanks to its ability to obtain information about the health condition of the human body from different aspects and even to incorporate chemical sensing besides mechanical and thermal sensing. However, the augmentation of sensing modalities can lead to a surge in the data complexity, rendering traditional data analysis methods increasingly inefficient. This is where the strengths of AI come into play. AI excels at deciphering the intricate relationships between the health condition of the human body and the sensory signals e-skin produces. For example, wound healing is a dynamic process comprising multiple phases. Rapidly profiling and quantitatively characterizing inflammation and infection proves challenging due to the involvement of various physical and biochemical markers. To address this challenge, Zheng et al. introduced a paper-based sensor system for comprehensive wound assessment through image recognition using deep learning (Figure 16b).60 They employed a wax-printing technique to integrate five battery-free colorimetric sensors onto a small piece of paper, targeting biomarkers such as temperature, pH, trimethylamine (TMA), uric acid (UA), and moisture. These biomarkers are closely linked to wound inflammation, infection, and the wound environment’s condition. When affixed to a wound, the sensor patch underwent measurable color changes corresponding to each biomarker. Subsequently, the authors harnessed deep learning algorithms, specifically CNN, to analyze patch images captured on a mobile phone. This analysis allowed for the assessment of the wound’s healing status. Impressively, the sensor patch exhibited exceptional accuracy in distinguishing between healing and nonhealing wounds. Perturbed wounds were classified with a remarkable accuracy of 96.3%, while burn wounds were classified with an accuracy of 92.6%. 
This innovative approach offered a holistic profile of wound healing progress, facilitating the early detection of adverse events and enabling timely clinical intervention. Ultimately, this advancement had the potential to significantly enhance the management of wound care.
5.5.3. Human–Machine Interaction
Human–machine interaction has emerged as a central focus in the realm of e-skin applications. Through the integration of well-coordinated multimodal sensors, e-skins exhibit the capability to capture comprehensive data about humans or robots. This opens the door to applications involving digital twins, ushering in virtual/augmented reality experiences and anticipating the rise of remote interactive systems such as virtual shopping and unmanned factories. Such systems necessitate wearable sensing solutions that can furnish both human and robot hands, enabling the robot to mirror human actions and relay physical interaction data back to humans. This endeavor requires a detailed analysis of highly complex data, a process that can be streamlined and amplified by leveraging the capabilities of AI. Sun et al. provided an exemplary instance by presenting an augmented virtual shopping experience (Figure 16c).417 They achieved this by combining AI with a soft robotic manipulator equipped with a self-powered multimodal e-skin. The manipulator comprised 12 TENG tactile (T-TENG) sensors for detecting contact positions and areas, alongside 3 TENG length (L-TENG) sensors for monitoring bending. Additionally, a PVDF pyroelectric sensor was used for temperature sensing. By employing a digital-twin framework, a virtual shopping environment was created, offering users an immersive shopping experience. This setup allowed the manipulator to navigate a physical shop, collecting object data including shape, size, and temperature. The system achieved a remarkable 97% accuracy in recognizing 28 product categories. The temperature sensor further enabled the identification of hot and cold states, accurately discerning the status of items such as coffee and Pepsi. Consequently, this innovation not only enriched user interactions with virtual objects but also furnished vital product information, allowing users to make well-informed decisions.
Such a technique, i.e., the integration of AI and multimodal e-skin, holds transformative potential in reshaping shopping experiences and interactions with physical objects in the digital epoch.
Thus far, we have introduced the remarkable capabilities that various types of e-skin sensors can exhibit with the support of AI. The integration of AI has opened a multitude of groundbreaking opportunities for e-skins, revolutionizing both application scenarios and capabilities. The advantages of AI in e-skin applications primarily manifest in its capacity for data processing and analysis, distinguishing it from traditional data management methods. The applications of e-skins enabled by AI can be categorized into several aspects, all of which rely on uncovering hidden correlations between sensing signals and targets:
(1) Signal Processing. AI can facilitate signal decoupling, denoising, and resolution enhancement, simplifying e-skin sensor design and elevating the quality of sensing signals. This enhancement results in superior performance and reliability.

(2) Classification Tasks. The majority of AI-enabled e-skin applications center around classification tasks, including recognition of objects, materials, activities, gestures, handwriting, and other interactions with e-skins. These capabilities benefit key functions expected of e-skins in areas such as human–machine interaction, intelligent robotics, human activity monitoring, health monitoring, and digital twins.

(3) Regression Tasks. AI regression models predict the value of a target based on input sensing data from e-skins, improving the accuracy, efficiency, and reliability of e-skin sensing systems. Through training with the input data, AI models can generate predictions that aid in various scenarios, such as early warning of diseases or real-time feedback for certain applications.
In our view, for the majority of application scenarios, what matters most is to align with the application’s requirements, identify which physical characteristics of the target application are linked to specific detectable signals, and design and deploy e-skin sensors suited to that application. In other words, the deployment strategy of sensors is often more influential than the performance of the sensors themselves. This also indirectly underscores that, despite the abundance of current cases integrating AI and e-skins, the full potential of AI in e-skin applications has not been exhaustively explored.
6. In-Skin Intelligence
In a broad sense, artificial intelligence refers to the capability to mimic human intelligence, encompassing autonomous learning, memory, and problem-solving. Apart from using software-layer AI technology to empower e-skins, another approach involves the innovative design of devices and systems so that they can perform autonomous computing and decision-making within e-skins. This hardware-layer approach directly imparts certain intelligent functionalities to e-skins. From this viewpoint, we consider such e-skin systems to embody a certain level of artificial intelligence, and we can refer to this as “in-skin intelligence”.
To achieve in-skin intelligence, primary avenues currently being explored include the following: (1) integrating commercial chips with e-skins to implement signal processing, encoding, and AI algorithms; (2) developing flexible large-scale integrated circuit chips that can be integrated with e-skins to implement computation or AI algorithms; (3) specifically designing e-skins so that the sensing unit itself can encode external stimuli or perform certain computational tasks; (4) developing artificial synaptic devices and further integrating with e-skins seamlessly to emulate neural signal processing and learning. With these strategies, near- or in-sensor data processing is enabled, potentially reducing redundancy and latency in data and decreasing subsequent consumption of time and power in data transmission and processing.
6.1. Exploring In-Skin Intelligence with Functional Circuits and Unique Mechanisms
6.1.1. Commercial Chips
Integrating commercial chips with e-skins is the most straightforward approach. There are already a large number of mature solutions for commercial chips that can implement signal processing, encoding, and AI algorithms. By integrating these chips with e-skins, it is possible to create a system that can achieve powerful in-skin intelligence.35,419,420 For instance, signal transmission for larger-scale e-skin has been a critical chokepoint due to readout latency bottlenecks and complex wiring as the number of sensors increases. To address this, Lee et al. introduced an innovative e-skin system design that employed event-based signal acquisition through asynchronous coding (Figure 17a).35 In this system, each sensing unit was equipped with a dedicated microcontroller for signal encoding. The encoded signals stemming from all sensors were then conveyed through a single wire. The microcontroller generated a unique pulse signature for each sensing unit, which encoded the address information. A voltage divider was used to measure the resistance change of each sensor, and the corresponding voltage was sampled by an analog-to-digital converter (ADC) on the microcontroller. The microcontroller was programmed to operate in fast-adapting (FA) and slow-adapting (SA) modes. In FA mode, a positive or negative pulse signature was transmitted for a pressure increase or decrease event, respectively. In the SA mode, the microcontroller generated events at intervals proportional to the ADC value. All pulse signatures were transmitted over a single conductor and sampled by using a 125 MHz oscilloscope. Decoding was performed by correlating the pulse signatures in the pulse train. This design enabled the e-skin system to simultaneously transmit a large array of thermotactile signals from up to 10,000 sensors with a constant low readout latency of 1 ms and an ultrahigh temporal precision of less than 60 ns.
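The fast-adapting (FA) encoding idea, emitting a signed event only when a sensor's value changes and merging all sensors' events onto one shared line, can be sketched in software. This is a conceptual sketch of event-based encoding, not the cited work's microcontroller firmware; the event representation, change threshold, and sensor IDs are our own assumptions.

```python
def fa_encode(samples, sensor_id, delta=1):
    """Fast-adapting mode: emit a signed event only when the ADC value changes.

    Returns (time_step, sensor_id, polarity) tuples: +1 for a pressure
    increase, -1 for a decrease; steady pressure generates no traffic.
    """
    events = []
    prev = samples[0]
    for t, v in enumerate(samples[1:], start=1):
        if v - prev >= delta:
            events.append((t, sensor_id, +1))   # pressure-increase pulse
        elif prev - v >= delta:
            events.append((t, sensor_id, -1))   # pressure-decrease pulse
        prev = v
    return events

def merge_events(*streams):
    """All sensors share one wire: merge the event streams in time order."""
    return sorted([e for stream in streams for e in stream])
```

Because idle sensors stay silent, the shared line carries traffic proportional to activity rather than to the number of sensors, which is what keeps readout latency flat as the array scales.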
Figure 17.
In-skin intelligent e-skin systems. (a) A neuroinspired event-based asynchronous encoding system for e-skin signal transmission. Image was reproduced with permission from ref (35). Copyright 2019 American Association for the Advancement of Science. (b) Low-voltage-driven artificial soft e-skin system enabled biomimetic bidirectional signal transmission. Image was reproduced with permission from ref (61). Copyright 2023 American Association for the Advancement of Science. (c) In-sensor tactile position encoding using spikes tuned by the ion electron relaxation effect. Image was reproduced with permission from ref (54). Copyright 2022 The American Association for the Advancement of Science.
6.1.2. Flexible Chips
While the integration of commercial chips with e-skins offers powerful and highly flexible capabilities, the rigid mechanical properties of these chips can limit their compatibility with the flexible nature of e-skins. To address this, there is a shift toward developing flexible large-scale integrated circuit chips. In recent years, there has been extensive research on the materials, structures, and fabrication processes for flexible and even stretchable transistor devices, laying the foundation for large-scale flexible integrated circuits.421−425 Wang et al. created a monolithic soft prosthetic e-skin system with an integrated stretchable medium-scale functional circuit mimicking the sensory feedback and mechanical properties of biological skin (Figure 17b).61 The authors designed stretchable organic transistors with low voltage and high mobility, enabling efficient and low-power operation. Subsequently, they integrated 54 intrinsically stretchable transistors into a circuit functioning as a ring oscillator and an edge detector, converting tactile inputs into frequency-modulated pulse trains. These pulse trains aligned with biological action potentials due to carefully designed parameters of the circuits, allowing the signals to be fed into the somatosensory cortex of a living rat to trigger a feedback response at the motor cortex. In addition, this system also featured an artificial synapse that responds to different levels of pressure, stimulating the downstream muscle movements. In the end, the e-skin system successfully emulated the sensorimotor loop in a bidirectional manner, demonstrating the ability of the e-skin to detect and respond to external stimuli as well as evoke neuronal responses and muscle contractions in animal models. Overall, this work represents a significant advancement in the development of soft, biomimetic e-skin with in-skin intelligence for applications in prosthetics and robotics.
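The ring-oscillator behavior at the heart of this circuit, converting a tactile input into a frequency-modulated pulse train, can be mimicked numerically. The mapping below (a base rate plus a pressure-proportional term) is an illustrative assumption, not the transfer function of the published stretchable circuit.

```python
import numpy as np

def fm_pulse_train(pressure, duration, f_min=5.0, gain=2.0, fs=1000):
    """Encode a pressure level as a pulse train whose rate grows with pressure,
    loosely analogous to a ring oscillator's frequency-modulated output."""
    freq = f_min + gain * pressure          # pulses per second
    n = int(duration * fs)
    phase = np.cumsum(np.full(n, freq / fs))
    # emit a pulse every time the accumulated phase crosses an integer
    pulses = (np.diff(np.floor(phase), prepend=0.0) > 0).astype(int)
    return pulses, freq
```

Encoding intensity in pulse rate rather than amplitude is what makes the output compatible with biological action potentials, which also carry information in their firing frequency.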
In addition, some commercial companies are also committed to developing flexible, large-scale integrated circuits. For example, Arm Ltd. reported a flexible processor chip that could embed artificial intelligence algorithms for food safety monitoring.38 They further manufactured a powerful flexible 32-bit Arm microprocessor consisting of 39,157 thin-film transistors and 17,183 resistors on the polyimide substrate.46 However, this technology is still unable to achieve stretchability and needs further development.
6.1.3. Unique Mechanisms
An alternative approach to achieving in-skin intelligence is to devise a unique operational mechanism for the sensor itself, enabling it to autonomously implement computational tasks such as encoding. For example, Kim et al. devised a compelling mechanism for an artificial sensory system based on a position-encoded spike spectrum (Figure 17c).54 The system consisted of artificial receptors made of a mixed ion–electron conductor that generated potential spikes in response to external stimulation. Within a sensor array, certain sensor units were intentionally designed with long ion relaxation times (>10 ms), while others possessed a shorter ion relaxation time of 2 ms. The units with long ion relaxation times were designated as “characteristic receptors”, while the remainder were termed “auxiliary receptors”. Each characteristic receptor was surrounded by eight auxiliary receptors. Upon contact with an object, the receptors generated both characteristic spikes and auxiliary spikes. As the characteristic spikes exhibited different relaxation times, they could be distinguished effectively for position decoding. Simultaneously, the auxiliary receptors were activated in response to changes in the contact area due to pressure, thereby facilitating the estimation of contact pressure. Note that high spatial resolution can be achieved by increasing the density of the characteristic receptors. This ingenious mechanism design empowered the artificial sensory system to instantaneously recognize dynamic stimuli, such as object slip and rolling. Consequently, this system enabled the real-time manipulation of objects in a robotic context, such as slip prevention during grasping. While this approach is highly creative, it does come with certain limitations, such as challenges in scaling up the array size and accurately measuring pressure.
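The decoding principle, telling characteristic receptors from auxiliary ones by their ion relaxation times, reduces to estimating the decay constant of each spike's tail. The sketch below does this with a log-linear fit on ideal exponential spikes; the waveforms, sampling rate, and decision threshold are illustrative assumptions, not the device's measured responses.

```python
import numpy as np

def relaxation_time(spike, fs):
    """Estimate the decay constant of a spike tail via a log-linear fit:
    log V = -t / tau + c, so the fitted slope is -1 / tau."""
    tail = spike[np.argmax(spike):]
    t = np.arange(len(tail)) / fs
    slope, _ = np.polyfit(t, np.log(tail), 1)
    return -1.0 / slope

def classify_receptor(spike, fs, threshold=5e-3):
    """Long relaxation (> threshold, e.g. > 10 ms class) marks a characteristic
    receptor; short relaxation (~2 ms class) marks an auxiliary receptor."""
    return "characteristic" if relaxation_time(spike, fs) > threshold else "auxiliary"
```

Because the two receptor classes are separated by design (tens of milliseconds versus 2 ms), even this crude estimator distinguishes them reliably, which is what allows position decoding from a single merged spike stream.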
6.2. Exploring In-Skin Intelligence with Artificial Synaptic Devices
Artificial synaptic devices such as memristors and synaptic transistors output modulated postsynaptic current or implement synaptic weight update by relying on the history of presynaptic input and can be thus used for neuromorphic computing.426−429 The presynaptic input can originate from different stimulus modalities including mechanical,430 optical,431 and multimodal (e.g., mechanical–optical).432 By integrating into artificial or biological neural systems, these devices can also function as the synapses of artificial afferent and efferent nerves.433−435
6.2.1. Data Preprocessing
Artificial synaptic devices can play a role in data preprocessing for pattern recognition, such as contrast enhancement and noise reduction of image patterns.436,437 In tactile sensing, in-skin data preprocessing is typically realized by integrating a sensor array with a synaptic device so that tactile information is correlated to a pattern. For instance, connecting a 5 × 5 piezoresistive pressure sensor array to a synaptic memristor yielded a postsynaptic current modulated by the pressure sensing signal from the sensor array.438 This produced a handwriting-induced pressure mapping image with higher contrast and correspondingly a more reliable feature input for subsequent ML-assisted handwritten letter recognition, thus realizing higher recognition accuracy. Incorporating piezoelectric nanowires into a memristor realized in-sensor data preprocessing for better force-image recognition, potentially simplifying tactile sensing circuits and reducing power consumption.439 A sensor array can detect more complex tactile information but produces a larger amount of sensory data than a single sensor. By virtue of synaptic plasticity, synaptic devices can preprocess data by outputting a modulated postsynaptic current, reducing the number of readouts needed to map tactile information. This may enable new e-skin architectures for information encoding and feature extraction with a reduced data dimension, thereby facilitating efficient learning. A proof-of-concept experiment was demonstrated with dimension-reduced features derived from pressure sensing data collected by a 5 × 5 piezoresistive pressure sensor array.440 The sensor array was separated into 5 rows, each connected in parallel to an analog-to-digital circuit coupled with an LED and an optoelectronic memristor, so that the sensory data of the 5 pressure sensors in each row were converted to an optical spiking signal and further to a current spiking signal.
This achieved spatial and temporal encoding of tactile information via current spiking during letter handwriting strokes, thereby reducing the data dimension: the extracted features shrank from a 25-dimensional data feature (from the sensor array) to a 5-dimensional postsynaptic current spiking feature (from the memristors). The reduced data dimension subsequently enabled efficient recognition of handwritten letters across the whole alphabet.
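The essence of this 25-to-5 reduction can be sketched in a few lines: each row of a 5 × 5 pressure frame is aggregated into one channel, mimicking one memristor output per row. The simple summation used here as the aggregation rule is our simplifying assumption; the actual device encodes the row data as optical and current spikes.

```python
import numpy as np

# Reduce a 5x5 tactile frame (25 values) to 5 row-wise features,
# one per row-connected memristor channel.

def reduce_frame(frame_5x5):
    frame = np.asarray(frame_5x5, dtype=float)
    assert frame.shape == (5, 5)
    return frame.sum(axis=1)  # 25 values -> 5 row features

frame = np.zeros((5, 5))
frame[2, 1:4] = 1.0  # a horizontal handwriting stroke on the middle row
features = reduce_frame(frame)
print(features.tolist())  # [0.0, 0.0, 3.0, 0.0, 0.0]
```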
6.2.2. Hardware-Layer Neural Networks
Synaptic devices have the potential to realize in-skin intelligence through hardware-layer neural networks analogous to software-layer ones. This holds the promise of distributed and parallel data processing in large-area e-skins to achieve the functions of learning and memory. Thus, flexible/stretchable synaptic devices are needed to integrate with flexible/stretchable e-skin sensors so that sensory data can be processed and analyzed locally. However, the switching layer and working principle of synaptic devices pose obstacles to flexibility/stretchability, so reported flexible/stretchable synaptic devices remain scarce, especially stretchable, uniform, large-area arrays.441−446 We provide some examples to show the progress in this field. A flexible synaptic transistor was developed using a silicon–indium–zinc-oxide (SIZO)/ion gel hybrid structure on a poly(4-vinylphenol) (PVP)-coated flexible polyimide substrate; the conductance of the SIZO semiconducting channel was modulated via the movement of mobile ions in the ion gel driven by applied voltage pulses (Figure 18a).247 Its synaptic behavior was stable under mechanical deformation (bending to a radius of 5 mm for 1500 cycles). By patterning the sensory signal responses of 15 stretchable resistive sensors (placed separately on the 15 finger joints of a hand) as the input and 8 corresponding hand signs as the output, a flexible synaptic transistor array was trained in simulation to recognize the 8 hand signs. This realized a hardware-layer neural network with potential application in a wearable sign language translation system.
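Such a hardware-layer network can be emulated in software as a crossbar readout: the output "current" vector is the input "voltage" vector multiplied by a conductance matrix, and training nudges the conductances. Only the 15-input/8-output sizing follows the cited work; the synthetic hand-sign patterns and the delta-rule update below are illustrative assumptions.

```python
import numpy as np

# Crossbar-style classifier: 15 joint-sensor inputs -> 8 hand-sign outputs.

N_JOINTS, N_SIGNS = 15, 8
signs = np.zeros((N_SIGNS, N_JOINTS))
for k in range(N_SIGNS):
    signs[k, k:k + 3] = 1.0                # synthetic sign k flexes joints k..k+2

G = np.zeros((N_JOINTS, N_SIGNS))          # "conductance" (weight) matrix
targets = np.eye(N_SIGNS)
for _ in range(500):                       # training epochs
    for label in range(N_SIGNS):
        v = signs[label]
        err = targets[label] - v @ G       # crossbar readout I = V.G
        G += 0.1 * np.outer(v, err)        # delta-rule conductance update

pred = [int(np.argmax(v @ G)) for v in signs]
print(pred)  # [0, 1, 2, 3, 4, 5, 6, 7]
```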
Further, an impressive work achieved an ultrathin flexible synaptic memristor (∼55 nm in thickness) on a thin flexible SU-8/parylene substrate (1.5 μm in thickness, peeled off from glass) (Figure 18b).57 This flexible memristor withstood in-plane uniaxial compression up to 60% strain, and its conductance was modulated by the degree of oxygen vacancy migration in the titanium oxide layer driven by a programming voltage (Figure 18b(i)). A flexible ultrathin organic photodiode was also fabricated on this thin substrate. By using two flexible photodiodes attached to two surfaces of an index finger (top and side) to detect the proximity of two different light sources (green and red light) in two orthogonal directions, the finger motion could be tracked in 3-dimensional free space during finger writing (Figure 18b(ii)). The resulting time-resolved electrical signals of the two photodiodes, containing the finger motion information in two orthogonal directions, were converted to a time-dependent 3-dimensional trajectory that was further projected into a 2-dimensional image. In addition, long-term potentiation (LTP) of the flexible memristor enabled synaptic weight updates, indicating feature learning capability. A large flexible memristor array was then simulated and trained to recognize the 2-dimensional images of finger-written digit patterns from 0 to 9 (Figure 18b(iii)), functioning as a hardware-layer neural network for pattern learning and recognition even after experiencing 60% strain for 100 cycles. A large-scale flexible pressure sensor array (64 × 64) was also proposed to integrate with a memristor-based computing-in-memory chip that implemented an artificial neural network, facilitating both signal processing and recognition tasks.304 Remarkably, this system achieved 98.8% accuracy in recognizing handwritten digits and 98.3% accuracy in deciphering handwritten Chinese characters.
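The trajectory-to-image step above can be sketched as a simple rasterization: two time-resolved signals (one per orthogonal direction) trace a curve that is binned onto a 2-D grid for recognition. The proximity-to-coordinate mapping and grid size here are our illustrative assumptions, not the cited work's calibration.

```python
import numpy as np

# Rasterize a trajectory defined by two time-resolved signals onto a 2-D image.

def trajectory_to_image(sig_a, sig_b, size=8):
    a = np.asarray(sig_a, dtype=float)
    b = np.asarray(sig_b, dtype=float)
    # Normalize each signal to [0, 1], then bin onto a size x size grid.
    a = (a - a.min()) / (float(a.max() - a.min()) or 1.0)
    b = (b - b.min()) / (float(b.max() - b.min()) or 1.0)
    rows = np.minimum((a * size).astype(int), size - 1)
    cols = np.minimum((b * size).astype(int), size - 1)
    img = np.zeros((size, size), dtype=int)
    img[rows, cols] = 1                    # mark every visited cell
    return img

t = np.linspace(0.0, 1.0, 200)
img = trajectory_to_image(np.sin(2 * np.pi * t), t)  # one written "stroke"
print(img.shape)  # (8, 8)
```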
Besides tactile signals, synaptic devices could also be used to process and analyze physiological signals for healthcare and human–machine interfaces.281,444
Figure 18.
Hardware-layer neural networks based on artificial synaptic devices for in-skin data processing and analysis. (a) Integrating flexible synaptic transistor array (i) with stretchable resistive sensors for simulated sign language translation (ii). From ref (247). Copyright The Authors, some rights reserved; exclusive licensee AAAS. Distributed under a CC BY-NC 4.0 license http://creativecommons.org/licenses/by-nc/4.0/. Reprinted with permission from AAAS. (b) Integrating ultrathin synaptic memristor array with two ultrathin organic photodiodes for simulated finger writing recognition. (i) Synaptic behaviors of the memristor on a hand-like replica. (ii) The photodiodes attached on an index finger. The three-dimensional curve of time-resolved output voltage changes during finger motion was transformed to a 2-dimensional image for digit 3. (iii) Simulated finger writing recognition process using the 2-dimensional image and the memristor array. Image was reproduced with permission from ref (57). Copyright 2023 Springer Nature.
6.2.3. In-Skin Adaptivity for Robots
Humans develop adaptivity via the skin to sense and process tactile information, and it is natural to endow robots with similar adaptivity via e-skins. By integrating sensors and artificial synaptic devices in e-skins that interface with actuation control units, robots can react differently in real time to different sensory stimuli. This is valuable for robotic manipulation, locomotive control, and robot intelligence, such as learning and memory. Flexible, and especially stretchable, synaptic devices are in demand for in-skin adaptivity in soft robots, which usually undergo large deformation during manipulation and locomotion. A stretchable synaptic transistor was thus developed fully from rubbery composites, including the electrodes, ion gel dielectric, and semiconducting channel, where the conductance of the channel was modulated via the migration of ions in the ion gel driven by gate voltage pulses.443 Synaptic characteristics such as excitatory postsynaptic current (EPSC), filtering, and memory were achieved and retained at a stretching strain of 50%. To construct an adaptive pneumatic soft robot, a rubbery triboelectric sensor was designed to provide a tactile sense, generating voltage pulses during human tapping. A rubbery rectifier was further developed to filter out pulses of the unwanted polarity before connection to the synaptic transistor. A larger number of taps produced more presynaptic voltage pulses and correspondingly a larger EPSC, which could be fed to the actuation control units to change the locomotion of the soft robot. Specifically, with the assistance of the sensor, rectifier, and synaptic transistor of the e-skin covering the robot surface, this soft robot could adjust its turn angle according to the number of taps on the sensor, performing adaptive locomotion (Figure 19a).
More e-skins could be deployed on different surfaces for more complex adaptive locomotion. Using different types of sensors and integrating multimodal sensory information through a synaptic device are also important for improving the adaptivity of robots so that they react properly in more sophisticated scenarios. For instance, optical and pressure sensory information could be integrated to guide a robotic hand to catch a ball placed at different positions.447 Beyond the control of manipulation and locomotion, synaptic devices can endow robots with in-skin learning (Figure 19b). To this end, a flexible synaptic transistor based on a zinc oxide nanowire (ZnO NW) channel was developed, where the channel conductance was modulated by the movement and trapping/detrapping of charge carriers at the ZnO NW surface states driven by the gate voltage.55 The synaptic transistor was further connected to a sensory neuron circuit and a cuneate neuron circuit (Figure 19b(i)). The sensory neuron circuit generated a biolike spiking pattern containing depolarization and hyperpolarization stages, i.e., a bidirectional electrical output. This bidirectional spiking pulse alone could not change the synaptic weight of the synaptic transistor, thereby leaving the cuneate neuron circuit silent (Figure 19b(ii)). After a teacher signal was applied to the synaptic transistor simultaneously with a touch, the synaptic weight was updated (i.e., increased postsynaptic current) and the cuneate neuron circuit was excited, which was regarded as associative learning. This associative learning process was used to realize a pain reflex in a robotic hand after connecting with the actuation control unit: the robotic hand was “taught” with a large touch force to acquire the knowledge of a “pain” sense and would withdraw during later force stimulation (Figure 19b(iii)).
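The associative-learning rule described above reduces to a coincidence condition: the synaptic weight grows only when a touch spike and a teacher signal arrive together, and once the weight crosses a firing threshold, touch alone excites the "cuneate" output (the withdrawal reflex). The threshold and increment values in this sketch are illustrative assumptions.

```python
# Minimal sketch of teacher-gated associative learning.

THRESHOLD = 1.0

def step(weight, touch, teacher, lr=0.4):
    """One time step: return (updated_weight, output_fired)."""
    if touch and teacher:
        weight += lr                       # coincidence -> potentiation
    fired = touch and (weight >= THRESHOLD)
    return weight, fired

w = 0.2
print(step(w, touch=True, teacher=False)[1])   # False: untrained, stays silent
for _ in range(3):                             # "teach" with touch + teacher
    w, _ = step(w, touch=True, teacher=True)
print(step(w, touch=True, teacher=False)[1])   # True: touch alone now fires
```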
Figure 19.
In-skin adaptivity for robots enabled by artificial synaptic devices. (a) Integrating stretchable synaptic transistors with stretchable triboelectric sensors for programmable control of a pneumatic soft robot for adaptive locomotion. From ref (443). Copyright The Authors, some rights reserved; exclusive licensee AAAS. Distributed under a CC BY-NC 4.0 license http://creativecommons.org/licenses/by-nc/4.0/. Reprinted with permission from AAAS. (b) In-skin teaching and learning of a robotic hand to acquire a pain reflex based on an artificial tactile neural pathway. (i) Illustration of the artificial tactile neural pathway. (ii) Input spiking signals (left) and postsynaptic currents (right) of the synaptic transistor before and after involving a teacher signal. (iii) The adaptive motion (withdrawal for self-protection) after teaching and learning. Image was reproduced with permission from ref (55). Copyright 2022 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science.
7. Challenges and Outlook
E-skins generally possess several advantageous features as sensing systems. Their mechanical properties are noteworthy: some e-skins demonstrate excellent durability, resisting external damage and mechanical stress,89,183,374 while others feature conformal207−210 or self-healing199,203 capabilities that enable them to withstand bending and stretching without compromising functionality. Moreover, multiple sensing modalities, such as pressure, temperature, and humidity, can be integrated into a single device.60,129,159−161,448 The interdisciplinary combination of e-skins with AI makes e-skin systems an attractive solution for various applications, including object recognition,223,409,410 environment awareness in robotics,219 monitoring of physiological activities and bodily movements in healthcare,351,352,403,449 as well as human–computer interaction.
Looking ahead, e-skins with AI are poised to make a significant impact in two main sectors. In the first, e-skins tailored for biological entities450 will enhance preventative, diagnostic, and therapeutic capabilities for both humans and animals by integrating real-time health monitoring with AI-driven data analysis. They will also aid in restoring and enhancing sensing functions through the integration of neuroscience knowledge and AI-modulated interfacial signals, specifically enabling a natural tactile experience with prosthetic devices.451 The second sector involves e-skins for inanimate entities such as humanoids, industrial robots, and smart devices needing flexible sensing interfaces. E-skins will endow them with human-like tactile perception, enhancing both their sensitivity and operational efficiency. These advances will push humans and robots beyond their current limits, paving the way for a future society where individuals benefit from personalized enhancements and robots handle most manual labor. Nonetheless, obstacles must be overcome to realize this vision, and we believe the following topics warrant further exploration (Figure 20).
Figure 20.

Future directions of AI-enabled e-skins. Publications on e-skins with AI have surged by over 260% since 2020, with more than 5,000 articles added in under five years.482
In AI-enabled e-skin design, development is largely hindered by a lack of high-quality data. Unlike the situation at the molecular scale, high-quality data for materials and sensors are insufficient in existing databases. Labor-intensive experiments or high-throughput simulation/computation are often needed to acquire data for AI model training and validation, and poor repeatability of device performance is unfavorable for acquiring massive and reliable data. Separately, it is challenging to choose valuable design factors and performance targets and to map out a design workflow that may need an ensemble of multiple different AI models. To address these challenges, a potential solution is to develop specialized AI tools to extract the distributed and unstructured data from the existing literature and build an open-source database featuring standardized data presentations of the materials and sensors used in e-skins. Potential tools include ChemDataExtractor,452 IBM DeepSearch,453 and LLMs. Such a resource, requiring input from the entire e-skin community, would be of great value for advancing e-skin design. Another solution is to develop AI-assisted robotic platforms that automate fabrication and characterization with minimal human intervention, although this would likely be an arduous task despite prior work on robotic platforms.454−456 Future directions also include model development with state-of-the-art LLMs and other generative models to generate new design hypotheses: LLMs excel in vision/text generation, and fine-tuned LLMs can interpret domain-specific knowledge and provide reasoning for certain designs. In addition, AI-enabled design of single-modal sensors incorporating more design features can enrich the design space, and AI-enabled design of multimodal sensors is a promising direction for e-skin systems suitable for real-world applications, such as automatically reconfigurable sensor systems.457
In the software layer of AI, there are concerns about multimodal AI, explainable AI, and ethics. Since multimodal sensing is a recognized trend in e-skins, multimodal AI models are desired to learn and analyze multimodal sensory data, which will enhance the capability of e-skin systems to perform complex tasks efficiently. Furthermore, real-world scenarios involve dynamic interactions and interference, which require AI models with generalizability and credibility; an explainable AI model enables us to understand the decision-making process. By analyzing the model’s reasoning, developers can identify crucial input signals and logical rules while filtering out irrelevant inputs. The implementation of AI algorithms in e-skins also raises several ethical concerns. Wearable e-skins for long-term health monitoring record many signals, which raises privacy issues. Deidentification should be considered, for example by restricting the recorded parameters or lowering the sampling frequency to an appropriate range so that excessive signals are not captured. Beyond privacy, AI models may introduce errors or bias, leading to faulty decision-making. This highlights the importance of human intervention in the development of AI-enabled e-skin systems, and accountability should be carefully considered in advance for any e-skin system in which decisions and actions are automated. Additionally, the rise of AI technology has heightened concerns regarding its high energy consumption, which limits its widespread application.458−460 It is therefore essential to develop energy-efficient AI algorithms for e-skin applications that save energy, enable local processing, and speed up AI tasks by reducing the data sent to cloud servers.
Potential strategies include designing specialized algorithms based on an in-depth understanding of the underlying physics of the tasks, refining model parameters to reduce redundancy, adjusting computational precision as needed, and designing dedicated neural network hardware.
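The precision-adjustment strategy can be made concrete with a toy weight-quantization sketch: rounding float weights to a small number of levels shrinks storage and compute while approximating the original values. The four-level choice and uniform quantization scheme are our illustrative assumptions.

```python
# Uniform quantization of model weights to a fixed number of levels.

def quantize(weights, levels=4):
    lo, hi = min(weights), max(weights)
    step = (hi - lo) / (levels - 1) or 1.0  # guard against all-equal weights
    return [lo + round((w - lo) / step) * step for w in weights]

w = [0.0, 0.1, 0.52, 1.0]
print(quantize(w))  # snapped to the levels 0, 1/3, 2/3, 1
```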
In the hardware layer of AI, the key issues to address are large-scale integration, uniform synaptic device arrays, and in situ computing. Presently, a single stretchable transistor is still at the scale of hundreds of micrometers; advancements in materials, devices, and fabrication methods are needed to realize large-scale integration (LSI) of circuits that combine computing functions with all of the sensor components of e-skin systems for on-sensor-site computing. A specific area of interest is synaptic arrays: currently, hardware-layer neural networks are typically demonstrated through simulation owing to the challenges of fabricating large, uniform arrays. Separately, existing applications of neuromorphic sensory computing systems are predominantly focused on pattern recognition, which poses challenges in data transformation, such as transforming temporal data into patterns; sensors therefore need to be designed with efficient patterning methods in mind. Emerging technologies such as edge computing461,462 and quantum computing463,464 can be integrated into AI-enabled e-skin systems to enhance their efficiency: edge computing could significantly improve real-time processing capabilities, reduce demands on data transmission bandwidth, and minimize the need for extensive hardware wiring, while quantum computing might offer breakthroughs in handling complex computations at unprecedented speeds. We also envision the emergence of novel e-skin sensors that achieve “intelligent” responses to external stimuli at the level of physical mechanisms, leading to intrinsic intelligence in e-skins. These intelligent responses may involve filtering, encoding, computation, judgment, inference, storage, learning, and more, effectively realizing a fusion of sensing and computing.
In terms of AI-enabled e-skin applications, we identify three limitations: the scale and diversity of data sets, standards of evaluation, and the advancement of AI algorithms. First, current data sets for e-skin applications are generally small, especially in classification tasks, where the number of distinct categories typically ranges from a few to a few dozen and the diversity of the real world is lacking. Larger and more diverse data sets are needed to establish practicability in e-skin applications. Second, the assessment of model performance and control strategies relies on common data sets and evaluation metrics. For instance, the Kinetics motion video data set has emerged as a standard benchmark for developing and assessing video classification models, while the Yale-CMU-Berkeley (YCB) object data set is widely used in robot manipulation. Although many e-skin applications are similar, such as object recognition and material classification, meaningful cross-comparisons of sensor and AI model performance across different studies are difficult without a common data set; high accuracy percentages in such contexts provide no meaningful basis for comparison. Third, the AI algorithms used are mostly traditional ML algorithms or basic deep learning algorithms with limited exposure to the latest AI technology. Their interpretation relies on handcrafted features, and these algorithms often encounter performance bottlenecks when processing and analyzing complex e-skin data. Moreover, most reported works concentrate on the device layer or the application layer, with AI limited to an auxiliary role rather than being systematically integrated across all technology layers of e-skins.
Hence, we anticipate several key forthcoming developments: (i) the application of large data sets in e-skin technology to facilitate real-world applications capable of handling unseen interactions; (ii) the creation of robust and unified e-skin AI models that eliminate the need for fine-tuning or retraining; (iii) the use of AI systems that interpret temporal data from e-skins to understand dynamic interactions, predict human actions, and enable adaptive responses in robots; and (iv) the introduction of intelligent error recognition and compensation to identify and correct potential errors in e-skin sensors or to notify users of such errors. These will allow AI-enabled e-skins to match or outperform human perception and enhance various aspects of our daily lives.
Besides the research-related topics above, regulations and standards are important in establishing guidelines for development and deployment of AI-enabled e-skins. E-skins, with their advancing capabilities, can acquire sensitive biometric signals and patterns. As such, well-defined guidelines should cover aspects such as user consent, data encryption, and access control to ensure secure handling of sensitive information throughout the entire lifecycle of AI-enabled e-skins, from development to deployment. Moreover, standardization efforts are necessary to ensure interoperability between e-skin systems and to promote the development of multimodal AI models. Eventually, a robust framework will be created that supports the safe and effective integration of AI-enabled e-skins into our daily lives.
It is also crucial to consider the broader implications of AI-enabled e-skins for employment, accessibility, and equity, since this technology has the potential to transform various aspects of our lives. Whether AI will take over human tasks has long been debated in the field of artificial intelligence.465,466 While new job opportunities may emerge, it is essential to prepare for these changes by investing in retraining and upskilling. Moreover, as with AI technology in general, AI-enabled e-skins could create new roles that require human judgment, creativity, and domain knowledge, which cannot be easily replicated.467,468 As for accessibility and equity, many works demonstrate that AI-enabled e-skins can enhance accessibility for individuals with disabilities and allow them to interact effectively.58,227,267,279,367 Researchers and developers should consider accessibility and affordability when implementing AI-enabled e-skins to ensure that everyone has access to affordable and effective solutions.
Market-related concerns also warrant close attention. In recent years, e-skins have shown significant growth in global market size, valued at USD 4.7 to 8.7 billion annually between 2021 and 2023 and expected to reach between USD 25.3 and 36.7 billion by the early 2030s.469−472 Although e-skin products have gained considerable scale in the market, their diversity and potential have not yet been fully realized. Currently, skin patches and e-skin wearables dominate the market, while other forms of e-skin applications, such as sense restoration in prosthetics and robotic skin, largely remain at the prototype stage.469−472 These products face major challenges, including sensor performance issues473 such as hysteresis, stability, and consistency, as well as biocompatibility, integration, material and manufacturing costs, and manufacturing challenges (e.g., large-scale mass production and quality control).
Additionally, constructing a complete e-skin system requires addressing associated functions such as signal processing, signal transmission, system power supply, and the integration of functionally matched materials. Integration with energy-harvesting technologies,474,475 such as solar,476 kinetic,477 thermal,478 and ambient electromagnetic479 energy, along with new functional materials including biodegradable,187 self-healing,196 breathable,204 moisture-permeable,480 and shape-memory481 materials could address the need for long-term functionality with minimal energy transfer steps. Meanwhile, the shortage of professionals, an incomplete industrial chain, the absence of unified industry standards, and medical compliance issues also impede further commercialization. Ultimately, from the user’s perspective, the practical application of products must meet the criteria of convenience, comfort, effectiveness, and cost-efficiency. Looking forward, it is anticipated that by strengthening cooperation between research and industry, these challenges can be overcome, thereby promoting the market expansion of e-skins and benefiting more people.
AI’s ability to enable diverse applications with high prediction accuracy will undoubtedly spur innovation, energizing further advancements in robotics, prosthetics, medicine, healthcare, and safe human–machine interfaces. Robots and systems with e-skins, capable of operating in diverse and extreme environments such as the deep sea or space, could help alleviate the global labor shortage stemming from an aging population by assisting the elderly and fulfilling roles once limited to the realm of science fiction.
Acknowledgments
This work is supported by the Agency for Science Technology and Research (A*STAR) grants M23NBK0090 and A18A8b0059.
Biographies
Xuemei Fu received her Ph.D. in Chemistry and Physics of Polymers at Fudan University and her B.Eng. in Polymer Materials and Engineering at Nanchang University. Her research interests are mainly on the design of polymer composites for flexible wearable electronic devices, especially fiber-shaped devices. She is currently working as a postdoctoral research fellow under the Department of Materials Science and Engineering at the National University of Singapore.
Wen Cheng earned his Ph.D. in Electronic Science and Engineering from Nanjing University and his Bachelor of Engineering in Electronic Information Engineering from the Hefei University of Technology. He specializes in flexible electronics, biomimetic sensors, and electronic skin for health monitoring and robotics. His expertise extends to pressure sensing, materials perception, innovative sensor mechanisms, and flexible circuit fabrication processes. He is currently working as a postdoctoral research fellow at the National University of Singapore’s Department of Materials Science and Engineering.
Guanxiang Wan received his Ph.D. in Materials Science and Engineering from the National University of Singapore. His research focuses on self-healing materials and their applications in optoelectronics.
Zijie Yang is currently a Ph.D. candidate in the Department of Materials Science and Engineering, National University of Singapore. He received his B.Eng. and M.Eng. in Electrical Engineering from Southwest Jiaotong University, Chengdu, China. His research focus is artificial intelligence-assisted colorimetric healthcare materials.
Benjamin C. K. Tee is a tenured Associate Professor at the National University of Singapore (NUS). He received his Ph.D. in Electrical Engineering from Stanford University in 2013 and B.Eng. in Electrical Engineering from the University of Michigan–Ann Arbor in 2003. He leads the Sensors.AI Research Lab inventing novel materials and nanomicrofabrication techniques to create cutting-edge flexible and stretchable electronic sensor devices for impactful applications ranging from biomedical technologies to human–machine interfaces.
Author Contributions
# X. Fu, W. Cheng, G. Wan, and Z. Yang contributed equally to this work; B. C. K. Tee supervised this work.
The authors declare no competing financial interest.
Special Issue
Published as part of the Chemical Reviews virtual special issue “Wearable Devices”.
References
- Handler A.; Ginty D. D. The Mechanosensory Neurons of Touch and Their Mechanisms of Activation. Nat. Rev. Neurosci. 2021, 22, 521–537. 10.1038/s41583-021-00489-x.
- Zimmerman A.; Bai L.; Ginty D. D. The Gentle Touch Receptors of Mammalian Skin. Science 2014, 346, 950–954. 10.1126/science.1254229.
- McGlone F.; Reilly D. The Cutaneous Sensory System. Neurosci. Biobehav. Rev. 2010, 34, 148–159. 10.1016/j.neubiorev.2009.08.004.
- Erzurumlu R. S.; Murakami Y.; Rijli F. M. Mapping the Face in the Somatosensory Brainstem. Nat. Rev. Neurosci. 2010, 11, 252–263. 10.1038/nrn2804.
- Johansson R. S.; Flanagan J. R. Coding and Use of Tactile Signals from the Fingertips in Object Manipulation Tasks. Nat. Rev. Neurosci. 2009, 10, 345–359. 10.1038/nrn2621.
- Hammock M. L.; Chortos A.; Tee B. C. K.; Tok J. B. H.; Bao Z. 25th Anniversary Article: The Evolution of Electronic Skin (E-Skin): A Brief History, Design Considerations, and Recent Progress. Adv. Mater. 2013, 25, 5997–6038. 10.1002/adma.201302240.
- Wang X.; Dong L.; Zhang H.; Yu R.; Pan C.; Wang Z. L. Recent Progress in Electronic Skin. Adv. Sci. 2015, 2, 1500169. 10.1002/advs.201500169.
- Someya T.; Amagai M. Toward a New Generation of Smart Skins. Nat. Biotechnol. 2019, 37, 382–388. 10.1038/s41587-019-0079-1.
- Yang J. C.; Mun J.; Kwon S. Y.; Park S.; Bao Z.; Park S. Electronic Skin: Recent Progress and Future Prospects for Skin-Attachable Devices for Health Monitoring, Robotics, and Prosthetics. Adv. Mater. 2019, 31, 1904765. 10.1002/adma.201904765.
- Ray T. R.; Choi J.; Bandodkar A. J.; Krishnan S.; Gutruf P.; Tian L.; Ghaffari R.; Rogers J. A. Bio-Integrated Wearable Systems: A Comprehensive Review. Chem. Rev. 2019, 119, 5461–5533. 10.1021/acs.chemrev.8b00573.
- Gao W.; Ota H.; Kiriya D.; Takei K.; Javey A. Flexible Electronics toward Wearable Sensing. Acc. Chem. Res. 2019, 52, 523–533. 10.1021/acs.accounts.8b00500.
- Zhang Z.; Wen F.; Sun Z.; Guo X.; He T.; Lee C. Artificial Intelligence-Enabled Sensing Technologies in the 5G/Internet of Things Era: From Virtual Reality/Augmented Reality to the Digital Twin. Adv. Intell. Syst. 2022, 4, 2100228. 10.1002/aisy.202100228.
- Badi M.; Wurth S.; Scarpato I.; Roussinova E.; Losanno E.; Bogaard A.; Delacombaz M.; Borgognon S.; C̆vanc̆ara P.; Fallegger F.; et al. Intrafascicular Peripheral Nerve Stimulation Produces Fine Functional Hand Movements in Primates. Sci. Transl. Med. 2021, 13, eabg6463. 10.1126/scitranslmed.abg6463.
- Losanno E.; Mender M.; Chestek C.; Shokur S.; Micera S. Neurotechnologies to Restore Hand Functions. Nat. Rev. Bioeng. 2023, 1, 390–407. 10.1038/s44222-023-00054-4.
- Bensmaia S. J.; Tyler D. J.; Micera S. Restoration of Sensory Information via Bionic Hands. Nat. Biomed. Eng. 2023, 7, 443–455. 10.1038/s41551-020-00630-8.
- Wang Y.; Adam M. L.; Zhao Y.; Zheng W.; Gao L.; Yin Z.; Zhao H. Machine Learning-Enhanced Flexible Mechanical Sensing. Nano-Micro Lett. 2023, 15, 55. 10.1007/s40820-023-01013-9.
- Xu C.; Solomon S. A.; Gao W. Artificial Intelligence-Powered Electronic Skin. Nat. Mach. Intell. 2023, 5, 1344–1355. 10.1038/s42256-023-00760-z.
- Bishop C. M. Pattern Recognition and Machine Learning; Springer-Verlag: New York, 2006.
- LeCun Y.; Bengio Y.; Hinton G. Deep Learning. Nature 2015, 521, 436–444. 10.1038/nature14539.
- Wurman P. R.; Barrett S.; Kawamoto K.; MacGlashan J.; Subramanian K.; Walsh T. J.; Capobianco R.; Devlic A.; Eckert F.; Fuchs F.; et al. Outracing Champion Gran Turismo Drivers with Deep Reinforcement Learning. Nature 2022, 602, 223–228. 10.1038/s41586-021-04357-7.
- Wang H.; Fu T.; Du Y.; Gao W.; Huang K.; Liu Z.; Chandak P.; Liu S.; Van Katwyk P.; Deac A.; et al. Scientific Discovery in the Age of Artificial Intelligence. Nature 2023, 620, 47–60. 10.1038/s41586-023-06221-2. [DOI] [PubMed] [Google Scholar]
- Trinh T. H.; Wu Y.; Le Q. V.; He H.; Luong T. Solving Olympiad Geometry without Human Demonstrations. Nature 2024, 625, 476–482. 10.1038/s41586-023-06747-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Davies A.; Veličković P.; Buesing L.; Blackwell S.; Zheng D.; Tomašev N.; Tanburn R.; Battaglia P.; Blundell C.; Juhász A.; et al. Advancing Mathematics by Guiding Human Intuition with AI. Nature 2021, 600, 70–74. 10.1038/s41586-021-04086-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Beeker Th. W.; During J.; Den Hertog A. Artificial Touch in a Hand-Prosthesis. Med. Biol. Eng. 1967, 5, 47–49. 10.1007/BF02478841. [DOI] [PubMed] [Google Scholar]
- Lumelsky V. J.Sensing, Intelligence, Motion: How Robots and Humans Move in an Unstructured World; John Wiley & Sons, Ltd, 2005. 10.1002/0471738204.ch8. [DOI] [Google Scholar]
- Someya T.; Sekitani T.; Iba S.; Kato Y.; Kawaguchi H.; Sakurai T. A Large-Area, Flexible Pressure Sensor Matrix with Organic Field-Effect Transistors for Artificial Skin Applications. Proc. Natl. Acad. Sci. U. S. A. 2004, 101, 9966–9970. 10.1073/pnas.0401918101. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Mannsfeld S. C. B.; Tee B. C.-K.; Stoltenberg R. M.; Chen C. V. H.-H.; Barman S.; Muir B. V. O.; Sokolov A. N.; Reese C.; Bao Z. Highly Sensitive Flexible Pressure Sensors with Microstructured Rubber Dielectric Layers. Nat. Mater. 2010, 9, 859–864. 10.1038/nmat2834. [DOI] [PubMed] [Google Scholar]
- Kim D.-H.; Lu N.; Ma R.; Kim Y.-S.; Kim R.-H.; Wang S.; Wu J.; Won S. M.; Tao H.; Islam A.; et al. Epidermal Electronics. Science 2011, 333, 838–843. 10.1126/science.1206157. [DOI] [PubMed] [Google Scholar]
- Lipomi D. J.; Vosgueritchian M.; Tee B. C.-K.; Hellstrom S. L.; Lee J. A.; Fox C. H.; Bao Z. Skin-like Pressure and Strain Sensors Based on Transparent Elastic Films of Carbon Nanotubes. Nat. Nanotechnol. 2011, 6, 788–792. 10.1038/nnano.2011.184. [DOI] [PubMed] [Google Scholar]
- Wang Y.; Yang T.; Lao J.; Zhang R.; Zhang Y.; Zhu M.; Li X.; Zang X.; Wang K.; Yu W.; et al. Ultra-Sensitive Graphene Strain Sensor for Sound Signal Acquisition and Recognition. Nano Res. 2015, 8, 1627–1636. 10.1007/s12274-014-0652-3. [DOI] [Google Scholar]
- Tee B. C.-K.; Chortos A.; Berndt A.; Nguyen A. K.; Tom A.; McGuire A.; Lin Z. C.; Tien K.; Bae W.-G.; Wang H.; et al. A Skin-Inspired Organic Digital Mechanoreceptor. Science 2015, 350, 313–316. 10.1126/science.aaa9306. [DOI] [PubMed] [Google Scholar]
- Gao W.; Emaminejad S.; Nyein H. Y. Y.; Challa S.; Chen K.; Peck A.; Fahad H. M.; Ota H.; Shiraki H.; Kiriya D.; et al. Fully Integrated Wearable Sensor Arrays for Multiplexed in Situ Perspiration Analysis. Nature 2016, 529, 509–514. 10.1038/nature16521. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Yi Z.; Zhang Y.; Peters J. Bioinspired Tactile Sensor for Surface Roughness Discrimination. Sens. Actuators Phys. 2017, 255, 46–53. 10.1016/j.sna.2016.12.021. [DOI] [Google Scholar]
- Son D.; Kang J.; Vardoulis O.; Kim Y.; Matsuhisa N.; Oh J. Y.; To J. W.; Mun J.; Katsumata T.; Liu Y.; et al. An Integrated Self-Healable Electronic Skin System Fabricated via Dynamic Reconstruction of a Nanostructured Conducting Network. Nat. Nanotechnol. 2018, 13, 1057–1065. 10.1038/s41565-018-0244-6. [DOI] [PubMed] [Google Scholar]
- Lee W. W.; Tan Y. J.; Yao H.; Li S.; See H. H.; Hon M.; Ng K. A.; Xiong B.; Ho J. S.; Tee B. C. K. A Neuro-Inspired Artificial Peripheral Nervous System for Scalable Electronic Skins. Sci. Robot. 2019, 4, eaax2198 10.1126/scirobotics.aax2198. [DOI] [PubMed] [Google Scholar]
- Sundaram S.; Kellnhofer P.; Li Y.; Zhu J.-Y.; Torralba A.; Matusik W. Learning the Signatures of the Human Grasp Using a Scalable Tactile Glove. Nature 2019, 569, 698–702. 10.1038/s41586-019-1234-z. [DOI] [PubMed] [Google Scholar]
- Li G.; Liu S.; Wang L.; Zhu R. Skin-Inspired Quadruple Tactile Sensors Integrated on a Robot Hand Enable Object Recognition. Sci. Robot. 2020, 5, eabc8134. 10.1126/scirobotics.abc8134.
- Ozer E.; Kufel J.; Myers J.; Biggs J.; Brown G.; Rana A.; Sou A.; Ramsdale C.; White S. A Hardwired Machine Learning Processing Engine Fabricated with Submicron Metal-Oxide Thin-Film Transistors on a Flexible Substrate. Nat. Electron. 2020, 3, 419–425. 10.1038/s41928-020-0437-5.
- Zhu M.; Sun Z.; Zhang Z.; Shi Q.; He T.; Liu H.; Chen T.; Lee C. Haptic-Feedback Smart Glove as a Creative Human-Machine Interface (HMI) for Virtual/Augmented Reality Applications. Sci. Adv. 2020, 6, eaaz8693. 10.1126/sciadv.aaz8693.
- Yao H.; Yang W.; Cheng W.; Tan Y. J.; See H. H.; Li S.; Ali H. P. A.; Lim B. Z. H.; Liu Z.; Tee B. C. K. Near-Hysteresis-Free Soft Tactile Electronic Skins for Wearables and Reliable Machine Learning. Proc. Natl. Acad. Sci. U. S. A. 2020, 117, 25352–25359. 10.1073/pnas.2010989117.
- Wang M.; Yan Z.; Wang T.; Cai P.; Gao S.; Zeng Y.; Wan C.; Wang H.; Pan L.; Yu J.; et al. Gesture Recognition Using a Bioinspired Learning Architecture That Integrates Visual Data with Somatosensory Data from Stretchable Sensors. Nat. Electron. 2020, 3, 563–570. 10.1038/s41928-020-0422-z.
- Zhou Z.; Chen K.; Li X.; Zhang S.; Wu Y.; Zhou Y.; Meng K.; Sun C.; He Q.; Fan W.; et al. Sign-to-Speech Translation Using Machine-Learning-Assisted Stretchable Sensor Arrays. Nat. Electron. 2020, 3, 571–578. 10.1038/s41928-020-0428-6.
- Lee J. H.; Heo J. S.; Kim Y.-J.; Eom J.; Jung H. J.; Kim J.-W.; Kim I.; Park H.-H.; Mo H. S.; Kim Y.-H.; et al. A Behavior-Learned Cross-Reactive Sensor Matrix for Intelligent Skin Perception. Adv. Mater. 2020, 32, 2000969. 10.1002/adma.202000969.
- Chun S.; Kim J.-S.; Yoo Y.; Choi Y.; Jung S. J.; Jang D.; Lee G.; Song K.-I.; Nam K. S.; Youn I.; et al. An Artificial Neural Tactile Sensing System. Nat. Electron. 2021, 4, 429–438. 10.1038/s41928-021-00585-x.
- Wang H. S.; Hong S. K.; Han J. H.; Jung Y. H.; Jeong H. K.; Im T. H.; Jeong C. K.; Lee B.-Y.; Kim G.; Yoo C. D.; et al. Biomimetic and Flexible Piezoelectric Mobile Acoustic Sensors with Multiresonant Ultrathin Structures for Machine Learning Biometrics. Sci. Adv. 2021, 7, eabe5683. 10.1126/sciadv.abe5683.
- Biggs J.; Myers J.; Kufel J.; Ozer E.; Craske S.; Sou A.; Ramsdale C.; Williamson K.; Price R.; White S. A Natively Flexible 32-Bit Arm Microprocessor. Nature 2021, 595, 532–536. 10.1038/s41586-021-03625-w.
- Luo Y.; Li Y.; Sharma P.; Shou W.; Wu K.; Foshey M.; Li B.; Palacios T.; Torralba A.; Matusik W. Learning Human–Environment Interactions Using Conformal Tactile Textiles. Nat. Electron. 2021, 4, 193–201. 10.1038/s41928-021-00558-0.
- Fang Y.; Zou Y.; Xu J.; Chen G.; Zhou Y.; Deng W.; Zhao X.; Roustaei M.; Hsiai T. K.; Chen J. Ambulatory Cardiovascular Monitoring via a Machine-Learning-Assisted Textile Triboelectric Sensor. Adv. Mater. 2021, 33, 2104178. 10.1002/adma.202104178.
- Kim K. K.; Kim M.; Pyun K.; Kim J.; Min J.; Koh S.; Root S. E.; Kim J.; Nguyen B.-N. T.; Nishio Y.; et al. A Substrate-Less Nanomesh Receptor with Meta-Learning for Rapid Hand Task Recognition. Nat. Electron. 2022, 6, 64–75. 10.1038/s41928-022-00888-7.
- Sun H.; Martius G. Guiding the Design of Superresolution Tactile Skins with Taxel Value Isolines Theory. Sci. Robot. 2022, 7, eabm0608. 10.1126/scirobotics.abm0608.
- Massari L.; Fransvea G.; D’Abbraccio J.; Filosa M.; Terruso G.; Aliperta A.; D’Alesio G.; Zaltieri M.; Schena E.; Palermo E.; et al. Functional Mimicry of Ruffini Receptors with Fibre Bragg Gratings and Deep Neural Networks Enables a Bio-Inspired Large-Area Tactile-Sensitive Skin. Nat. Mach. Intell. 2022, 4, 425–435. 10.1038/s42256-022-00487-3.
- Qu X.; Liu Z.; Tan P.; Wang C.; Liu Y.; Feng H.; Luo D.; Li Z.; Wang Z. L. Artificial Tactile Perception Smart Finger for Material Identification Based on Triboelectric Sensing. Sci. Adv. 2022, 8, eabq2521. 10.1126/sciadv.abq2521.
- Yang H.; Li J.; Lim K. Z.; Pan C.; Van Truong T.; Wang Q.; Li K.; Li S.; Xiao X.; Ding M.; et al. Automatic Strain Sensor Design via Active Learning and Data Augmentation for Soft Machines. Nat. Mach. Intell. 2022, 4, 84–94. 10.1038/s42256-021-00434-8.
- Kim T.; Kim J.; You I.; Oh J.; Kim S.-P.; Jeong U. Dynamic Tactility by Position-Encoded Spike Spectrum. Sci. Robot. 2022, 7, eabl5761. 10.1126/scirobotics.abl5761.
- Liu F.; Deswal S.; Christou A.; Baghini M. S.; Chirila R.; Shakthivel D.; Chakraborty M.; Dahiya R. Printed Synaptic Transistor-Based Electronic Skin for Robots to Feel and Learn. Sci. Robot. 2022, 7, eabl7286. 10.1126/scirobotics.abl7286.
- Hu D.; Giorgio-Serchi F.; Zhang S.; Yang Y. Stretchable E-Skin and Transformer Enable High-Resolution Morphological Reconstruction for Soft Robots. Nat. Mach. Intell. 2023, 5, 261–272. 10.1038/s42256-023-00622-8.
- Cho H.; Lee I.; Jang J.; Kim J.-H.; Lee H.; Park S.; Wang G. Real-Time Finger Motion Recognition Using Skin-Conformable Electronics. Nat. Electron. 2023, 6, 619–629. 10.1038/s41928-023-01012-z.
- Gong S.; Zhang X.; Nguyen X. A.; Shi Q.; Lin F.; Chauhan S.; Ge Z.; Cheng W. Hierarchically Resistive Skins as Specific and Multimetric On-Throat Wearable Biosensors. Nat. Nanotechnol. 2023, 18, 889–897. 10.1038/s41565-023-01383-6.
- Liu T.; Gou G.-Y.; Gao F.; Yao P.; Wu H.; Guo Y.; Yin M.; Yang J.; Wen T.; Zhao M.; et al. Multichannel Flexible Pulse Perception Array for Intelligent Disease Diagnosis System. ACS Nano 2023, 17, 5673–5685. 10.1021/acsnano.2c11897.
- Zheng X. T.; Yang Z.; Sutarlie L.; Thangaveloo M.; Yu Y.; Salleh N. A. B. M.; Chin J. S.; Xiong Z.; Becker D. L.; Loh X. J.; et al. Battery-Free and AI-Enabled Multiplexed Sensor Patches for Wound Monitoring. Sci. Adv. 2023, 9, eadg6670. 10.1126/sciadv.adg6670.
- Wang W.; Jiang Y.; Zhong D.; Zhang Z.; Choudhury S.; Lai J.-C.; Gong H.; Niu S.; Yan X.; Zheng Y.; et al. Neuromorphic Sensorimotor Loop Embodied by Monolithically Integrated, Low-Voltage, Soft e-Skin. Science 2023, 380, 735–742. 10.1126/science.ade0086.
- Cristianini N.; Shawe-Taylor J. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods; Cambridge University Press: Cambridge, 2000. 10.1017/CBO9780511801389.
- Krzywinski M.; Altman N. Classification and Regression Trees. Nat. Methods 2017, 14, 757–758. 10.1038/nmeth.4370.
- Ismail Fawaz H.; Forestier G.; Weber J.; Idoumghar L.; Muller P.-A. Deep Learning for Time Series Classification: A Review. Data Min. Knowl. Discovery 2019, 33, 917–963. 10.1007/s10618-019-00619-1.
- Li Z.; Liu F.; Yang W.; Peng S.; Zhou J. A Survey of Convolutional Neural Networks: Analysis, Applications, and Prospects. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 6999–7019. 10.1109/TNNLS.2021.3084827.
- Han K.; Wang Y.; Chen H.; Chen X.; Guo J.; Liu Z.; Tang Y.; Xiao A.; Xu C.; Xu Y.; et al. A Survey on Vision Transformer. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 87–110. 10.1109/TPAMI.2022.3152247.
- Chaudhari S.; Mithal V.; Polatkan G.; Ramanath R. An Attentive Survey of Attention Models. ACM Trans. Intell. Syst. Technol. 2021, 12, 1–32. 10.1145/3465055.
- Liu Y.; Pharr M.; Salvatore G. A. Lab-on-Skin: A Review of Flexible and Stretchable Electronics for Wearable Health Monitoring. ACS Nano 2017, 11, 9614–9635. 10.1021/acsnano.7b04898.
- Lim H.-R.; Kim H. S.; Qazi R.; Kwon Y.-T.; Jeong J.-W.; Yeo W.-H. Advanced Soft Materials, Sensor Integrations, and Applications of Wearable Flexible Hybrid Electronics in Healthcare, Energy, and Environment. Adv. Mater. 2020, 32, 1901924. 10.1002/adma.201901924.
- Chortos A.; Liu J.; Bao Z. Pursuing Prosthetic Electronic Skin. Nat. Mater. 2016, 15, 937–950. 10.1038/nmat4671.
- Liu F.; Deswal S.; Christou A.; Sandamirskaya Y.; Kaboli M.; Dahiya R. Neuro-Inspired Electronic Skin for Robots. Sci. Robot. 2022, 7, eabl7344. 10.1126/scirobotics.abl7344.
- Guo H.; Tan Y. J.; Chen G.; Wang Z.; Susanto G. J.; See H. H.; Yang Z.; Lim Z. W.; Yang L.; Tee B. C. K. Artificially Innervated Self-Healing Foams as Synthetic Piezo-Impedance Sensor Skins. Nat. Commun. 2020, 11, 5747. 10.1038/s41467-020-19531-0.
- Gao F.-L.; Min P.; Gao X.-Z.; Li C.; Zhang T.; Yu Z.-Z.; Li X. Integrated Temperature and Pressure Dual-Mode Sensors Based on Elastic PDMS Foams Decorated with Thermoelectric PEDOT:PSS and Carbon Nanotubes for Human Energy Harvesting and Electronic-Skin. J. Mater. Chem. A 2022, 10, 18256–18266. 10.1039/D2TA04862K.
- Zhang L.; Kumar K. S.; He H.; Cai C. J.; He X.; Gao H.; Yue S.; Li C.; Seet R. C.-S.; Ren H.; et al. Fully Organic Compliant Dry Electrodes Self-Adhesive to Skin for Long-Term Motion-Robust Epidermal Biopotential Monitoring. Nat. Commun. 2020, 11, 4683. 10.1038/s41467-020-18503-8.
- Zhao Y.; Tan Y. J.; Yang W.; Ling S.; Yang Z.; Teo J. T.; See H. H.; Lee D. K. H.; Lu D.; Li S.; et al. Scaling Metal-Elastomer Composites toward Stretchable Multi-Helical Conductive Paths for Robust Responsive Wearable Health Devices. Adv. Healthc. Mater. 2021, 10, 2100221. 10.1002/adhm.202100221.
- Xiang S.; Tang J.; Yang L.; Guo Y.; Zhao Z.; Zhang W. Deep Learning-Enabled Real-Time Personal Handwriting Electronic Skin with Dynamic Thermoregulating Ability. npj Flex. Electron. 2022, 6, 59. 10.1038/s41528-022-00195-3.
- Lee Y.; Chung J. W.; Lee G. H.; Kang H.; Kim J.-Y.; Bae C.; Yoo H.; Jeong S.; Cho H.; Kang S.-G.; et al. Standalone Real-Time Health Monitoring Patch Based on a Stretchable Organic Optoelectronic System. Sci. Adv. 2021, 7, eabg9180. 10.1126/sciadv.abg9180.
- Jung Y. H.; Yoo J.-Y.; Vázquez-Guardado A.; Kim J.-H.; Kim J.-T.; Luan H.; Park M.; Lim J.; Shin H.-S.; Su C.-J.; et al. A Wireless Haptic Interface for Programmable Patterns of Touch across Large Areas of the Skin. Nat. Electron. 2022, 5, 374–385. 10.1038/s41928-022-00765-3.
- Hua Q.; Sun J.; Liu H.; Bao R.; Yu R.; Zhai J.; Pan C.; Wang Z. L. Skin-Inspired Highly Stretchable and Conformable Matrix Networks for Multifunctional Sensing. Nat. Commun. 2018, 9, 244. 10.1038/s41467-017-02685-9.
- Zhu M.; Li J.; Yu J.; Li Z.; Ding B. Superstable and Intrinsically Self-Healing Fibrous Membrane with Bionic Confined Protective Structure for Breathable Electronic Skin. Angew. Chem., Int. Ed. 2022, 61, e202200226. 10.1002/anie.202200226.
- Zheng Y.; Li Y.; Zhao Y.; Lin X.; Luo S.; Wang Y.; Li L.; Teng C.; Wang X.; Xue G.; et al. Ultrathin and Highly Breathable Electronic Tattoo for Sensing Multiple Signals Imperceptibly on the Skin. Nano Energy 2023, 107, 108092. 10.1016/j.nanoen.2022.108092.
- Kim Y.; Suh J. M.; Shin J.; Liu Y.; Yeon H.; Qiao K.; Kum H. S.; Kim C.; Lee H. E.; Choi C.; et al. Chip-Less Wireless Electronic Skins by Remote Epitaxial Freestanding Compound Semiconductors. Science 2022, 377, 859–864. 10.1126/science.abn7325.
- Yu Y.; Li J.; Solomon S. A.; Min J.; Tu J.; Guo W.; Xu C.; Song Y.; Gao W. All-Printed Soft Human-Machine Interface for Robotic Physicochemical Sensing. Sci. Robot. 2022, 7, eabn0495. 10.1126/scirobotics.abn0495.
- Mountcastle V. The Columnar Organization of the Neocortex. Brain 1997, 120, 701–722. 10.1093/brain/120.4.701.
- Julius D. TRP Channels and Pain. Annu. Rev. Cell Dev. Biol. 2013, 29, 355–384. 10.1146/annurev-cellbio-101011-155833.
- Liao M.; Cao E.; Julius D.; Cheng Y. Structure of the TRPV1 Ion Channel Determined by Electron Cryo-Microscopy. Nature 2013, 504, 107–112. 10.1038/nature12822.
- Coste B.; Mathur J.; Schmidt M.; Earley T. J.; Ranade S.; Petrus M. J.; Dubin A. E.; Patapoutian A. Piezo1 and Piezo2 Are Essential Components of Distinct Mechanically Activated Cation Channels. Science 2010, 330, 55–60. 10.1126/science.1193270.
- Wang L.; Zhou H.; Zhang M.; Liu W.; Deng T.; Zhao Q.; Li Y.; Lei J.; Li X.; Xiao B. Structure and Mechanogating of the Mammalian Tactile Channel PIEZO2. Nature 2019, 573, 225–229. 10.1038/s41586-019-1505-8.
- Yang Y.; Yang Y.; Cao Y.; Wang X.; Chen Y.; Liu H.; Gao Y.; Wang J.; Liu C.; Wang W.; et al. Anti-Freezing, Resilient and Tough Hydrogels for Sensitive and Large-Range Strain and Pressure Sensors. Chem. Eng. J. 2021, 403, 126431. 10.1016/j.cej.2020.126431.
- Lee Y.; Myoung J.; Cho S.; Park J.; Kim J.; Lee H.; Lee Y.; Lee S.; Baig C.; Ko H. Bioinspired Gradient Conductivity and Stiffness for Ultrasensitive Electronic Skins. ACS Nano 2021, 15, 1795–1804. 10.1021/acsnano.0c09581.
- Ma Z.; Xiang X.; Shao L.; Zhang Y.; Gu J. Multifunctional Wearable Silver Nanowire Decorated Leather Nanocomposites for Joule Heating, Electromagnetic Interference Shielding and Piezoresistive Sensing. Angew. Chem., Int. Ed. 2022, 61, e202200705. 10.1002/anie.202200705.
- Gao Z.; Lou Z.; Han W.; Shen G. A Self-Healable Bifunctional Electronic Skin. ACS Appl. Mater. Interfaces 2020, 12, 24339–24347. 10.1021/acsami.0c05119.
- Lee S.; Franklin S.; Hassani F. A.; Yokota T.; Nayeem M. O. G.; Wang Y.; Leib R.; Cheng G.; Franklin D. W.; Someya T. Nanomesh Pressure Sensor for Monitoring Finger Manipulation without Sensory Interference. Science 2020, 370, 966–970. 10.1126/science.abc9735.
- Wang X.; Song W.-Z.; You M.-H.; Zhang J.; Yu M.; Fan Z.; Ramakrishna S.; Long Y.-Z. Bionic Single-Electrode Electronic Skin Unit Based on Piezoelectric Nanogenerator. ACS Nano 2018, 12, 8588–8596. 10.1021/acsnano.8b04244.
- Zhu M.; Lou M.; Abdalla I.; Yu J.; Li Z.; Ding B. Highly Shape Adaptive Fiber Based Electronic Skin for Sensitive Joint Motion Monitoring and Tactile Sensing. Nano Energy 2020, 69, 104429. 10.1016/j.nanoen.2019.104429.
- Wang X.; Zhang Y.; Zhang X.; Huo Z.; Li X.; Que M.; Peng Z.; Wang H.; Pan C. A Highly Stretchable Transparent Self-Powered Triboelectric Tactile Sensor with Metallized Nanofibers for Wearable Electronics. Adv. Mater. 2018, 30, 1706738. 10.1002/adma.201706738.
- Peng X.; Dong K.; Ye C.; Jiang Y.; Zhai S.; Cheng R.; Liu D.; Gao X.; Wang J.; Wang Z. L. A Breathable, Biodegradable, Antibacterial, and Self-Powered Electronic Skin Based on All-Nanofiber Triboelectric Nanogenerators. Sci. Adv. 2020, 6, eaba9624. 10.1126/sciadv.aba9624.
- Gogurla N.; Kim S. Self-Powered and Imperceptible Electronic Tattoos Based on Silk Protein Nanofiber and Carbon Nanotubes for Human–Machine Interfaces. Adv. Energy Mater. 2021, 11, 2100801. 10.1002/aenm.202100801.
- An J.; Chen P.; Wang Z.; Berbille A.; Pang H.; Jiang Y.; Jiang T.; Wang Z. L. Biomimetic Hairy Whiskers for Robotic Skin Tactility. Adv. Mater. 2021, 33, 2101891. 10.1002/adma.202101891.
- Song Z.; Yin J.; Wang Z.; Lu C.; Yang Z.; Zhao Z.; Lin Z.; Wang J.; Wu C.; Cheng J.; et al. A Flexible Triboelectric Tactile Sensor for Simultaneous Material and Texture Recognition. Nano Energy 2022, 93, 106798. 10.1016/j.nanoen.2021.106798.
- Park J.; Lee Y.; Barbee M. H.; Cho S.; Cho S.; Shanker R.; Kim J.; Myoung J.; Kim M. P.; Baig C.; et al. A Hierarchical Nanoparticle-in-Micropore Architecture for Enhanced Mechanosensitivity and Stretchability in Mechanochromic Electronic Skins. Adv. Mater. 2019, 31, 1808148. 10.1002/adma.201808148.
- Li J.; Yuan Z.; Han X.; Wang C.; Huo Z.; Lu Q.; Xiong M.; Ma X.; Gao W.; Pan C. Biologically Inspired Stretchable, Multifunctional, and 3D Electronic Skin by Strain Visualization and Triboelectric Pressure Sensing. Small Sci. 2022, 2, 2100083. 10.1002/smsc.202100083.
- Zhang H.; Chen H.; Lee J.-H.; Kim E.; Chan K.-Y.; Venkatesan H.; Shen X.; Yang J.; Kim J.-K. Mechanochromic Optical/Electrical Skin for Ultrasensitive Dual-Signal Sensing. ACS Nano 2023, 17, 5921–5934. 10.1021/acsnano.3c00015.
- Tang W.; Sun Q.; Wang Z. L. Self-Powered Sensing in Wearable Electronics—A Paradigm Shift Technology. Chem. Rev. 2023, 123, 12105–12134. 10.1021/acs.chemrev.3c00305.
- Takei K.; Takahashi T.; Ho J. C.; Ko H.; Gillies A. G.; Leu P. W.; Fearing R. S.; Javey A. Nanowire Active-Matrix Circuitry for Low-Voltage Macroscale Artificial Skin. Nat. Mater. 2010, 9, 821–826. 10.1038/nmat2835.
- Wu W.; Wen X.; Wang Z. L. Taxel-Addressable Matrix of Vertical-Nanowire Piezotronic Transistors for Active and Adaptive Tactile Imaging. Science 2013, 340, 952–957. 10.1126/science.1234855.
- Yeom C.; Chen K.; Kiriya D.; Yu Z.; Cho G.; Javey A. Large-Area Compliant Tactile Sensors Using Printed Carbon Nanotube Active-Matrix Backplanes. Adv. Mater. 2015, 27, 1561–1566. 10.1002/adma.201404850.
- Schwartz G.; Tee B. C.-K.; Mei J.; Appleton A. L.; Kim D. H.; Wang H.; Bao Z. Flexible Polymer Transistors with High Pressure Sensitivity for Application in Electronic Skin and Health Monitoring. Nat. Commun. 2013, 4, 1859. 10.1038/ncomms2832.
- Yao H.; Sun T.; Chiam J. S.; Tan M.; Ho K. Y.; Liu Z.; Tee B. C. K. Augmented Reality Interfaces Using Virtual Customization of Microstructured Electronic Skin Sensor Sensitivity Performances. Adv. Funct. Mater. 2021, 31, 2008650. 10.1002/adfm.202008650.
- Yang J. C.; Kim J.-O.; Oh J.; Kwon S. Y.; Sim J. Y.; Kim D. W.; Choi H. B.; Park S. Microstructured Porous Pyramid-Based Ultrahigh Sensitive Pressure Sensor Insensitive to Strain and Temperature. ACS Appl. Mater. Interfaces 2019, 11, 19472–19480. 10.1021/acsami.9b03261.
- Chen M.; Luo W.; Xu Z.; Zhang X.; Xie B.; Wang G.; Han M. An Ultrahigh Resolution Pressure Sensor Based on Percolative Metal Nanoparticle Arrays. Nat. Commun. 2019, 10, 4024. 10.1038/s41467-019-12030-x.
- Yao H.; Li P.; Cheng W.; Yang W.; Yang Z.; Ali H. P. A.; Guo H.; Tee B. C. K. Environment-Resilient Graphene Vibrotactile Sensitive Sensors for Machine Intelligence. ACS Mater. Lett. 2020, 2, 986–992. 10.1021/acsmaterialslett.0c00160.
- Wang H. L.; Kuang S. Y.; Li H. Y.; Wang Z. L.; Zhu G. Large-Area Integrated Triboelectric Sensor Array for Wireless Static and Dynamic Pressure Detection and Mapping. Small 2020, 16, 1906352. 10.1002/smll.201906352.
- Kar E.; Ghosh P.; Pratihar S.; Tavakoli M.; Sen S. SiO2 Nanoparticles Incorporated Poly(Vinylidene) Fluoride Composite for Efficient Piezoelectric Energy Harvesting and Dual-Mode Sensing. Energy Technol. 2023, 11, 2201143. 10.1002/ente.202201143.
- Oh Y. S.; Kim J.-H.; Xie Z.; Cho S.; Han H.; Jeon S. W.; Park M.; Namkoong M.; Avila R.; Song Z.; et al. Battery-Free, Wireless Soft Sensors for Continuous Multi-Site Measurements of Pressure and Temperature from Patients at Risk for Pressure Injuries. Nat. Commun. 2021, 12, 5008. 10.1038/s41467-021-25324-w.
- Shin J.; Jeong B.; Kim J.; Nam V. B.; Yoon Y.; Jung J.; Hong S.; Lee H.; Eom H.; Yeo J.; et al. Sensitive Wearable Temperature Sensor with Seamless Monolithic Integration. Adv. Mater. 2020, 32, 1905527. 10.1002/adma.201905527.
- Neto J.; Chirila R.; Dahiya A. S.; Christou A.; Shakthivel D.; Dahiya R. Skin-Inspired Thermoreceptors-Based Electronic Skin for Biomimicking Thermal Pain Reflexes. Adv. Sci. 2022, 9, 2201525. 10.1002/advs.202201525.
- Zhang F.; Zang Y.; Huang D.; Di C.; Zhu D. Flexible and Self-Powered Temperature–Pressure Dual-Parameter Sensors Using Microstructure-Frame-Supported Organic Thermoelectric Materials. Nat. Commun. 2015, 6, 8356. 10.1038/ncomms9356.
- Jia Y.; Jiang Q.; Sun H.; Liu P.; Hu D.; Pei Y.; Liu W.; Crispin X.; Fabiano S.; Ma Y.; et al. Wearable Thermoelectric Materials and Devices for Self-Powered Electronic Systems. Adv. Mater. 2021, 33, 2102990. 10.1002/adma.202102990.
- Chen Y.; Lei H.; Gao Z.; Liu J.; Zhang F.; Wen Z.; Sun X. Energy Autonomous Electronic Skin with Direct Temperature-Pressure Perception. Nano Energy 2022, 98, 107273. 10.1016/j.nanoen.2022.107273.
- Shin Y.-E.; Park Y.-J.; Ghosh S. K.; Lee Y.; Park J.; Ko H. Ultrasensitive Multimodal Tactile Sensors with Skin-Inspired Microstructures through Localized Ferroelectric Polarization. Adv. Sci. 2022, 9, 2105423. 10.1002/advs.202105423.
- Hong S. Y.; Kim M. S.; Park H.; Jin S. W.; Jeong Y. R.; Kim J. W.; Lee Y. H.; Sun L.; Zi G.; Ha J. S. High-Sensitivity, Skin-Attachable, and Stretchable Array of Thermo-Responsive Suspended Gate Field-Effect Transistors with Thermochromic Display. Adv. Funct. Mater. 2019, 29, 1807679. 10.1002/adfm.201807679.
- Xiong Y.; Han J.; Wang Y.; Wang Z. L.; Sun Q. Emerging Iontronic Sensing: Materials, Mechanisms, and Applications. Research 2022, 2022, 9867378. 10.34133/2022/9867378.
- Liu Y.; Zhao C.; Xiong Y.; Yang J.; Jiao H.; Zhang Q.; Cao R.; Wang Z. L.; Sun Q. Versatile Ion-Gel Fibrous Membrane for Energy-Harvesting Iontronic Skin. Adv. Funct. Mater. 2023, 33, 2303723. 10.1002/adfm.202303723.
- Wu J.; Wu Z.; Wei Y.; Ding H.; Huang W.; Gui X.; Shi W.; Shen Y.; Tao K.; Xie X. Ultrasensitive and Stretchable Temperature Sensors Based on Thermally Stable and Self-Healing Organohydrogels. ACS Appl. Mater. Interfaces 2020, 12, 19069–19079. 10.1021/acsami.0c04359.
- Takei K.; Gao W.; Wang C.; Javey A. Physical and Chemical Sensing With Electronic Skin. Proc. IEEE 2019, 107, 2155–2167. 10.1109/JPROC.2019.2907317.
- Koh A.; Kang D.; Xue Y.; Lee S.; Pielak R. M.; Kim J.; Hwang T.; Min S.; Banks A.; Bastien P.; et al. A Soft, Wearable Microfluidic Device for the Capture, Storage, and Colorimetric Sensing of Sweat. Sci. Transl. Med. 2016, 8, 366ra165. 10.1126/scitranslmed.aaf2593.
- Bandodkar A. J.; Gutruf P.; Choi J.; Lee K.; Sekine Y.; Reeder J. T.; Jeang W. J.; Aranyosi A. J.; Lee S. P.; Model J. B.; et al. Battery-Free, Skin-Interfaced Microfluidic/Electronic Systems for Simultaneous Electrochemical, Colorimetric, and Volumetric Analysis of Sweat. Sci. Adv. 2019, 5, eaav3294. 10.1126/sciadv.aav3294.
- Yu Y.; Nassar J.; Xu C.; Min J.; Yang Y.; Dai A.; Doshi R.; Huang A.; Song Y.; Gehlhar R.; et al. Biofuel-Powered Soft Electronic Skin with Multiplexed and Wireless Sensing for Human-Machine Interfaces. Sci. Robot. 2020, 5, eaaz7946. 10.1126/scirobotics.aaz7946.
- Manjakkal L.; Dang W.; Yogeswaran N.; Dahiya R. Textile-Based Potentiometric Electrochemical pH Sensor for Wearable Applications. Biosensors 2019, 9, 14. 10.3390/bios9010014.
- Lin H.; Tan J.; Zhu J.; Lin S.; Zhao Y.; Yu W.; Hojaiji H.; Wang B.; Yang S.; Cheng X.; et al. A Programmable Epidermal Microfluidic Valving System for Wearable Biofluid Management and Contextual Biomarker Analysis. Nat. Commun. 2020, 11, 4405. 10.1038/s41467-020-18238-6.
- Liao C.; Mak C.; Zhang M.; Chan H. L. W.; Yan F. Flexible Organic Electrochemical Transistors for Highly Selective Enzyme Biosensors and Used for Saliva Testing. Adv. Mater. 2015, 27, 676–681. 10.1002/adma.201404378.
- Li Y.; Wang N.; Yang A.; Ling H.; Yan F. Biomimicking Stretchable Organic Electrochemical Transistor. Adv. Electron. Mater. 2019, 5, 1900566. 10.1002/aelm.201900566.
- Demuru S.; Huang C.-H.; Parvez K.; Worsley R.; Mattana G.; Piro B.; Noël V.; Casiraghi C.; Briand D. All-Inkjet-Printed Graphene-Gated Organic Electrochemical Transistors on Polymeric Foil as Highly Sensitive Enzymatic Biosensors. ACS Appl. Nano Mater. 2022, 5, 1664–1673. 10.1021/acsanm.1c04434.
- Nakata S.; Arie T.; Akita S.; Takei K. Wearable, Flexible, and Multifunctional Healthcare Device with an ISFET Chemical Sensor for Simultaneous Sweat pH and Skin Temperature Monitoring. ACS Sens. 2017, 2, 443–448. 10.1021/acssensors.7b00047.
- Cai S.; Xu C.; Jiang D.; Yuan M.; Zhang Q.; Li Z.; Wang Y. Air-Permeable Electrode for Highly Sensitive and Noninvasive Glucose Monitoring Enabled by Graphene Fiber Fabrics. Nano Energy 2022, 93, 106904. 10.1016/j.nanoen.2021.106904.
- Wu H.; Yang G.; Zhu K.; Liu S.; Guo W.; Jiang Z.; Li Z. Materials, Devices, and Systems of On-Skin Electrodes for Electrophysiological Monitoring and Human–Machine Interfaces. Adv. Sci. 2021, 8, 2001938. 10.1002/advs.202001938.
- Heng W.; Solomon S.; Gao W. Flexible Electronics and Devices as Human–Machine Interfaces for Medical Robotics. Adv. Mater. 2022, 34, 2107902. 10.1002/adma.202107902.
- Li Z.; Guo W.; Huang Y.; Zhu K.; Yi H.; Wu H. On-Skin Graphene Electrodes for Large Area Electrophysiological Monitoring and Human-Machine Interfaces. Carbon 2020, 164, 164–170. 10.1016/j.carbon.2020.03.058.
- Qiao Y.; Li X.; Wang J.; Ji S.; Hirtz T.; Tian H.; Jian J.; Cui T.; Dong Y.; Xu X.; et al. Intelligent and Multifunctional Graphene Nanomesh Electronic Skin with High Comfort. Small 2022, 18, 2104810. 10.1002/smll.202104810.
- Won P.; Park J. J.; Lee T.; Ha I.; Han S.; Choi M.; Lee J.; Hong S.; Cho K.-J.; Ko S. H. Stretchable and Transparent Kirigami Conductor of Nanowire Percolation Network for Electronic Skin Applications. Nano Lett. 2019, 19, 6087–6096. 10.1021/acs.nanolett.9b02014.
- Wang Y.; Yin L.; Bai Y.; Liu S.; Wang L.; Zhou Y.; Hou C.; Yang Z.; Wu H.; Ma J.; et al. Electrically Compensated, Tattoo-like Electrodes for Epidermal Electrophysiology at Scale. Sci. Adv. 2020, 6, eabd0996. 10.1126/sciadv.abd0996.
- Yeon H.; Lee H.; Kim Y.; Lee D.; Lee Y.; Lee J.-S.; Shin J.; Choi C.; Kang J.-H.; Suh J. M.; et al. Long-Term Reliable Physical Health Monitoring by Sweat Pore–Inspired Perforated Electronic Skins. Sci. Adv. 2021, 7, eabg8459 10.1126/sciadv.abg8459. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lochner C. M.; Khan Y.; Pierre A.; Arias A. C. All-Organic Optoelectronic Sensor for Pulse Oximetry. Nat. Commun. 2014, 5, 5745. 10.1038/ncomms6745. [DOI] [PubMed] [Google Scholar]
- Kaltenbrunner M.; White M. S.; Głowacki E. D.; Sekitani T.; Someya T.; Sariciftci N. S.; Bauer S. Ultrathin and Lightweight Organic Solar Cells with High Flexibility. Nat. Commun. 2012, 3, 770. 10.1038/ncomms1772. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Photosynthetic Bioelectronic Sensors for Touch Perception, UV-Detection, and Nanopower Generation: Toward Self-Powered E-Skins. Adv. Mater. 2018, 30, 1802290 10.1002/adma.201802290. [DOI] [PubMed] [Google Scholar]
- Jinno H.; Yokota T.; Koizumi M.; Yukita W.; Saito M.; Osaka I.; Fukuda K.; Someya T. Self-Powered Ultraflexible Photonic Skin for Continuous Bio-Signal Detection via Air-Operation-Stable Polymer Light-Emitting Diodes. Nat. Commun. 2021, 12, 2234. 10.1038/s41467-021-22558-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Li D.; Zhou J.; Yao K.; Liu S.; He J.; Su J.; Qu Q.; Gao Y.; Song Z.; Yiu C.; et al. Touch IoT Enabled by Wireless Self-Sensing and Haptic-Reproducing Electronic Skin. Sci. Adv. 2022, 8, eade2450 10.1126/sciadv.ade2450. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Jung Y.; Choi J.; Yoon Y.; Park H.; Lee J.; Ko S. H. Soft Multi-Modal Thermoelectric Skin for Dual Functionality of Underwater Energy Harvesting and Thermoregulation. Nano Energy 2022, 95, 107002 10.1016/j.nanoen.2022.107002. [DOI] [Google Scholar]
- Park M.; Yoo J.-Y.; Yang T.; Jung Y. H.; Vázquez-Guardado A.; Li S.; Kim J.-H.; Shin J.; Maeng W.-Y.; Lee G.; et al. Skin-Integrated Systems for Power Efficient, Programmable Thermal Sensations across Large Body Areas. Proc. Natl. Acad. Sci. U. S. A. 2023, 120, e2217828120 10.1073/pnas.2217828120. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Qi J.; Gao F.; Sun G.; Yeo J. C.; Lim C. T. HaptGlove—Untethered Pneumatic Glove for Multimode Haptic Feedback in Reality–Virtuality Continuum. Adv. Sci. 2023, 10, 2301044 10.1002/advs.202301044. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Yang T.-H.; Kim J. R.; Jin H.; Gil H.; Koo J.-H.; Kim H. J. Recent Advances and Opportunities of Active Materials for Haptic Technologies in Virtual and Augmented Reality. Adv. Funct. Mater. 2021, 31, 2008831 10.1002/adfm.202008831. [DOI] [Google Scholar]
- Yu M.; Cheng X.; Peng S.; Cao Y.; Lu Y.; Li B.; Feng X.; Zhang Y.; Wang H.; Jiao Z.; et al. A Self-Sensing Soft Pneumatic Actuator with Closed-Loop Control for Haptic Feedback Wearable Devices. Mater. Des. 2022, 223, 111149 10.1016/j.matdes.2022.111149. [DOI] [Google Scholar]
- Zhu M.; Sun Z.; Lee C. Soft Modular Glove with Multimodal Sensing and Augmented Haptic Feedback Enabled by Materials’ Multifunctionalities. ACS Nano 2022, 16, 14097–14110. 10.1021/acsnano.2c04043. [DOI] [PubMed] [Google Scholar]
- Liu Y.; Yiu C. K.; Zhao Z.; Liu S.; Huang X.; Park W.; Su J.; Zhou J.; Wong T. H.; Yao K.; et al. Skin-Integrated Haptic Interfaces Enabled by Scalable Mechanical Actuators for Virtual Reality. IEEE Internet Things J. 2023, 10, 653–663. 10.1109/JIOT.2022.3203417. [DOI] [Google Scholar]
- Ji X.; Liu X.; Cacucciolo V.; Civet Y.; El Haitami A.; Cantin S.; Perriard Y.; Shea H. Untethered Feel-Through Haptics Using 18-μm-Thick Dielectric Elastomer Actuators. Adv. Funct. Mater. 2021, 31, 2006639 10.1002/adfm.202006639. [DOI] [Google Scholar]
- Qiu W.; Li Z.; Wang G.; Peng Y.; Zhang M.; Wang X.; Zhong J.; Lin L. A Moisture-Resistant Soft Actuator with Low Driving Voltages for Haptic Stimulations in Virtual Games. ACS Appl. Mater. Interfaces 2022, 14, 31257–31266. 10.1021/acsami.2c06209. [DOI] [PubMed] [Google Scholar]
- Chen X.; Gong L.; Wei L.; Yeh S.-C.; Da Xu L.; Zheng L.; Zou Z. A Wearable Hand Rehabilitation System With Soft Gloves. IEEE Trans. Ind. Inform. 2021, 17, 943–952. 10.1109/TII.2020.3010369. [DOI] [Google Scholar]
- Shi Y.; Wang F.; Tian J.; Li S.; Fu E.; Nie J.; Lei R.; Ding Y.; Chen X.; Wang Z. L. Self-Powered Electro-Tactile System for Virtual Tactile Experiences. Sci. Adv. 2021, 7, eabe2943 10.1126/sciadv.abe2943. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Yao K.; Zhou J.; Huang Q.; Wu M.; Yiu C. K.; Li J.; Huang X.; Li D.; Su J.; Hou S.; et al. Encoding of Tactile Information in Hand via Skin-Integrated Wireless Haptic Interface. Nat. Mach. Intell. 2022, 4, 893–903. 10.1038/s42256-022-00543-y. [DOI] [Google Scholar]
- Yu X.; Xie Z.; Yu Y.; Lee J.; Vazquez-Guardado A.; Luan H.; Ruban J.; Ning X.; Akhtar A.; Li D.; et al. Skin-Integrated Wireless Haptic Interfaces for Virtual and Augmented Reality. Nature 2019, 575, 473–479. 10.1038/s41586-019-1687-0. [DOI] [PubMed] [Google Scholar]
- Keef C. V.; Kayser L. V.; Tronboll S.; Carpenter C. W.; Root N. B.; Finn M. III; O’Connor T. F.; Abuhamdieh S. N.; Davies D. M.; Runser R.; et al. Virtual Texture Generated Using Elastomeric Conductive Block Copolymer in a Wireless Multimodal Haptic Glove. Adv. Intell. Syst. 2020, 2, 2000018 10.1002/aisy.202000018. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Shi X.; Zuo Y.; Zhai P.; Shen J.; Yang Y.; Gao Z.; Liao M.; Wu J.; Wang J.; Xu X.; et al. Large-Area Display Textiles Integrated with Functional Systems. Nature 2021, 591, 240–245. 10.1038/s41586-021-03295-8. [DOI] [PubMed] [Google Scholar]
- Zhou Y.; Zhao C.; Wang J.; Li Y.; Li C.; Zhu H.; Feng S.; Cao S.; Kong D. Stretchable High-Permittivity Nanocomposites for Epidermal Alternating-Current Electroluminescent Displays. ACS Mater. Lett. 2019, 1, 511–518. 10.1021/acsmaterialslett.9b00376. [DOI] [Google Scholar]
- Zhang Z.; Wang W.; Jiang Y.; Wang Y.-X.; Wu Y.; Lai J.-C.; Niu S.; Xu C.; Shih C.-C.; Wang C.; et al. High-Brightness All-Polymer Stretchable LED with Charge-Trapping Dilution. Nature 2022, 603, 624–630. 10.1038/s41586-022-04400-1. [DOI] [PubMed] [Google Scholar]
- Choi M.; Bae S.-R.; Hu L.; Hoang A. T.; Kim S. Y.; Ahn J.-H. Full-Color Active-Matrix Organic Light-Emitting Diode Display on Human Skin Based on a Large-Area MoS2 Backplane. Sci. Adv. 2020, 6, eabb5898 10.1126/sciadv.abb5898. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Rogers J. A.; Someya T.; Huang Y. Materials and Mechanics for Stretchable Electronics. Science 2010, 327, 1603–1607. 10.1126/science.1182383. [DOI] [PubMed] [Google Scholar]
- Wagner S.; Lacour S. P.; Jones J.; Hsu P. I.; Sturm J. C.; Li T.; Suo Z. Electronic Skin: Architecture and Components. Phys. E Low-Dimens. Syst. Nanostructures 2004, 25, 326–334. 10.1016/j.physe.2004.06.032. [DOI] [Google Scholar]
- Kim D.-H.; Song J.; Choi W. M.; Kim H.-S.; Kim R.-H.; Liu Z.; Huang Y. Y.; Hwang K.-C.; Zhang Y.-W.; Rogers J. A. Materials and Noncoplanar Mesh Designs for Integrated Circuits with Linear Elastic Responses to Extreme Mechanical Deformations. Proc. Natl. Acad. Sci. U. S. A. 2008, 105, 18675–18680. 10.1073/pnas.0807476105. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wang S.; Xu J.; Wang W.; Wang G.-J. N.; Rastak R.; Molina-Lopez F.; Chung J. W.; Niu S.; Feig V. R.; Lopez J.; et al. Skin Electronics from Scalable Fabrication of an Intrinsically Stretchable Transistor Array. Nature 2018, 555, 83–88. 10.1038/nature25494. [DOI] [PubMed] [Google Scholar]
- Marchiori B.; Delattre R.; Hannah S.; Blayac S.; Ramuz M. Laser-Patterned Metallic Interconnections for All Stretchable Organic Electrochemical Transistors. Sci. Rep. 2018, 8, 8477. 10.1038/s41598-018-26731-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Xu F.; Wu M.-Y.; Safron N. S.; Roy S. S.; Jacobberger R. M.; Bindl D. J.; Seo J.-H.; Chang T.-H.; Ma Z.; Arnold M. S. Highly Stretchable Carbon Nanotube Transistors with Ion Gel Gate Dielectrics. Nano Lett. 2014, 14, 682–686. 10.1021/nl403941a. [DOI] [PubMed] [Google Scholar]
- Xu X.; Zuo Y.; Cai S.; Tao X.; Zhang Z.; Zhou X.; He S.; Fang X.; Peng H. Three-Dimensional Helical Inorganic Thermoelectric Generators and Photodetectors for Stretchable and Wearable Electronic Devices. J. Mater. Chem. C 2018, 6, 4866–4872. 10.1039/C8TC01183D. [DOI] [Google Scholar]
- Sekitani T.; Noguchi Y.; Hata K.; Fukushima T.; Aida T.; Someya T. A Rubberlike Stretchable Active Matrix Using Elastic Conductors. Science 2008, 321, 1468–1472. 10.1126/science.1160309. [DOI] [PubMed] [Google Scholar]
- Matsuhisa N.; Inoue D.; Zalar P.; Jin H.; Matsuba Y.; Itoh A.; Yokota T.; Hashizume D.; Someya T. Printable Elastic Conductors by in Situ Formation of Silver Nanoparticles from Silver Flakes. Nat. Mater. 2017, 16, 834–840. 10.1038/nmat4904. [DOI] [PubMed] [Google Scholar]
- Cao Y.; Tan Y. J.; Li S.; Lee W. W.; Guo H.; Cai Y.; Wang C.; Tee B. C.-K. Self-Healing Electronic Skins for Aquatic Environments. Nat. Electron. 2019, 2, 75–82. 10.1038/s41928-019-0206-5. [DOI] [Google Scholar]
- Ma Z.; Huang Q.; Xu Q.; Zhuang Q.; Zhao X.; Yang Y.; Qiu H.; Yang Z.; Wang C.; Chai Y.; et al. Permeable Superelastic Liquid-Metal Fibre Mat Enables Biocompatible and Monolithic Stretchable Electronics. Nat. Mater. 2021, 20, 859–868. 10.1038/s41563-020-00902-3. [DOI] [PubMed] [Google Scholar]
- Oh J. Y.; Rondeau-Gagné S.; Chiu Y.-C.; Chortos A.; Lissel F.; Wang G.-J. N.; Schroeder B. C.; Kurosawa T.; Lopez J.; Katsumata T.; et al. Intrinsically Stretchable and Healable Semiconducting Polymer for Organic Transistors. Nature 2016, 539, 411–415. 10.1038/nature20102. [DOI] [PubMed] [Google Scholar]
- Xu J.; Wang S.; Wang G.-J. N.; Zhu C.; Luo S.; Jin L.; Gu X.; Chen S.; Feig V. R.; To J. W. F.; et al. Highly Stretchable Polymer Semiconductor Films through the Nanoconfinement Effect. Science 2017, 355, 59–64. 10.1126/science.aah4496. [DOI] [PubMed] [Google Scholar]
- Lill A. T.; Eftaiha A. F.; Huang J.; Yang H.; Seifrid M.; Wang M.; Bazan G. C.; Nguyen T.-Q. High-k Fluoropolymer Gate Dielectric in Electrically Stable Organic Field-Effect Transistors. ACS Appl. Mater. Interfaces 2019, 11, 15821–15828. 10.1021/acsami.8b20827. [DOI] [PubMed] [Google Scholar]
- Tan Y. J.; Godaba H.; Chen G.; Tan S. T. M.; Wan G.; Li G.; Lee P. M.; Cai Y.; Li S.; Shepherd R. F.; et al. A Transparent, Self-Healing and High-κ Dielectric for Low-Field-Emission Stretchable Optoelectronics. Nat. Mater. 2020, 19, 182–188. 10.1038/s41563-019-0548-4. [DOI] [PubMed] [Google Scholar]
- Pu X.; Liu M.; Chen X.; Sun J.; Du C.; Zhang Y.; Zhai J.; Hu W.; Wang Z. L. Ultrastretchable, Transparent Triboelectric Nanogenerator as Electronic Skin for Biomechanical Energy Harvesting and Tactile Sensing. Sci. Adv. 2017, 3, e1700015 10.1126/sciadv.1700015. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Liu T.; Liu M.; Dou S.; Sun J.; Cong Z.; Jiang C.; Du C.; Pu X.; Hu W.; Wang Z. L. Triboelectric-Nanogenerator-Based Soft Energy-Harvesting Skin Enabled by Toughly Bonded Elastomer/Hydrogel Hybrids. ACS Nano 2018, 12, 2818–2826. 10.1021/acsnano.8b00108. [DOI] [PubMed] [Google Scholar]
- Dong K.; Wu Z.; Deng J.; Wang A. C.; Zou H.; Chen C.; Hu D.; Gu B.; Sun B.; Wang Z. L. A Stretchable Yarn Embedded Triboelectric Nanogenerator as Electronic Skin for Biomechanical Energy Harvesting and Multifunctional Pressure Sensing. Adv. Mater. 2018, 30, 1804944 10.1002/adma.201804944. [DOI] [PubMed] [Google Scholar]
- García Núñez C.; Manjakkal L.; Dahiya R. Energy Autonomous Electronic Skin. Npj Flex. Electron. 2019, 3, 1. 10.1038/s41528-018-0045-x. [DOI] [Google Scholar]
- Gong S.; Zhang B.; Zhang J.; Wang Z. L.; Ren K. Biocompatible Poly(Lactic Acid)-Based Hybrid Piezoelectric and Electret Nanogenerator for Electronic Skin Applications. Adv. Funct. Mater. 2020, 30, 1908724 10.1002/adfm.201908724. [DOI] [Google Scholar]
- Zarei M.; Lee G.; Lee S. G.; Cho K. Advances in Biodegradable Electronic Skin: Material Progress and Recent Applications in Sensing, Robotics, and Human–Machine Interfaces. Adv. Mater. 2023, 35, 2203193 10.1002/adma.202203193. [DOI] [PubMed] [Google Scholar]
- Yuan Z.; Shen G. Materials and Device Architecture towards a Multimodal Electronic Skin. Mater. Today 2023, 64, 165–179. 10.1016/j.mattod.2023.02.023. [DOI] [Google Scholar]
- Zhang Q.; Jiang T.; Ho D.; Qin S.; Yang X.; Cho J. H.; Sun Q.; Wang Z. L. Transparent and Self-Powered Multistage Sensation Matrix for Mechanosensation Application. ACS Nano 2018, 12, 254–262. 10.1021/acsnano.7b06126. [DOI] [PubMed] [Google Scholar]
- Wu J.; Fan X.; Liu X.; Ji X.; Shi X.; Wu W.; Yue Z.; Liang J. Highly Sensitive Temperature–Pressure Bimodal Aerogel with Stimulus Discriminability for Human Physiological Monitoring. Nano Lett. 2022, 22, 4459–4467. 10.1021/acs.nanolett.2c01145. [DOI] [PubMed] [Google Scholar]
- Xu J.; Sun X.; Sun B.; Zhu H.; Fan X.; Guo Q.; Li Y.; Zhu Z.; Qian K. Stretchable, Adhesive, and Bioinspired Visual Electronic Skin with Strain/Temperature/Pressure Multimodal Non-Interference Sensing. ACS Appl. Mater. Interfaces 2023, 15, 33774–33783. 10.1021/acsami.3c07857. [DOI] [PubMed] [Google Scholar]
- Jeon S.; Lim S.-C.; Trung T. Q.; Jung M.; Lee N.-E. Flexible Multimodal Sensors for Electronic Skin: Principle, Materials, Device, Array Architecture, and Data Acquisition Method. Proc. IEEE 2019, 107, 2065–2083. 10.1109/JPROC.2019.2930808. [DOI] [Google Scholar]
- Kim S. Y.; Park S.; Park H. W.; Park D. H.; Jeong Y.; Kim D. H. Highly Sensitive and Multimodal All-Carbon Skin Sensors Capable of Simultaneously Detecting Tactile and Biological Stimuli. Adv. Mater. 2015, 27, 4178–4185. 10.1002/adma.201501408. [DOI] [PubMed] [Google Scholar]
- Ho D. H.; Sun Q.; Kim S. Y.; Han J. T.; Kim D. H.; Cho J. H. Stretchable and Multimodal All Graphene Electronic Skin. Adv. Mater. 2016, 28, 2601–2608. 10.1002/adma.201505739. [DOI] [PubMed] [Google Scholar]
- You I.; Mackanic D. G.; Matsuhisa N.; Kang J.; Kwon J.; Beker L.; Mun J.; Suh W.; Kim T. Y.; Tok J. B.-H.; et al. Artificial Multimodal Receptors Based on Ion Relaxation Dynamics. Science 2020, 370, 961–965. 10.1126/science.aba5132. [DOI] [PubMed] [Google Scholar]
- Benight S. J.; Wang C.; Tok J. B. H.; Bao Z. Stretchable and Self-Healing Polymers and Devices for Electronic Skin. Prog. Polym. Sci. 2013, 38, 1961–1977. 10.1016/j.progpolymsci.2013.08.001. [DOI] [Google Scholar]
- Diesendruck C. E.; Sottos N. R.; Moore J. S.; White S. R. Biomimetic Self-Healing. Angew. Chem., Int. Ed. 2015, 54, 10428–10447. 10.1002/anie.201500484. [DOI] [PubMed] [Google Scholar]
- Wang S.; Urban M. W. Self-Healing Polymers. Nat. Rev. Mater. 2020, 5, 562–583. 10.1038/s41578-020-0202-4. [DOI] [Google Scholar]
- Kang J.; Tok J. B.-H.; Bao Z. Self-Healing Soft Electronics. Nat. Electron. 2019, 2, 144–150. 10.1038/s41928-019-0235-0. [DOI] [Google Scholar]
- Khatib M.; Zohar O.; Haick H. Self-Healing Soft Sensors: From Material Design to Implementation. Adv. Mater. 2021, 33, 2004190 10.1002/adma.202004190. [DOI] [PubMed] [Google Scholar]
- Liu R.; Lai Y.; Li S.; Wu F.; Shao J.; Liu D.; Dong X.; Wang J.; Wang Z. L. Ultrathin, Transparent, and Robust Self-Healing Electronic Skins for Tactile and Non-Contact Sensing. Nano Energy 2022, 95, 107056 10.1016/j.nanoen.2022.107056. [DOI] [Google Scholar]
- Hou K.-X.; Zhao S.-P.; Wang D.-P.; Zhao P.-C.; Li C.-H.; Zuo J.-L. A Puncture-Resistant and Self-Healing Conductive Gel for Multifunctional Electronic Skin. Adv. Funct. Mater. 2021, 31, 2107006 10.1002/adfm.202107006. [DOI] [Google Scholar]
- Cooper C. B.; Root S. E.; Michalek L.; Wu S.; Lai J.-C.; Khatib M.; Oyakhire S. T.; Zhao R.; Qin J.; Bao Z. Autonomous Alignment and Healing in Multilayer Soft Electronics Using Immiscible Dynamic Polymers. Science 2023, 380, 935–941. 10.1126/science.adh0619. [DOI] [PubMed] [Google Scholar]
- Yang Y.; Cui T.; Li D.; Ji S.; Chen Z.; Shao W.; Liu H.; Ren T.-L. Breathable Electronic Skins for Daily Physiological Signal Monitoring. Nano-Micro Lett. 2022, 14, 161. 10.1007/s40820-022-00911-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ershad F.; Thukral A.; Yue J.; Comeaux P.; Lu Y.; Shim H.; Sim K.; Kim N.-I.; Rao Z.; Guevara R.; et al. Ultra-Conformal Drawn-on-Skin Electronics for Multifunctional Motion Artifact-Free Sensing and Point-of-Care Treatment. Nat. Commun. 2020, 11, 3823. 10.1038/s41467-020-17619-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kaltenbrunner M.; Sekitani T.; Reeder J.; Yokota T.; Kuribara K.; Tokuhara T.; Drack M.; Schwödiauer R.; Graz I.; Bauer-Gogonea S.; et al. An Ultra-Lightweight Design for Imperceptible Plastic Electronics. Nature 2013, 499, 458–463. 10.1038/nature12314. [DOI] [PubMed] [Google Scholar]
- Miyamoto A.; Lee S.; Cooray N. F.; Lee S.; Mori M.; Matsuhisa N.; Jin H.; Yoda L.; Yokota T.; Itoh A.; et al. Inflammation-Free, Gas-Permeable, Lightweight, Stretchable on-Skin Electronics with Nanomeshes. Nat. Nanotechnol. 2017, 12, 907–913. 10.1038/nnano.2017.125. [DOI] [PubMed] [Google Scholar]
- Wang J.; Lee S.; Yokota T.; Jimbo Y.; Wang Y.; Nayeem M. O. G.; Nishinaka M.; Someya T. Nanomesh Organic Electrochemical Transistor for Comfortable On-Skin Electrodes with Local Amplifying Function. ACS Appl. Electron. Mater. 2020, 2, 3601–3609. 10.1021/acsaelm.0c00668. [DOI] [Google Scholar]
- Gwon G.; Choi H.; Bae J.; Zulkifli N. A. B.; Jeong W.; Yoo S.; Hyun D. C.; Lee S. An All-Nanofiber-Based Substrate-Less, Extremely Conformal, and Breathable Organic Field Effect Transistor for Biomedical Applications. Adv. Funct. Mater. 2022, 32, 2204645 10.1002/adfm.202204645. [DOI] [Google Scholar]
- Zhao Y.; Zhang S.; Yu T.; Zhang Y.; Ye G.; Cui H.; He C.; Jiang W.; Zhai Y.; Lu C.; et al. Ultra-Conformal Skin Electrodes with Synergistically Enhanced Conductivity for Long-Time and Low-Motion Artifact Epidermal Electrophysiology. Nat. Commun. 2021, 12, 4880. 10.1038/s41467-021-25152-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Liu Y.; Yiu C.; Song Z.; Huang Y.; Yao K.; Wong T.; Zhou J.; Zhao L.; Huang X.; Nejad S. K.; et al. Electronic Skin as Wireless Human-Machine Interfaces for Robotic VR. Sci. Adv. 2022, 8, eabl6700 10.1126/sciadv.abl6700. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kwon Y.-T.; Kim Y.-S.; Kwon S.; Mahmood M.; Lim H.-R.; Park S.-W.; Kang S.-O.; Choi J. J.; Herbert R.; Jang Y. C.; et al. All-Printed Nanomembrane Wireless Bioelectronics Using a Biocompatible Solderable Graphene for Multimodal Human-Machine Interfaces. Nat. Commun. 2020, 11, 3450. 10.1038/s41467-020-17288-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chen M.; Ouyang J.; Jian A.; Liu J.; Li P.; Hao Y.; Gong Y.; Hu J.; Zhou J.; Wang R.; et al. Imperceptible, Designable, and Scalable Braided Electronic Cord. Nat. Commun. 2022, 13, 7097. 10.1038/s41467-022-34918-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Liu Y.; Shen Y.; Ding W.; Zhang X.; Tian W.; Yang S.; Hui B.; Zhang K. All-Natural Phyllosilicate-Polysaccharide Triboelectric Sensor for Machine Learning-Assisted Human Motion Prediction. Npj Flex. Electron. 2023, 7, 21. 10.1038/s41528-023-00254-3. [DOI] [Google Scholar]
- Wei X.; Li H.; Yue W.; Gao S.; Chen Z.; Li Y.; Shen G. A High-Accuracy, Real-Time, Intelligent Material Perception System with a Machine-Learning-Motivated Pressure-Sensitive Electronic Skin. Matter 2022, 5, 1481–1501. 10.1016/j.matt.2022.02.016. [DOI] [Google Scholar]
- Bai N.; Wang L.; Xue Y.; Wang Y.; Hou X.; Li G.; Zhang Y.; Cai M.; Zhao L.; Guan F.; et al. Graded Interlocks for Iontronic Pressure Sensors with High Sensitivity and High Linearity over a Broad Range. ACS Nano 2022, 16, 4338–4347. 10.1021/acsnano.1c10535. [DOI] [PubMed] [Google Scholar]
- Xu J.; Li X.; Chang H.; Zhao B.; Tan X.; Yang Y.; Tian H.; Zhang S.; Ren T.-L. Electrooculography and Tactile Perception Collaborative Interface for 3D Human–Machine Interaction. ACS Nano 2022, 16, 6687–6699. 10.1021/acsnano.2c01310. [DOI] [PubMed] [Google Scholar]
- Tan P.; Han X.; Zou Y.; Qu X.; Xue J.; Li T.; Wang Y.; Luo R.; Cui X.; Xi Y.; et al. Self-Powered Gesture Recognition Wristband Enabled by Machine Learning for Full Keyboard and Multicommand Input. Adv. Mater. 2022, 34, 2200793 10.1002/adma.202200793. [DOI] [PubMed] [Google Scholar]
- Shu S.; Wang Z.; Chen P.; Zhong J.; Tang W.; Wang Z. L. Machine-Learning Assisted Electronic Skins Capable of Proprioception and Exteroception in Soft Robotics. Adv. Mater. 2023, 35, 2211385 10.1002/adma.202211385. [DOI] [PubMed] [Google Scholar]
- Bhatta T.; Sharma S.; Shrestha K.; Shin Y.; Seonu S.; Lee S.; Kim D.; Sharifuzzaman M.; Rana S. S.; Park J. Y. Siloxene/PVDF Composite Nanofibrous Membrane for High-Performance Triboelectric Nanogenerator and Self-Powered Static and Dynamic Pressure Sensing Applications. Adv. Funct. Mater. 2022, 32, 2202145 10.1002/adfm.202202145. [DOI] [Google Scholar]
- Zhang Y.; Tao T. H. Skin-Friendly Electronics for Acquiring Human Physiological Signatures. Adv. Mater. 2019, 31, 1905767 10.1002/adma.201905767. [DOI] [PubMed] [Google Scholar]
- Sun Z.; Zhu M.; Shan X.; Lee C. Augmented Tactile-Perception and Haptic-Feedback Rings as Human-Machine Interfaces Aiming for Immersive Interactions. Nat. Commun. 2022, 13, 5224. 10.1038/s41467-022-32745-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Luo Y.; Xiao X.; Chen J.; Li Q.; Fu H. Machine-Learning-Assisted Recognition on Bioinspired Soft Sensor Arrays. ACS Nano 2022, 16, 6734–6743. 10.1021/acsnano.2c01548. [DOI] [PubMed] [Google Scholar]
- Wu X.; Luo X.; Song Z.; Bai Y.; Zhang B.; Zhang G. Ultra-Robust and Sensitive Flexible Strain Sensor for Real-Time and Wearable Sign Language Translation. Adv. Funct. Mater. 2023, 33, 2303504 10.1002/adfm.202303504. [DOI] [Google Scholar]
- Ramírez J.; Rodriquez D.; Qiao F.; Warchall J.; Rye J.; Aklile E.; Chiang A. S.-C.; Marin B. C.; Mercier P. P.; Cheng C. K.; et al. Metallic Nanoislands on Graphene for Monitoring Swallowing Activity in Head and Neck Cancer Patients. ACS Nano 2018, 12, 5913–5922. 10.1021/acsnano.8b02133. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lin S.; Hu S.; Song W.; Gu M.; Liu J.; Song J.; Liu Z.; Li Z.; Huang K.; Wu Y.; et al. An Ultralight, Flexible, and Biocompatible All-Fiber Motion Sensor for Artificial Intelligence Wearable Electronics. Npj Flex. Electron. 2022, 6, 27. 10.1038/s41528-022-00158-8. [DOI] [Google Scholar]
- Yang X.; Zhang M.; Gong S.; Sun M.; Xie M.; Niu P.; Pang W. Chest-Laminated and Sweat-Permeable E-Skin for Speech and Motion Artifact-Insensitive Cough Detection. Adv. Mater. Technol. 2023, 8, 2201043 10.1002/admt.202201043. [DOI] [Google Scholar]
- Chen X.; Zhang D.; Luan H.; Yang C.; Yan W.; Liu W. Flexible Pressure Sensors Based on Molybdenum Disulfide/Hydroxyethyl Cellulose/Polyurethane Sponge for Motion Detection and Speech Recognition Using Machine Learning. ACS Appl. Mater. Interfaces 2023, 15, 2043–2053. 10.1021/acsami.2c16730. [DOI] [PubMed] [Google Scholar]
- Li T.; Su Y.; Zheng H.; Chen F.; Li X.; Tan Y.; Zhou Z. An Artificial Intelligence-Motivated Skin-Like Optical Fiber Tactile Sensor. Adv. Intell. Syst. 2023, 5, 2200460 10.1002/aisy.202200460. [DOI] [Google Scholar]
- Suo J.; Liu Y.; Wu C.; Chen M.; Huang Q.; Liu Y.; Yao K.; Chen Y.; Pan Q.; Chang X.; et al. Wide-Bandwidth Nanocomposite-Sensor Integrated Smart Mask for Tracking Multiphase Respiratory Activities. Adv. Sci. 2022, 9, 2203565 10.1002/advs.202203565. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zhang X.-Y.; Liu H.; Ma X.-Y.; Wang Z.-C.; Li G.-P.; Han L.; Sun K.; Yang Q.-S.; Ji S.-R.; Yu D.-L.; et al. Deep Learning Enabled High-Performance Speech Command Recognition on Graphene Flexible Microphones. ACS Appl. Electron. Mater. 2022, 4, 2306–2312. 10.1021/acsaelm.2c00125. [DOI] [Google Scholar]
- Sundaram S.; Kellnhofer P.; Li Y.; Zhu J.-Y.; Torralba A.; Matusik W. Learning the Signatures of the Human Grasp Using a Scalable Tactile Glove. Nature 2019, 569, 698–702. 10.1038/s41586-019-1234-z. [DOI] [PubMed] [Google Scholar]
- Yang C.; Wang H.; Yang J.; Yao H.; He T.; Bai J.; Guang T.; Cheng H.; Yan J.; Qu L. A Machine-Learning-Enhanced Simultaneous and Multimodal Sensor Based on Moist-Electric Powered Graphene Oxide. Adv. Mater. 2022, 34, 2205249. 10.1002/adma.202205249. [DOI] [PubMed] [Google Scholar]
- Lee G.; Son J. H.; Lee S.; Kim S. W.; Kim D.; Nguyen N. N.; Lee S. G.; Cho K. Fingerpad-Inspired Multimodal Electronic Skin for Material Discrimination and Texture Recognition. Adv. Sci. 2021, 8, 2002606 10.1002/advs.202002606. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Platt J. Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods. Adv. Large Margin Classif. 2000, 10, 61–74. 10.7551/mitpress/1113.003.0008. [DOI] [Google Scholar]
- Zhang Z. Introduction to Machine Learning: K-Nearest Neighbors. Ann. Transl. Med. 2016, 4, 218. 10.21037/atm.2016.03.37. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Balakrishnama S.; Ganapathiraju A. Linear Discriminant Analysis: A Brief Tutorial. Inst. Signal Inf. Process. 1998, 18, 1–8. [Google Scholar]
- Kohavi R.; Quinlan J. R.. Data Mining Tasks and Methods: Classification: Decision-Tree Discovery. Handbook of data mining and knowledge discovery; 2002; pp 267–276. [Google Scholar]
- Glorot X.; Bengio Y.. Understanding the Difficulty of Training Deep Feedforward Neural Networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics; JMLR Workshop and Conference Proceedings, 2010; pp 249–256.
- Xu B.; Chen D.; Wang Y.; Tang R.; Yang L.; Feng H.; Liu Y.; Wang Z.; Wang F.; Zhang T. Wireless and Flexible Tactile Sensing Array Based on an Adjustable Resonator with Machine-Learning Perception. Adv. Electron. Mater. 2023, 9, 2201334 10.1002/aelm.202201334. [DOI] [Google Scholar]
- Kim T.; Shin Y.; Kang K.; Kim K.; Kim G.; Byeon Y.; Kim H.; Gao Y.; Lee J. R.; Son G.; et al. Ultrathin Crystalline-Silicon-Based Strain Gauges with Deep Learning Algorithms for Silent Speech Interfaces. Nat. Commun. 2022, 13, 5815. 10.1038/s41467-022-33457-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cui Z.; Wang W.; Xia H.; Wang C.; Tu J.; Ji S.; Tan J. M. R.; Liu Z.; Zhang F.; Li W.; et al. Freestanding and Scalable Force-Softness Bimodal Sensor Arrays for Haptic Body-Feature Identification. Adv. Mater. 2022, 34, 2207016 10.1002/adma.202207016. [DOI] [PubMed] [Google Scholar]
- Heng W.; Pang G.; Xu F.; Huang X.; Pang Z.; Yang G. Flexible Insole Sensors with Stably Connected Electrodes for Gait Phase Detection. Sensors 2019, 19, 5197. 10.3390/s19235197. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zhang Q.; Wang Y.; Wang T.; Li D.; Xie J.; Torun H.; Fu Y. Piezoelectric Smart Patch Operated with Machine-Learning Algorithms for Effective Detection and Elimination of Condensation. ACS Sens. 2021, 6, 3072–3081. 10.1021/acssensors.1c01187. [DOI] [PubMed] [Google Scholar]
- Hou B.; Yi L.; Li C.; Zhao H.; Zhang R.; Zhou B.; Liu X. An Interactive Mouthguard Based on Mechanoluminescence-Powered Optical Fibre Sensors for Bite-Controlled Device Operation. Nat. Electron. 2022, 5, 682–693. 10.1038/s41928-022-00841-8. [DOI] [Google Scholar]
- Liu M.; Zhang Y.; Zhang Y.; Zhou Z.; Qin N.; Tao T. H. Robotic Manipulation under Harsh Conditions Using Self-Healing Silk-Based Iontronics. Adv. Sci. 2022, 9, 2102596 10.1002/advs.202102596. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Oh S.; Cho J.-I.; Lee B. H.; Seo S.; Lee J.-H.; Choo H.; Heo K.; Lee S. Y.; Park J.-H. Flexible Artificial Si-In-Zn-O/Ion Gel Synapse and Its Application to Sensory-Neuromorphic System for Sign Language Translation. Sci. Adv. 2021, 7, eabg9450 10.1126/sciadv.abg9450. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Liu M.; Zhang Y.; Wang J.; Qin N.; Yang H.; Sun K.; Hao J.; Shu L.; Liu J.; Chen Q.; et al. A Star-Nose-like Tactile-Olfactory Bionic Sensing Array for Robust Object Recognition in Non-Visual Environments. Nat. Commun. 2022, 13, 79. 10.1038/s41467-021-27672-z.
- Niu H.; Li H.; Gao S.; Li Y.; Wei X.; Chen Y.; Yue W.; Zhou W.; Shen G. Perception-to-Cognition Tactile Sensing Based on Artificial-Intelligence-Motivated Human Full-Skin Bionic Electronic Skin. Adv. Mater. 2022, 34, 2202622. 10.1002/adma.202202622.
- Yan Y.; Hu Z.; Yang Z.; Yuan W.; Song C.; Pan J.; Shen Y. Soft Magnetic Skin for Super-Resolution Tactile Sensing with Force Self-Decoupling. Sci. Robot. 2021, 6, eabc8801. 10.1126/scirobotics.abc8801.
- Lee J. H.; Kim S. H.; Heo J. S.; Kwak J. Y.; Park C. W.; Kim I.; Lee M.; Park H.-H.; Kim Y.-H.; Lee S. J.; et al. Heterogeneous Structure Omnidirectional Strain Sensor Arrays with Cognitively Learned Neural Networks. Adv. Mater. 2023, 35, 2208184. 10.1002/adma.202208184.
- Li Y.; Zhao M.; Yan Y.; He L.; Wang Y.; Xiong Z.; Wang S.; Bai Y.; Sun F.; Lu Q.; et al. Multifunctional Biomimetic Tactile System via a Stick-Slip Sensing Strategy for Human–Machine Interactions. Npj Flex. Electron. 2022, 6, 46. 10.1038/s41528-022-00183-7.
- Oren O.; Gersh B. J.; Bhatt D. L. Artificial Intelligence in Medical Imaging: Switching from Radiographic Pathological Data to Clinically Meaningful Endpoints. Lancet Digit. Health 2020, 2, e486–e488. 10.1016/S2589-7500(20)30160-6.
- Gu J.; Wang Z.; Kuen J.; Ma L.; Shahroudy A.; Shuai B.; Liu T.; Wang X.; Wang G.; Cai J.; et al. Recent Advances in Convolutional Neural Networks. Pattern Recognit. 2018, 77, 354–377. 10.1016/j.patcog.2017.10.013.
- O’Shea K.; Nash R. An Introduction to Convolutional Neural Networks. arXiv 2015, 1511.08458. 10.48550/arXiv.1511.08458.
- Sherstinsky A. Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) Network. Phys. Nonlinear Phenom. 2020, 404, 132306. 10.1016/j.physd.2019.132306.
- Canziani A.; Paszke A.; Culurciello E. An Analysis of Deep Neural Network Models for Practical Applications. arXiv 2016, 1605.07678. 10.48550/arXiv.1605.07678.
- Montavon G.; Samek W.; Müller K.-R. Methods for Interpreting and Understanding Deep Neural Networks. Digit. Signal Process. 2018, 73, 1–15. 10.1016/j.dsp.2017.10.011.
- Bau D.; Zhu J.-Y.; Strobelt H.; Lapedriza A.; Zhou B.; Torralba A. Understanding the Role of Individual Units in a Deep Neural Network. Proc. Natl. Acad. Sci. U. S. A. 2020, 117, 30071–30078. 10.1073/pnas.1907375117.
- Kiranyaz S.; Ince T.; Abdeljaber O.; Avci O.; Gabbouj M. 1-D Convolutional Neural Networks for Signal Processing Applications. In ICASSP 2019 – 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); IEEE, 2019; pp 8360–8364.
- Tan M.; Le Q. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In International Conference on Machine Learning; PMLR, 2019; pp 6105–6114.
- Tran D.; Bourdev L.; Fergus R.; Torresani L.; Paluri M. Learning Spatiotemporal Features with 3D Convolutional Networks. In Proceedings of the IEEE International Conference on Computer Vision; 2015; pp 4489–4497.
- Ji S.; Xu W.; Yang M.; Yu K. 3D Convolutional Neural Networks for Human Action Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 221–231. 10.1109/TPAMI.2012.59.
- Albawi S.; Mohammed T. A.; Al-Zawi S. Understanding of a Convolutional Neural Network. In 2017 International Conference on Engineering and Technology (ICET); IEEE, 2017; pp 1–6.
- Chen Y.; Fan H.; Xu B.; Yan Z.; Kalantidis Y.; Rohrbach M.; Yan S.; Feng J. Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision; 2019; pp 3435–3444.
- Luo Y.; Abidian M. R.; Ahn J.-H.; Akinwande D.; Andrews A. M.; Antonietti M.; Bao Z.; Berggren M.; Berkey C. A.; Bettinger C. J.; et al. Technology Roadmap for Flexible Sensors. ACS Nano 2023, 17, 5211–5295. 10.1021/acsnano.2c12606.
- Yang Q.; Jin W.; Zhang Q.; Wei Y.; Guo Z.; Li X.; Yang Y.; Luo Q.; Tian H.; Ren T.-L. Mixed-Modality Speech Recognition and Interaction Using a Wearable Artificial Throat. Nat. Mach. Intell. 2023, 5, 169–180. 10.1038/s42256-023-00616-6.
- Jung Y. H.; Pham T. X.; Issa D.; Wang H. S.; Lee J. H.; Chung M.; Lee B.-Y.; Kim G.; Yoo C. D.; Lee K. J. Deep Learning-Based Noise Robust Flexible Piezoelectric Acoustic Sensors for Speech Processing. Nano Energy 2022, 101, 107610. 10.1016/j.nanoen.2022.107610.
- Wen F.; Zhang Z.; He T.; Lee C. AI Enabled Sign Language Recognition and VR Space Bidirectional Communication Using Triboelectric Smart Glove. Nat. Commun. 2021, 12, 5378. 10.1038/s41467-021-25637-w.
- Liu X.; Faes L.; Kale A. U.; Wagner S. K.; Fu D. J.; Bruynseels A.; Mahendiran T.; Moraes G.; Shamdas M.; Kern C.; et al. A Comparison of Deep Learning Performance against Health-Care Professionals in Detecting Diseases from Medical Imaging: A Systematic Review and Meta-Analysis. Lancet Digit. Health 2019, 1, e271–e297. 10.1016/S2589-7500(19)30123-2.
- Yosinski J.; Clune J.; Bengio Y.; Lipson H. How Transferable Are Features in Deep Neural Networks? Adv. Neural Inf. Process. Syst. 2014, 27.
- Pascanu R.; Mikolov T.; Bengio Y. On the Difficulty of Training Recurrent Neural Networks. In International Conference on Machine Learning; PMLR, 2013; pp 1310–1318.
- Yu Y.; Si X.; Hu C.; Zhang J. A Review of Recurrent Neural Networks: LSTM Cells and Network Architectures. Neural Comput. 2019, 31, 1235–1270. 10.1162/neco_a_01199.
- Graves A.; Mohamed A.; Hinton G. Speech Recognition with Deep Recurrent Neural Networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing; IEEE, 2013; pp 6645–6649.
- Lipton Z. C.; Berkowitz J.; Elkan C. A Critical Review of Recurrent Neural Networks for Sequence Learning. arXiv 2015, 1506.00019. 10.48550/arXiv.1506.00019.
- Chung J.; Gulcehre C.; Cho K.; Bengio Y. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. arXiv 2014, 1412.3555. 10.48550/arXiv.1412.3555.
- Lipton Z. C.; Kale D. C.; Elkan C.; Wetzel R. Learning to Diagnose with LSTM Recurrent Neural Networks. arXiv 2015, 1511.03677. 10.48550/arXiv.1511.03677.
- Gers F. A.; Schmidhuber J.; Cummins F. Learning to Forget: Continual Prediction with LSTM. Neural Comput. 2000, 12, 2451–2471. 10.1162/089976600300015015.
- Lu Y.; Tian H.; Cheng J.; Zhu F.; Liu B.; Wei S.; Ji L.; Wang Z. L. Decoding Lip Language Using Triboelectric Sensors with Deep Learning. Nat. Commun. 2022, 13, 1401. 10.1038/s41467-022-29083-0.
- Zhang Y.; Mao J.; Zheng R.-K.; Zhang J.; Wu Y.; Wang X.; Miao K.; Yao H.; Yang L.; Zheng H. Ferroelectric Polarization-Enhanced Performance of Flexible CuInP2S6 Piezoelectric Nanogenerator for Biomechanical Energy Harvesting and Voice Recognition Applications. Adv. Funct. Mater. 2023, 33, 2214745. 10.1002/adfm.202214745.
- Yuan R.; Tiw P. J.; Cai L.; Yang Z.; Liu C.; Zhang T.; Ge C.; Huang R.; Yang Y. A Neuromorphic Physiological Signal Processing System Based on VO2 Memristor for Next-Generation Human-Machine Interface. Nat. Commun. 2023, 14, 3695. 10.1038/s41467-023-39430-4.
- Ge C.; An X.; He X.; Duan Z.; Chen J.; Hu P.; Zhao J.; Wang Z.; Zhang J. Integrated Multifunctional Electronic Skins with Low-Coupling for Complicated and Accurate Human–Robot Collaboration. Adv. Sci. 2023, 10, 2301341. 10.1002/advs.202301341.
- Dai C.; Ye C.; Ren J.; Yang S.; Cao L.; Yu H.; Liu S.; Shao Z.; Li J.; Chen W.; et al. Humanoid Ionotronic Skin for Smart Object Recognition and Sorting. ACS Mater. Lett. 2023, 5, 189–201. 10.1021/acsmaterialslett.2c00783.
- Ye C.; Yang S.; Ren J.; Dong S.; Cao L.; Pei Y.; Ling S. Electroassisted Core-Spun Triboelectric Nanogenerator Fabrics for IntelliSense and Artificial Intelligence Perception. ACS Nano 2022, 16, 4415–4425. 10.1021/acsnano.1c10680.
- Cao L.; Ye C.; Zhang H.; Yang S.; Shan Y.; Lv Z.; Ren J.; Ling S. An Artificial Motion and Tactile Receptor Constructed by Hyperelastic Double Physically Cross-Linked Silk Fibroin Ionoelastomer. Adv. Funct. Mater. 2023, 33, 2301404. 10.1002/adfm.202301404.
- Yun G.; Cole T.; Zhang Y.; Zheng J.; Sun S.; Ou-Yang Y.; Shu J.; Lu H.; Zhang Q.; Wang Y.; et al. Electro-Mechano Responsive Elastomers with Self-Tunable Conductivity and Stiffness. Sci. Adv. 2023, 9, eadf1141. 10.1126/sciadv.adf1141.
- Roy K.; Jaiswal A.; Panda P. Towards Spike-Based Machine Intelligence with Neuromorphic Computing. Nature 2019, 575, 607–617. 10.1038/s41586-019-1677-2.
- Lee J. H.; Delbruck T.; Pfeiffer M. Training Deep Spiking Neural Networks Using Backpropagation. Front. Neurosci. 2016, 10, 508. 10.3389/fnins.2016.00508.
- Eshraghian J. K.; Ward M.; Neftci E. O.; Wang X.; Lenz G.; Dwivedi G.; Bennamoun M.; Jeong D. S.; Lu W. D. Training Spiking Neural Networks Using Lessons from Deep Learning. Proc. IEEE 2023, 111, 1016. 10.1109/JPROC.2023.3308088.
- Lee J.; Kim S.; Park S.; Lee J.; Hwang W.; Cho S. W.; Lee K.; Kim S. M.; Seong T.-Y.; Park C.; et al. An Artificial Tactile Neuron Enabling Spiking Representation of Stiffness and Disease Diagnosis. Adv. Mater. 2022, 34, 2201608. 10.1002/adma.202201608.
- Vaswani A.; Shazeer N.; Parmar N.; Uszkoreit J.; Jones L.; Gomez A. N.; Kaiser L.; Polosukhin I. Attention Is All You Need. Adv. Neural Inf. Process. Syst. 2017, 30.
- Huang C.-Z. A.; Vaswani A.; Uszkoreit J.; Shazeer N.; Simon I.; Hawthorne C.; Dai A. M.; Hoffman M. D.; Dinculescu M.; Eck D. Music Transformer. arXiv 2018, 1809.04281. 10.48550/arXiv.1809.04281.
- He K.; Chen X.; Xie S.; Li Y.; Dollár P.; Girshick R. Masked Autoencoders Are Scalable Vision Learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2022; pp 16000–16009.
- Liu Z.; Ning J.; Cao Y.; Wei Y.; Zhang Z.; Lin S.; Hu H. Video Swin Transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2022; pp 3202–3211.
- Jumper J.; Evans R.; Pritzel A.; Green T.; Figurnov M.; Ronneberger O.; Tunyasuvunakool K.; Bates R.; Žídek A.; Potapenko A.; et al. Highly Accurate Protein Structure Prediction with AlphaFold. Nature 2021, 596, 583–589. 10.1038/s41586-021-03819-2.
- Karita S.; Chen N.; Hayashi T.; Hori T.; Inaguma H.; Jiang Z.; Someki M.; Soplin N. E. Y.; Yamamoto R.; Wang X.; et al. A Comparative Study on Transformer vs RNN in Speech Applications. In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU); IEEE, 2019; pp 449–456.
- Beltagy I.; Peters M. E.; Cohan A. Longformer: The Long-Document Transformer. arXiv 2020, 2004.05150. 10.48550/arXiv.2004.05150.
- Sutskever I.; Vinyals O.; Le Q. V. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems; Curran Associates, Inc., 2014; Vol. 27.
- Devlin J.; Chang M.-W.; Lee K.; Toutanova K. BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. arXiv 2018, 1810.04805. 10.48550/arXiv.1810.04805.
- Lewis M.; Liu Y.; Goyal N.; Ghazvininejad M.; Mohamed A.; Levy O.; Stoyanov V.; Zettlemoyer L. BART: Denoising Sequence-to-Sequence Pre-Training for Natural Language Generation, Translation, and Comprehension. arXiv 2019, 1910.13461. 10.48550/arXiv.1910.13461.
- Brown T.; Mann B.; Ryder N.; Subbiah M.; Kaplan J. D.; Dhariwal P.; Neelakantan A.; Shyam P.; Sastry G.; Askell A.; et al. Language Models Are Few-Shot Learners. Adv. Neural Inf. Process. Syst. 2020, 33, 1877–1901.
- Liu Y.; Han T.; Ma S.; Zhang J.; Yang Y.; Tian J.; He H.; Li A.; He M.; Liu Z. Summary of ChatGPT/GPT-4 Research and Perspective towards the Future of Large Language Models. arXiv 2023, 2304.01852. 10.48550/arXiv.2304.01852.
- Huang R.; Li M.; Yang D.; Shi J.; Chang X.; Ye Z.; Wu Y.; Hong Z.; Huang J.; Liu J. AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head. arXiv 2023, 2304.12995. 10.48550/arXiv.2304.12995.
- Zhao Z.; Tang J.; Yuan J.; Li Y.; Dai Y.; Yao J.; Zhang Q.; Ding S.; Li T.; Zhang R.; et al. Large-Scale Integrated Flexible Tactile Sensor Array for Sensitive Smart Robotic Touch. ACS Nano 2022, 16, 16784–16795. 10.1021/acsnano.2c06432.
- Kinnamon S. C. Taste Receptor Signalling – from Tongues to Lungs. Acta Physiol. 2012, 204, 158–168. 10.1111/j.1748-1716.2011.02308.x.
- Úlehlová L.; Voldřich L.; Janisch R. Correlative Study of Sensory Cell Density and Cochlear Length in Humans. Hear. Res. 1987, 28, 149–151. 10.1016/0378-5955(87)90045-1.
- Johansson R. S.; Vallbo A. B. Tactile Sensibility in the Human Hand: Relative and Absolute Densities of Four Types of Mechanoreceptive Units in Glabrous Skin. J. Physiol. 1979, 286, 283–300. 10.1113/jphysiol.1979.sp012619.
- Sharma A.; Kumar R.; Aier I.; Semwal R.; Tyagi P.; Varadwaj P. Sense of Smell: Structural, Functional, Mechanistic Advancements and Challenges in Human Olfactory Research. Curr. Neuropharmacol. 2019, 17, 891–911. 10.2174/1570159X17666181206095626.
- Jonas J. B.; Schneider U.; Naumann G. O. H. Count and Density of Human Retinal Photoreceptors. Graefes Arch. Clin. Exp. Ophthalmol. 1992, 230, 505–510. 10.1007/BF00181769.
- Pakkenberg B.; Pelvig D.; Marner L.; Bundgaard M. J.; Gundersen H. J. G.; Nyengaard J. R.; Regeur L. Aging and the Human Neocortex. Exp. Gerontol. 2003, 38, 95–99. 10.1016/S0531-5565(02)00151-1.
- Llama 2 - Meta AI. https://ai.meta.com/llama/ (accessed 2023-12-14).
- Ye J.; Chen X.; Xu N.; Zu C.; Shao Z.; Liu S.; Cui Y.; Zhou Z.; Gong C.; Shen Y. A Comprehensive Capability Analysis of GPT-3 and GPT-3.5 Series Models. arXiv 2023, 2303.10420. 10.48550/arXiv.2303.10420.
- He K.; Zhang X.; Ren S.; Sun J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Las Vegas, USA, 2016; pp 770–778.
- Wang Z.; Sun Z.; Yin H.; Liu X.; Wang J.; Zhao H.; Pang C. H.; Wu T.; Li S.; Yin Z.; et al. Data-Driven Materials Innovation and Applications. Adv. Mater. 2022, 34, 2104113. 10.1002/adma.202104113.
- Li J.; Lim K.; Yang H.; Ren Z.; Raghavan S.; Chen P.-Y.; Buonassisi T.; Wang X. AI Applications through the Whole Life Cycle of Material Discovery. Matter 2020, 3, 393–432. 10.1016/j.matt.2020.06.011.
- Menon D.; Ranganathan R. A Generative Approach to Materials Discovery, Design, and Optimization. ACS Omega 2022, 7, 25958–25973. 10.1021/acsomega.2c03264.
- Pankajakshan P.; Sanyal S.; De Noord O. E.; Bhattacharya I.; Bhattacharyya A.; Waghmare U. Machine Learning and Statistical Analysis for Materials Science: Stability and Transferability of Fingerprint Descriptors and Chemical Insights. Chem. Mater. 2017, 29, 4190–4201. 10.1021/acs.chemmater.6b04229.
- Wigh D. S.; Goodman J. M.; Lapkin A. A. A Review of Molecular Representation in the Age of Machine Learning. WIREs Comput. Mol. Sci. 2022, 12, e1603. 10.1002/wcms.1603.
- Batra R.; Song L.; Ramprasad R. Emerging Materials Intelligence Ecosystems Propelled by Machine Learning. Nat. Rev. Mater. 2021, 6, 655–678. 10.1038/s41578-020-00255-y.
- Garg R.; Patra N. R.; Samal S.; Babbar S.; Parida K. A Review on Accelerated Development of Skin-like MXene Electrodes: From Experimental to Machine Learning. Nanoscale 2023, 15, 8110–8133. 10.1039/D2NR05969J.
- Zhao S.; Zhao Y.; Li C.; Wang W.; Liu H.; Cui L.; Li X.; Yang Z.; Zhang A.; Wang Y.; et al. Aramid Nanodielectrics for Ultraconformal Transparent Electronic Skins. Adv. Mater. 2024, 36, 2305479. 10.1002/adma.202305479.
- Wu S.; Kondo Y.; Kakimoto M.; Yang B.; Yamada H.; Kuwajima I.; Lambard G.; Hongo K.; Xu Y.; Shiomi J.; et al. Machine-Learning-Assisted Discovery of Polymers with High Thermal Conductivity Using a Molecular Design Algorithm. Npj Comput. Mater. 2019, 5, 66. 10.1038/s41524-019-0203-2.
- Chen T.; Pang Z.; He S.; Li Y.; Shrestha S.; Little J. M.; Yang H.; Chung T.-C.; Sun J.; Whitley H. C. Machine Intelligence-Accelerated Discovery of All-Natural Plastic Substitutes. Nat. Nanotechnol. 2024, 19, 782. 10.1038/s41565-024-01635-z.
- Huang X.; Zeng Z.; Fan Z.; Liu J.; Zhang H. Graphene-Based Electrodes. Adv. Mater. 2012, 24, 5979–6004. 10.1002/adma.201201587.
- Cheng T.; Zhang Y.-Z.; Zhang J.-D.; Lai W.-Y.; Huang W. High-Performance Free-Standing PEDOT:PSS Electrodes for Flexible and Transparent All-Solid-State Supercapacitors. J. Mater. Chem. A 2016, 4, 10493–10499. 10.1039/C6TA03537J.
- Dickey M. D. Stretchable and Soft Electronics Using Liquid Metals. Adv. Mater. 2017, 29, 1606425. 10.1002/adma.201606425.
- Tee B. C.-K.; Wang C.; Allen R.; Bao Z. An Electrically and Mechanically Self-Healing Composite with Pressure- and Flexion-Sensitive Properties for Electronic Skin Applications. Nat. Nanotechnol. 2012, 7, 825–832. 10.1038/nnano.2012.192.
- Cheng Z.; Feng W.; Zhang Y.; Sun L.; Liu Y.; Chen L.; Wang C. A Highly Robust Amphibious Soft Robot with Imperceptibility Based on a Water-Stable and Self-Healing Ionic Conductor. Adv. Mater. 2023, 35, 2301005. 10.1002/adma.202301005.
- Wu H.; Kong D.; Ruan Z.; Hsu P.-C.; Wang S.; Yu Z.; Carney T. J.; Hu L.; Fan S.; Cui Y. A Transparent Electrode Based on a Metal Nanotrough Network. Nat. Nanotechnol. 2013, 8, 421–425. 10.1038/nnano.2013.84.
- Lee S.; Song Y.; Ko Y.; Ko Y.; Ko J.; Kwon C. H.; Huh J.; Kim S.; Yeom B.; Cho J. A Metal-Like Conductive Elastomer with a Hierarchical Wrinkled Structure. Adv. Mater. 2020, 32, 1906460. 10.1002/adma.201906460.
- Yan Z.; Pan T.; Wang D.; Li J.; Jin L.; Huang L.; Jiang J.; Qi Z.; Zhang H.; Gao M.; et al. Stretchable Micromotion Sensor with Enhanced Sensitivity Using Serpentine Layout. ACS Appl. Mater. Interfaces 2019, 11, 12261–12271. 10.1021/acsami.8b22613.
- Chang T.-H.; Zhang T.; Yang H.; Li K.; Tian Y.; Lee J. Y.; Chen P.-Y. Controlled Crumpling of Two-Dimensional Titanium Carbide (MXene) for Highly Stretchable, Bendable, Efficient Supercapacitors. ACS Nano 2018, 12, 8048–8059. 10.1021/acsnano.8b02908.
- Zhang M.; Li J.; Kang L.; Zhang N.; Huang C.; He Y.; Hu M.; Zhou X.; Zhang J. Machine Learning-Guided Design and Development of Multifunctional Flexible Ag/Poly (Amic Acid) Composites Using the Differential Evolution Algorithm. Nanoscale 2020, 12, 3988–3996. 10.1039/C9NR09146G.
- Muralt P.; Polcawich R. G.; Trolier-McKinstry S. Piezoelectric Thin Films for Sensors, Actuators, and Energy Harvesting. MRS Bull. 2009, 34, 658–664. 10.1557/mrs2009.177.
- Xu Q.; Gao X.; Zhao S.; Liu Y.; Zhang D.; Zhou K.; Khanbareh H.; Chen W.; Zhang Y.; Bowen C. Construction of Bio-Piezoelectric Platforms: From Structures and Synthesis to Applications. Adv. Mater. 2021, 33, 2008452. 10.1002/adma.202008452.
- Yuan R.; Liu Z.; Balachandran P. V.; Xue D.; Zhou Y.; Ding X.; Sun J.; Xue D.; Lookman T. Accelerated Discovery of Large Electrostrains in BaTiO3-Based Piezoelectrics Using Active Learning. Adv. Mater. 2018, 30, 1702884. 10.1002/adma.201702884.
- Li W.; Yang T.; Liu C.; Huang Y.; Chen C.; Pan H.; Xie G.; Tai H.; Jiang Y.; Wu Y.; et al. Optimizing Piezoelectric Nanocomposites by High-Throughput Phase-Field Simulation and Machine Learning. Adv. Sci. 2022, 9, 2105550. 10.1002/advs.202105550.
- Tan Y. J.; Susanto G. J.; Anwar Ali H. P.; Tee B. C. K. Progress and Roadmap for Intelligent Self-Healing Materials in Autonomous Robotics. Adv. Mater. 2021, 33, 2002800. 10.1002/adma.202002800.
- Hwang S.-W.; Song J.-K.; Huang X.; Cheng H.; Kang S.-K.; Kim B. H.; Kim J.-H.; Yu S.; Huang Y.; Rogers J. A. High-Performance Biodegradable/Transient Electronics on Biodegradable Polymers. Adv. Mater. 2014, 26, 3905–3911. 10.1002/adma.201306050.
- Anwar Ali H. P.; Zhao Z.; Tan Y. J.; Yao W.; Li Q.; Tee B. C. K. Dynamic Modeling of Intrinsic Self-Healing Polymers Using Deep Learning. ACS Appl. Mater. Interfaces 2022, 14, 52486–52498. 10.1021/acsami.2c14543.
- Cho S.-Y.; Jung H.-T. Artificial Intelligence: A Game Changer in Sensor Research. ACS Sens. 2023, 8, 1371–1372. 10.1021/acssensors.3c00589.
- Ma C.; Li G.; Qin L.; Huang W.; Zhang H.; Liu W.; Dong T.; Li S.-T. Analytical Model of Micropyramidal Capacitive Pressure Sensors and Machine-Learning-Assisted Design. Adv. Mater. Technol. 2021, 6, 2100634. 10.1002/admt.202100634.
- Scheibert J.; Leurent S.; Prevost A.; Debrégeas G. The Role of Fingerprints in the Coding of Tactile Information Probed with a Biomimetic Sensor. Science 2009, 323, 1503–1506. 10.1126/science.1166467.
- Park J.; Kim M.; Lee Y.; Lee H. S.; Ko H. Fingertip Skin–Inspired Microstructured Ferroelectric Skins Discriminate Static/Dynamic Pressure and Temperature Stimuli. Sci. Adv. 2015, 1, e1500661. 10.1126/sciadv.1500661.
- Chun S.; Choi Y.; Suh D. I.; Bae G. Y.; Hyun S.; Park W. A Tactile Sensor Using Single Layer Graphene for Surface Texture Recognition. Nanoscale 2017, 9, 10248–10255. 10.1039/C7NR03748A.
- Cao Y.; Li T.; Gu Y.; Luo H.; Wang S.; Zhang T. Fingerprint-Inspired Flexible Tactile Sensor for Accurately Discerning Surface Texture. Small 2018, 14, 1703902. 10.1002/smll.201703902.
- Jung M.; Vishwanath S. K.; Kim J.; Ko D.-K.; Park M.-J.; Lim S.-C.; Jeon S. Transparent and Flexible Mayan-Pyramid-Based Pressure Sensor Using Facile-Transferred Indium Tin Oxide for Bimodal Sensor Applications. Sci. Rep. 2019, 9, 14040. 10.1038/s41598-019-50247-4.
- Wang Y.; He B.; Zhou Y.; Lu R.; Wang Z.; Lu P.; Yan Z. Surface Texture Recognition Network Based on Flexible Electronic Skin. In 2021 40th Chinese Control Conference (CCC); 2021; pp 8721–8726. 10.23919/CCC52363.2021.9549953.
- Velik R. An Objective Review of the Technological Developments for Radial Pulse Diagnosis in Traditional Chinese Medicine. Eur. J. Integr. Med. 2015, 7, 321–331. 10.1016/j.eujim.2015.06.006.
- Huang K.-H.; Tan F.; Wang T.-D.; Yang Y.-J. A Highly Sensitive Pressure-Sensing Array for Blood Pressure Estimation Assisted by Machine-Learning Techniques. Sensors 2019, 19, 848. 10.3390/s19040848.
- Amit M.; Chukoskie L.; Skalsky A. J.; Garudadri H.; Ng T. N. Flexible Pressure Sensors for Objective Assessment of Motor Disorders. Adv. Funct. Mater. 2020, 30, 1905241. 10.1002/adfm.201905241.
- Yalçın Ç.; Sam M.; Bu Y.; Amit M.; Skalsky A. J.; Yip M.; Ng T. N.; Garudadri H. Artifacts Mitigation in Sensors for Spasticity Assessment. Adv. Intell. Syst. 2021, 3, 2000106. 10.1002/aisy.202000106.
- Bae K.; Jeong J.; Choi J.; Pyo S.; Kim J. Large-Area, Crosstalk-Free, Flexible Tactile Sensor Matrix Pixelated by Mesh Layers. ACS Appl. Mater. Interfaces 2021, 13, 12259–12267. 10.1021/acsami.0c21671.
- Pang K.; Song X.; Xu Z.; Liu X.; Liu Y.; Zhong L.; Peng Y.; Wang J.; Zhou J.; Meng F.; et al. Hydroplastic Foaming of Graphene Aerogels and Artificially Intelligent Tactile Sensors. Sci. Adv. 2020, 6, eabd4045. 10.1126/sciadv.abd4045.
- Li Q.; Zhi X.; Xia Y.; Han S.; Guo W.; Li M.; Wang X. Ultrastretchable High-Conductivity MXene-Based Organohydrogels for Human Health Monitoring and Machine-Learning-Assisted Recognition. ACS Appl. Mater. Interfaces 2023, 15, 19435–19446. 10.1021/acsami.3c00432.
- Hu Q.; Tang X.; Tang W. A Smart Chair Sitting Posture Recognition System Using Flex Sensors and FPGA Implemented Artificial Neural Network. IEEE Sens. J. 2020, 20, 8007–8016. 10.1109/JSEN.2020.2980207.
- Sinha A. K.; Goh G. L.; Yeong W. Y.; Cai Y. Ultra-Low-Cost, Crosstalk-Free, Fast-Responding, Wide-Sensing-Range Tactile Fingertip Sensor for Smart Gloves. Adv. Mater. Interfaces 2022, 9, 2200621. 10.1002/admi.202200621.
- Cicek M. O.; Durukan M. B.; Yıldız B.; Keskin D.; Doganay D.; Çınar Aygün S.; Cakir M. P.; Unalan H. E. Ultra-Sensitive Bio-Polymer Iontronic Sensors for Object Recognition from Tactile Feedback. Adv. Mater. Technol. 2023, 8, 2300322. 10.1002/admt.202300322.
- Wang P.; Liu P.; Feng H.; Li Y.; Zhang Q.; Hu R.; Liu C.; Xing K.; Song A.; Yang X.; et al. Flexible and Wireless Normal-Tangential Force Sensor Based on Resonant Mechanism for Robotic Gripping Applications. Adv. Mater. Technol. 2022, 7, 2101385. 10.1002/admt.202101385.
- Zhang Q.; Li Y.; Luo Y.; Shou W.; Foshey M.; Yan J.; Tenenbaum J. B.; Matusik W.; Torralba A. Dynamic Modeling of Hand-Object Interactions via Tactile Sensing. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2021; pp 2874–2881. 10.1109/IROS51168.2021.9636361.
- Xie Y.; Wu X.; Huang X.; Liang Q.; Deng S.; Wu Z.; Yao Y.; Lu L. A Deep Learning-Enabled Skin-Inspired Pressure Sensor for Complicated Recognition Tasks with Ultralong Life. Research 2023, 6, 0157. 10.34133/research.0157.
- Lee G.-H.; Park J.-K.; Byun J.; Yang J. C.; Kwon S. Y.; Kim C.; Jang C.; Sim J. Y.; Yook J.-G.; Park S. Parallel Signal Processing of a Wireless Pressure-Sensing Platform Combined with Machine-Learning-Based Cognition, Inspired by the Human Somatosensory System. Adv. Mater. 2020, 32, 1906269. 10.1002/adma.201906269.
- Lee J.-W.; Chung J.; Cho M.-Y.; Timilsina S.; Sohn K.; Kim J. S.; Sohn K.-S. Deep-Learning Technique To Convert a Crude Piezoresistive Carbon Nanotube-Ecoflex Composite Sheet into a Smart, Portable, Disposable, and Extremely Flexible Keypad. ACS Appl. Mater. Interfaces 2018, 10, 20862–20868. 10.1021/acsami.8b04914.
- Sohn K.-S.; Chung J.; Cho M.-Y.; Timilsina S.; Park W. B.; Pyo M.; Shin N.; Sohn K.; Kim J. S. An Extremely Simple Macroscale Electronic Skin Realized by Deep Machine Learning. Sci. Rep. 2017, 7, 11061. 10.1038/s41598-017-11663-6.
- Yu B.; Kang S.-Y.; Akthakul A.; Ramadurai N.; Pilkenton M.; Patel A.; Nashat A.; Anderson D. G.; Sakamoto F. H.; Gilchrest B. A.; et al. An Elastic Second Skin. Nat. Mater. 2016, 15, 911–918. 10.1038/nmat4635.
- Li T.; Su Y.; Chen F.; Zheng H.; Meng W.; Liu Z.; Ai Q.; Liu Q.; Tan Y.; Zhou Z. Bioinspired Stretchable Fiber-Based Sensor toward Intelligent Human-Machine Interactions. ACS Appl. Mater. Interfaces 2022, 14, 22666–22677. 10.1021/acsami.2c05823.
- Kim K. K.; Ha I.; Kim M.; Choi J.; Won P.; Jo S.; Ko S. H. A Deep-Learned Skin Sensor Decoding the Epicentral Human Motions. Nat. Commun. 2020, 11, 2149. 10.1038/s41467-020-16040-y.
- Lee S.; Hinchet R.; Lee Y.; Yang Y.; Lin Z.-H.; Ardila G.; Montès L.; Mouis M.; Wang Z. L. Ultrathin Nanogenerators as Self-Powered/Active Skin Sensors for Tracking Eye Ball Motion. Adv. Funct. Mater. 2014, 24, 1163–1168. 10.1002/adfm.201301971.
- Zhuang M.; Yin L.; Wang Y.; Bai Y.; Zhan J.; Hou C.; Yin L.; Xu Z.; Tan X.; Huang Y. Highly Robust and Wearable Facial Expression Recognition via Deep-Learning-Assisted, Soft Epidermal Electronics. Research 2021, 2021, 9759601. 10.34133/2021/9759601.
- Yoo H.; Kim E.; Chung J. W.; Cho H.; Jeong S.; Kim H.; Jang D.; Kim H.; Yoon J.; Lee G. H.; et al. Silent Speech Recognition with Strain Sensors and Deep Learning Analysis of Directional Facial Muscle Movement. ACS Appl. Mater. Interfaces 2022, 14, 54157–54169. 10.1021/acsami.2c14918.
- Jiang Y.; Sadeqi A.; Miller E. L.; Sonkusale S. Head Motion Classification Using Thread-Based Sensor and Machine Learning Algorithm. Sci. Rep. 2021, 11, 2646. 10.1038/s41598-021-81284-7.
- Polat B.; Becerra L. L.; Hsu P.-Y.; Kaipu V.; Mercier P. P.; Cheng C.-K.; Lipomi D. J. Epidermal Graphene Sensors and Machine Learning for Estimating Swallowed Volume. ACS Appl. Nano Mater. 2021, 4, 8126–8134. 10.1021/acsanm.1c01378.
- Long Y.; He P.; Xu R.; Hayasaka T.; Shao Z.; Zhong J.; Lin L. Molybdenum-Carbide-Graphene Composites for Paper-Based Strain and Acoustic Pressure Sensors. Carbon 2020, 157, 594–601. 10.1016/j.carbon.2019.10.083.
- Huang J.; Chen A.; Han S.; Wu Q.; Zhu J.; Zhang J.; Chen Y.; Liu J.; Guan L. Tough and Robust Mechanically Interlocked Gel–Elastomer Hybrid Electrode for Soft Strain Gauge. Adv. Sci. 2023, 10, 2301116. 10.1002/advs.202301116.
- Wang L.; Cheng W.; Pan L.; Gu T.; Wu T.; Tao X.; Lu J. SpiderWalk: Circumstance-Aware Transportation Activity Detection Using a Novel Contact Vibration Sensor. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2018, 2, 1–30. 10.1145/3191774.
- Yang H.; Li J.; Xiao X.; Wang J.; Li Y.; Li K.; Li Z.; Yang H.; Wang Q.; Yang J.; et al. Topographic Design in Wearable MXene Sensors with In-Sensor Machine Learning for Full-Body Avatar Reconstruction. Nat. Commun. 2022, 13, 5311. 10.1038/s41467-022-33021-5.
- Maurya D.; Khaleghian S.; Sriramdas R.; Kumar P.; Kishore R. A.; Kang M. G.; Kumar V.; Song H.-C.; Lee S.-Y.; Yan Y.; et al. 3D Printed Graphene-Based Self-Powered Strain Sensors for Smart Tires in Autonomous Vehicles. Nat. Commun. 2020, 11, 5392. 10.1038/s41467-020-19088-y.
- Kim T.; Hong I.; Kim M.; Im S.; Roh Y.; Kim C.; Lim J.; Kim D.; Park J.; Lee S.; et al. Ultra-Stable and Tough Bioinspired Crack-Based Tactile Sensor for Small Legged Robots. Npj Flex. Electron. 2023, 7, 22. 10.1038/s41528-023-00255-2.
- Ma Y.; Zhang D.; Wang Z.; Zhang H.; Xia H.; Mao R.; Cai H.; Luan H. Self-Adhesive, Anti-Freezing MXene-Based Hydrogel Strain Sensor for Motion Monitoring and Handwriting Recognition with Deep Learning. ACS Appl. Mater. Interfaces 2023, 15, 29413–29424. 10.1021/acsami.3c02014.
- Yang C.; Zhang D.; Wang D.; Luan H.; Chen X.; Yan W. In Situ Polymerized MXene/Polypyrrole/Hydroxyethyl Cellulose-Based Flexible Strain Sensor Enabled by Machine Learning for Handwriting Recognition. ACS Appl. Mater. Interfaces 2023, 15, 5811–5821. 10.1021/acsami.2c18989. [DOI] [PubMed] [Google Scholar]
- Zavareh A.; Tran B.; Orred C.; Rhodes S.; Rahman M. S.; Namkoong M.; Lee R.; Carlisle C.; Rosas M.; Pavlov A.; et al. Soft Wearable Thermal Devices Integrated with Machine Learning. Adv. Mater. Technol. 2023, 8, 2300206 10.1002/admt.202300206. [DOI] [Google Scholar]
- Wang Z. L.; Song J. Piezoelectric Nanogenerators Based on Zinc Oxide Nanowire Arrays. Science 2006, 312, 242–246. 10.1126/science.1124005. [DOI] [PubMed] [Google Scholar]
- Fan F.-R.; Tian Z.-Q.; Lin Wang Z. Flexible Triboelectric Generator. Nano Energy 2012, 1, 328–334. 10.1016/j.nanoen.2012.01.004. [DOI] [Google Scholar]
- Oddo C. M.; Controzzi M.; Beccai L.; Cipriani C.; Carrozza M. C. Roughness Encoding for Discrimination of Surfaces in Artificial Active-Touch. IEEE Trans. Robot. 2011, 27, 522–533. 10.1109/TRO.2011.2116930. [DOI] [Google Scholar]
- Xing P.; An S.; Wu Y.; Li G.; Liu S.; Wang J.; Cheng Y.; Zhang Y.; Pu X. A Triboelectric Tactile Sensor with Flower-Shaped Holes for Texture Recognition. Nano Energy 2023, 116, 108758 10.1016/j.nanoen.2023.108758. [DOI] [Google Scholar]
- Zhao X.; Zhang Z.; Xu L.; Gao F.; Zhao B.; Ouyang T.; Kang Z.; Liao Q.; Zhang Y. Fingerprint-Inspired Electronic Skin Based on Triboelectric Nanogenerator for Fine Texture Recognition. Nano Energy 2021, 85, 106001 10.1016/j.nanoen.2021.106001. [DOI] [Google Scholar]
- Huang J.; Yang X.; Yu J.; Han J.; Jia C.; Ding M.; Sun J.; Cao X.; Sun Q.; Wang Z. L. A Universal and Arbitrary Tactile Interactive System Based on Self-Powered Optical Communication. Nano Energy 2020, 69, 104419 10.1016/j.nanoen.2019.104419. [DOI] [Google Scholar]
- Yu J.; Wang Y.; Qin S.; Gao G.; Xu C.; Lin Wang Z.; Sun Q. Bioinspired Interactive Neuromorphic Devices. Mater. Today 2022, 60, 158–182. 10.1016/j.mattod.2022.09.012. [DOI] [Google Scholar]
- Xue J.; Zou Y.; Deng Y.; Li Z. Bioinspired Sensor System for Health Care and Human-Machine Interaction. EcoMat 2022, 4, e12209. 10.1002/eom2.12209.
- Fang H.; Wang L.; Fu Z.; Xu L.; Guo W.; Huang J.; Wang Z. L.; Wu H. Anatomically Designed Triboelectric Wristbands with Adaptive Accelerated Learning for Human–Machine Interfaces. Adv. Sci. 2023, 10, 2205960. 10.1002/advs.202205960.
- Wen F.; Sun Z.; He T.; Shi Q.; Zhu M.; Zhang Z.; Li L.; Zhang T.; Lee C. Machine Learning Glove Using Self-Powered Conductive Superhydrophobic Triboelectric Textile for Gesture Recognition in VR/AR Applications. Adv. Sci. 2020, 7, 2000261. 10.1002/advs.202000261.
- Babu A.; Ranpariya S.; Sinha D. K.; Mandal D. Deep Learning Enabled Perceptive Wearable Sensor: An Interactive Gadget for Tracking Movement Disorder. Adv. Mater. Technol. 2023, 8, 2300046. 10.1002/admt.202300046.
- Yang J.; Liu S.; Meng Y.; Xu W.; Liu S.; Jia L.; Chen G.; Qin Y.; Han M.; Li X. Self-Powered Tactile Sensor for Gesture Recognition Using Deep Learning Algorithms. ACS Appl. Mater. Interfaces 2022, 14, 25629–25637. 10.1021/acsami.2c01730.
- Kang H.; Zhao C.; Huang J.; Ho D. H.; Megra Y. T.; Suk J. W.; Sun J.; Wang Z. L.; Sun Q.; Cho J. H. Fingerprint-Inspired Conducting Hierarchical Wrinkles for Energy-Harvesting E-Skin. Adv. Funct. Mater. 2019, 29, 1903580. 10.1002/adfm.201903580.
- Wang G.; Liu X.; Wang Y.; Zheng Z.; Zhu Z.; Yin Y.; Zhu L.; Wang X. Energy Harvesting and Sensing Integrated Woven Structure Kneepad Based on Triboelectric Nanogenerators. Adv. Mater. Technol. 2023, 8, 2200973. 10.1002/admt.202200973.
- An S.; Pu X.; Zhou S.; Wu Y.; Li G.; Xing P.; Zhang Y.; Hu C. Deep Learning Enabled Neck Motion Detection Using a Triboelectric Nanogenerator. ACS Nano 2022, 16, 9359–9367. 10.1021/acsnano.2c02149.
- Xiong Y.; Luo L.; Yang J.; Han J.; Liu Y.; Jiao H.; Wu S.; Cheng L.; Feng Z.; Sun J.; et al. Scalable Spinning, Winding, and Knitting Graphene Textile TENG for Energy Harvesting and Human Motion Recognition. Nano Energy 2023, 107, 108137. 10.1016/j.nanoen.2022.108137.
- Ji X.; Zhao T.; Zhao X.; Lu X.; Li T. Triboelectric Nanogenerator Based Smart Electronics via Machine Learning. Adv. Mater. Technol. 2020, 5, 1900921. 10.1002/admt.201900921.
- Xie J.; Zhao Y.; Zhu D.; Yan J.; Li J.; Qiao M.; He G.; Deng S. A Machine Learning-Combined Flexible Sensor for Tactile Detection and Voice Recognition. ACS Appl. Mater. Interfaces 2023, 15, 12551–12559. 10.1021/acsami.2c22287.
- Shi Q.; Zhang Z.; He T.; Sun Z.; Wang B.; Feng Y.; Shan X.; Salam B.; Lee C. Deep Learning Enabled Smart Mats as a Scalable Floor Monitoring System. Nat. Commun. 2020, 11, 4609. 10.1038/s41467-020-18471-z.
- Kim K.; Sim M.; Lim S.-H.; Kim D.; Lee D.; Shin K.; Moon C.; Choi J.-W.; Jang J. E. Tactile Avatar: Tactile Sensing System Mimicking Human Tactile Cognition. Adv. Sci. 2021, 8, 2002362. 10.1002/advs.202002362.
- Zhang Z.; Shi Q.; He T.; Guo X.; Dong B.; Lee J.; Lee C. Artificial Intelligence of Toilet (AI-Toilet) for an Integrated Health Monitoring System (IHMS) Using Smart Triboelectric Pressure Sensors and Image Sensor. Nano Energy 2021, 90, 106517. 10.1016/j.nanoen.2021.106517.
- Babu A.; Ranpariya S.; Sinha D. K.; Chatterjee A.; Mandal D. Deep Learning Enabled Early Predicting Cardiovascular Status Using Highly Sensitive Piezoelectric Sensor of Solution-Processable Nylon-11. Adv. Mater. Technol. 2023, 8, 2202021. 10.1002/admt.202202021.
- Kachuee M.; Kiani M. M.; Mohammadzade H.; Shabany M. Cuffless Blood Pressure Estimation Algorithms for Continuous Health-Care Monitoring. IEEE Trans. Biomed. Eng. 2017, 64, 859–869. 10.1109/TBME.2016.2580904.
- Wei X.; Wang B.; Wu Z.; Wang Z. L. An Open-Environment Tactile Sensing System: Toward Simple and Efficient Material Identification. Adv. Mater. 2022, 34, 2203073. 10.1002/adma.202203073.
- Cao L.; Ye C.; Zhang H.; Yang S.; Shan Y.; Lv Z.; Ren J.; Ling S. An Artificial Motion and Tactile Receptor Constructed by Hyperelastic Double Physically Cross-Linked Silk Fibroin Ionoelastomer. Adv. Funct. Mater. 2023, 33, 2301404. 10.1002/adfm.202301404.
- Liu D.; Zhou L.; Cui S.; Gao Y.; Li S.; Zhao Z.; Yi Z.; Zou H.; Fan Y.; Wang J.; et al. Standardized Measurement of Dielectric Materials’ Intrinsic Triboelectric Charge Density through the Suppression of Air Breakdown. Nat. Commun. 2022, 13, 6019. 10.1038/s41467-022-33766-z.
- Meng Y.; Zhao J.; Yang X.; Zhao C.; Qin S.; Cho J. H.; Zhang C.; Sun Q.; Wang Z. L. Mechanosensation-Active Matrix Based on Direct-Contact Tribotronic Planar Graphene Transistor Array. ACS Nano 2018, 12, 9381–9389. 10.1021/acsnano.8b04490.
- Shi Q.; Sun Z.; Le X.; Xie J.; Lee C. Soft Robotic Perception System with Ultrasonic Auto-Positioning and Multimodal Sensory Intelligence. ACS Nano 2023, 17, 4985–4998. 10.1021/acsnano.2c12592.
- Jin T.; Sun Z.; Li L.; Zhang Q.; Zhu M.; Zhang Z.; Yuan G.; Chen T.; Tian Y.; Hou X.; et al. Triboelectric Nanogenerator Sensors for Soft Robotics Aiming at Digital Twin Applications. Nat. Commun. 2020, 11, 5381. 10.1038/s41467-020-19059-3.
- Bokka N.; Selamneni V.; Sahatiya P. A Water Destructible SnS2 QD/PVA Film Based Transient Multifunctional Sensor and Machine Learning Assisted Stimulus Identification for Non-Invasive Personal Care Diagnostics. Mater. Adv. 2020, 1, 2818–2830. 10.1039/D0MA00573H.
- Li G.; Zhu R. A Multisensory Tactile System for Robotic Hands to Recognize Objects. Adv. Mater. Technol. 2019, 4, 1900602. 10.1002/admt.201900602.
- Hughes J.; Spielberg A.; Chounlakone M.; Chang G.; Matusik W.; Rus D. A Simple, Inexpensive, Wearable Glove with Hybrid Resistive-Pressure Sensors for Computational Sensing, Proprioception, and Task Identification. Adv. Intell. Syst. 2020, 2, 2000002. 10.1002/aisy.202000002.
- Zhu J.; Zhang X.; Wang R.; Wang M.; Chen P.; Cheng L.; Wu Z.; Wang Y.; Liu Q.; Liu M. A Heterogeneously Integrated Spiking Neuron Array for Multimode-Fused Perception and Object Classification. Adv. Mater. 2022, 34, 2200481. 10.1002/adma.202200481.
- Qiu Y.; Sun S.; Wang X.; Shi K.; Wang Z.; Ma X.; Zhang W.; Bao G.; Tian Y.; Zhang Z.; et al. Nondestructive Identification of Softness via Bioinspired Multisensory Electronic Skins Integrated on a Robotic Hand. npj Flex. Electron. 2022, 6, 45. 10.1038/s41528-022-00181-9.
- Chun S.; Son W.; Kim H.; Lim S. K.; Pang C.; Choi C. Self-Powered Pressure- and Vibration-Sensitive Tactile Sensors for Learning Technique-Based Neural Finger Skin. Nano Lett. 2019, 19, 3305–3312. 10.1021/acs.nanolett.9b00922.
- Sun Z.; Zhu M.; Zhang Z.; Chen Z.; Shi Q.; Shan X.; Yeow R. C. H.; Lee C. Artificial Intelligence of Things (AIoT) Enabled Virtual Shop Applications Using Self-Powered Sensor Enhanced Soft Robotic Manipulator. Adv. Sci. 2021, 8, 2100230. 10.1002/advs.202100230.
- Yang C.; Wang H.; Yang J.; Yao H.; He T.; Bai J.; Guang T.; Cheng H.; Yan J.; Qu L. A Machine-Learning-Enhanced Simultaneous and Multimodal Sensor Based on Moist-Electric Powered Graphene Oxide. Adv. Mater. 2022, 34, 2205249. 10.1002/adma.202205249.
- Pan J.; Luo Y.; Li Y.; Tham C.-K.; Heng C.-H.; Thean A. V.-Y. A Wireless Multi-Channel Capacitive Sensor System for Efficient Glove-Based Gesture Recognition With AI at the Edge. IEEE Trans. Circuits Syst. II Express Briefs 2020, 67, 1624–1628. 10.1109/TCSII.2020.3010318.
- Moin A.; Zhou A.; Rahimi A.; Menon A.; Benatti S.; Alexandrov G.; Tamakloe S.; Ting J.; Yamamoto N.; Khan Y.; et al. A Wearable Biosensing System with In-Sensor Adaptive Machine Learning for Hand Gesture Recognition. Nat. Electron. 2021, 4, 54–63. 10.1038/s41928-020-00510-8.
- Wang L.; Yi Z.; Zhao Y.; Liu Y.; Wang S. Stretchable Conductors for Stretchable Field-Effect Transistors and Functional Circuits. Chem. Soc. Rev. 2023, 52, 795–835. 10.1039/D2CS00837H.
- Dai Y.; Hu H.; Wang M.; Xu J.; Wang S. Stretchable Transistors and Functional Circuits for Human-Integrated Electronics. Nat. Electron. 2021, 4, 17–29. 10.1038/s41928-020-00513-5.
- Wu F.; Liu Y.; Zhang J.; Duan S.; Ji D.; Yang H. Recent Advances in High-Mobility and High-Stretchability Organic Field-Effect Transistors: From Materials, Devices to Applications. Small Methods 2021, 5, 2100676. 10.1002/smtd.202100676.
- Xu X.; Zhao Y.; Liu Y. Wearable Electronics Based on Stretchable Organic Semiconductors. Small 2023, 19, 2206309. 10.1002/smll.202206309.
- Weng Y.; Yu Z.; Wu T.; Liang L.; Liu S. Recent Progress in Stretchable Organic Field-Effect Transistors: Key Materials, Fabrication and Applications. New J. Chem. 2023, 47, 5086–5109. 10.1039/D2NJ06190B.
- Mehonic A.; Sebastian A.; Rajendran B.; Simeone O.; Vasilaki E.; Kenyon A. J. Memristors—From In-Memory Computing, Deep Learning Acceleration, and Spiking Neural Networks to the Future of Neuromorphic and Bio-Inspired Computing. Adv. Intell. Syst. 2020, 2, 2000085. 10.1002/aisy.202000085.
- Tang B.; Veluri H.; Li Y.; Yu Z. G.; Waqar M.; Leong J. F.; Sivan M.; Zamburg E.; Zhang Y.-W.; Wang J.; et al. Wafer-Scale Solution-Processed 2D Material Analog Resistive Memory Array for Memory-Based Computing. Nat. Commun. 2022, 13, 3037. 10.1038/s41467-022-30519-w.
- Wan H.; Cao Y.; Lo L.-W.; Zhao J.; Sepúlveda N.; Wang C. Flexible Carbon Nanotube Synaptic Transistor for Neurological Electronic Skin Applications. ACS Nano 2020, 14, 10402–10412. 10.1021/acsnano.0c04259.
- Zang Y.; Shen H.; Huang D.; Di C.-A.; Zhu D. A Dual-Organic-Transistor-Based Tactile-Perception System with Signal-Processing Functionality. Adv. Mater. 2017, 29, 1606088. 10.1002/adma.201606088.
- Yang X.; Yu J.; Zhao J.; Chen Y.; Gao G.; Wang Y.; Sun Q.; Wang Z. L. Mechanoplastic Tribotronic Floating-Gate Neuromorphic Transistor. Adv. Funct. Mater. 2020, 30, 2002506. 10.1002/adfm.202002506.
- Zhou Y.; Fu J.; Chen Z.; Zhuge F.; Wang Y.; Yan J.; Ma S.; Xu L.; Yuan H.; Chan M.; et al. Computational Event-Driven Vision Sensors for in-Sensor Spiking Neural Networks. Nat. Electron. 2023, 6, 870–878. 10.1038/s41928-023-01055-2.
- Yu J.; Yang X.; Gao G.; Xiong Y.; Wang Y.; Han J.; Chen Y.; Zhang H.; Sun Q.; Wang Z. L. Bioinspired Mechano-Photonic Artificial Synapse Based on Graphene/MoS2 Heterostructure. Sci. Adv. 2021, 7, eabd9117. 10.1126/sciadv.abd9117.
- Yu J.; Gao G.; Huang J.; Yang X.; Han J.; Zhang H.; Chen Y.; Zhao C.; Sun Q.; Wang Z. L. Contact-Electrification-Activated Artificial Afferents at Femtojoule Energy. Nat. Commun. 2021, 12, 1581. 10.1038/s41467-021-21890-1.
- Kim Y.; Chortos A.; Xu W.; Liu Y.; Oh J. Y.; Son D.; Kang J.; Foudeh A. M.; Zhu C.; Lee Y.; et al. A Bioinspired Flexible Organic Artificial Afferent Nerve. Science 2018, 360, 998–1003. 10.1126/science.aao0098.
- Zhu Y.; Wu C.; Xu Z.; Liu Y.; Hu H.; Guo T.; Kim T. W.; Chai Y.; Li F. Light-Emitting Memristors for Optoelectronic Artificial Efferent Nerve. Nano Lett. 2021, 21, 6087–6094. 10.1021/acs.nanolett.1c01482.
- Shan X.; Zhao C.; Wang X.; Wang Z.; Fu S.; Lin Y.; Zeng T.; Zhao X.; Xu H.; Zhang X.; et al. Plasmonic Optoelectronic Memristor Enabling Fully Light-Modulated Synaptic Plasticity for Neuromorphic Vision. Adv. Sci. 2022, 9, 2104632. 10.1002/advs.202104632.
- Wang Y.; Gong Y.; Yang L.; Xiong Z.; Lv Z.; Xing X.; Zhou Y.; Zhang B.; Su C.; Liao Q.; et al. MXene-ZnO Memristor for Multimodal in-Sensor Computing. Adv. Funct. Mater. 2021, 31, 2100144. 10.1002/adfm.202100144.
- Chen D.; Zhi X.; Xia Y.; Li S.; Xi B.; Zhao C.; Wang X. A Digital–Analog Bimodal Memristor Based on CsPbBr3 for Tactile Sensory Neuromorphic Computing. Small 2023, 19, 2301196. 10.1002/smll.202301196.
- Jiang C.; Tan D.; Sun N.; Huang J.; Ji R.; Li Q.; Bi S.; Guo Q.; Wang X.; Song J. 60 nm Pixel-Size Pressure Piezo-Memory System as Ultrahigh-Resolution Neuromorphic Tactile Sensor for in-Chip Computing. Nano Energy 2021, 87, 106190. 10.1016/j.nanoen.2021.106190.
- Tan H.; Tao Q.; Pande I.; Majumdar S.; Liu F.; Zhou Y.; Persson P. O. Å.; Rosen J.; van Dijken S. Tactile Sensory Coding and Learning with Bio-Inspired Optoelectronic Spiking Afferent Nerves. Nat. Commun. 2020, 11, 1369. 10.1038/s41467-020-15105-2.
- Lee Y.; Oh J. Y.; Xu W.; Kim O.; Kim T. R.; Kang J.; Kim Y.; Son D.; Tok J. B.-H.; Park M. J.; et al. Stretchable Organic Optoelectronic Sensorimotor Synapse. Sci. Adv. 2018, 4, eaat7387. 10.1126/sciadv.aat7387.
- Molina-Lopez F.; Gao T. Z.; Kraft U.; Zhu C.; Öhlund T.; Pfattner R.; Feig V. R.; Kim Y.; Wang S.; Yun Y.; et al. Inkjet-Printed Stretchable and Low Voltage Synaptic Transistor Array. Nat. Commun. 2019, 10, 2676. 10.1038/s41467-019-10569-3.
- Shim H.; Sim K.; Ershad F.; Yang P.; Thukral A.; Rao Z.; Kim H.-J.; Liu Y.; Wang X.; Gu G.; et al. Stretchable Elastic Synaptic Transistors for Neurologically Integrated Soft Engineering Systems. Sci. Adv. 2019, 5, eaax4961. 10.1126/sciadv.aax4961.
- Dai S.; Dai Y.; Zhao Z.; Xia F.; Li Y.; Liu Y.; Cheng P.; Strzalka J.; Li S.; Li N.; et al. Intrinsically Stretchable Neuromorphic Devices for On-Body Processing of Health Data with Artificial Intelligence. Matter 2022, 5, 3375–3390. 10.1016/j.matt.2022.07.016.
- Yang M.; Zhao X.; Tang Q.; Cui N.; Wang Z.; Tong Y.; Liu Y. Stretchable and Conformable Synapse Memristors for Wearable and Implantable Electronics. Nanoscale 2018, 10, 18135–18144. 10.1039/C8NR05336G.
- Wang T.; Cui Z.; Liu Y.; Lu D.; Wang M.; Wan C.; Leow W. R.; Wang C.; Pan L.; Cao X.; et al. Mechanically Durable Memristor Arrays Based on a Discrete Structure Design. Adv. Mater. 2022, 34, 2106212. 10.1002/adma.202106212.
- Wan C.; Cai P.; Guo X.; Wang M.; Matsuhisa N.; Yang L.; Lv Z.; Luo Y.; Loh X. J.; Chen X. An Artificial Sensory Neuron with Visual-Haptic Fusion. Nat. Commun. 2020, 11, 4602. 10.1038/s41467-020-18375-y.
- Wan H.; Zhao J.; Lo L.-W.; Cao Y.; Sepúlveda N.; Wang C. Multimodal Artificial Neurological Sensory-Memory System Based on Flexible Carbon Nanotube Synaptic Transistor. ACS Nano 2021, 15, 14587. 10.1021/acsnano.1c04298.
- Fang Y.; Zou Y.; Xu J.; Chen G.; Zhou Y.; Deng W.; Zhao X.; Roustaei M.; Hsiai T. K.; Chen J. Ambulatory Cardiovascular Monitoring Via a Machine-Learning-Assisted Textile Triboelectric Sensor. Adv. Mater. 2021, 33, 2104178. 10.1002/adma.202104178.
- Chung H. U.; Rwei A. Y.; Hourlier-Fargette A.; Xu S.; Lee K.; Dunne E. C.; Xie Z.; Liu C.; Carlini A.; Kim D. H.; et al. Skin-Interfaced Biosensors for Advanced Wireless Physiological Monitoring in Neonatal and Pediatric Intensive-Care Units. Nat. Med. 2020, 26, 418–429. 10.1038/s41591-020-0792-9.
- Osborn L. E.; Dragomir A.; Betthauser J. L.; Hunt C. L.; Nguyen H. H.; Kaliki R. R.; Thakor N. V. Prosthesis with Neuromorphic Multilayered E-Dermis Perceives Touch and Pain. Sci. Robot. 2018, 3, eaat3818. 10.1126/scirobotics.aat3818.
- Cole J. M. A Design-to-Device Pipeline for Data-Driven Materials Discovery. Acc. Chem. Res. 2020, 53, 599–610. 10.1021/acs.accounts.9b00470.
- Pyzer-Knapp E. O.; Pitera J. W.; Staar P. W. J.; Takeda S.; Laino T.; Sanders D. P.; Sexton J.; Smith J. R.; Curioni A. Accelerating Materials Discovery Using Artificial Intelligence, High Performance Computing and Robotics. npj Comput. Mater. 2022, 8, 84. 10.1038/s41524-022-00765-z.
- Gregoire J. M.; Zhou L.; Haber J. A. Combinatorial Synthesis for AI-Driven Materials Discovery. Nat. Synth. 2023, 2, 493–504. 10.1038/s44160-023-00251-4.
- Zhu Z.; Ng D. W. H.; Park H. S.; McAlpine M. C. 3D-Printed Multifunctional Materials Enabled by Artificial-Intelligence-Assisted Fabrication Technologies. Nat. Rev. Mater. 2021, 6, 27–47. 10.1038/s41578-020-00235-2.
- Lee N. A.; Shen S. C.; Buehler M. J. An Automated Biomateriomics Platform for Sustainable Programmable Materials Discovery. Matter 2022, 5, 3597–3613. 10.1016/j.matt.2022.10.003.
- Ballard Z.; Brown C.; Madni A. M.; Ozcan A. Machine Learning and Computation-Enabled Intelligent Sensor Design. Nat. Mach. Intell. 2021, 3, 556–565. 10.1038/s42256-021-00360-9.
- Crawford K. Generative AI’s Environmental Costs Are Soaring — and Mostly Secret. Nature 2024, 626, 693. 10.1038/d41586-024-00478-x.
- Generative AI’s Energy Problem Today Is Foundational. IEEE Spectrum. https://spectrum.ieee.org/ai-energy-consumption (accessed 2024-05-02).
- Leffer L. The AI Boom Could Use a Shocking Amount of Electricity. Scientific American. https://www.scientificamerican.com/article/the-ai-boom-could-use-a-shocking-amount-of-electricity/ (accessed 2024-05-02).
- Covi E.; Donati E.; Liang X.; Kappel D.; Heidari H.; Payvand M.; Wang W. Adaptive Extreme Edge Computing for Wearable Devices. Front. Neurosci. 2021, 15, 611300. 10.3389/fnins.2021.611300.
- Qaim W. B.; Ometov A.; Molinaro A.; Lener I.; Campolo C.; Lohan E. S.; Nurmi J. Towards Energy Efficiency in the Internet of Wearable Things: A Systematic Review. IEEE Access 2020, 8, 175412–175435. 10.1109/ACCESS.2020.3025270.
- Choi J.; Oh S.; Kim J. The Useful Quantum Computing Techniques for Artificial Intelligence Engineers. In 2020 34th International Conference on Information Networking (ICOIN 2020); IEEE: New York, 2020; pp 1–3. 10.1109/icoin48656.2020.9016555.
- Sankhala V. S.; Agarwal A. Quantum AI - Prospects and Challenges. Nonlinear Opt. Quantum Opt.: Concepts Mod. Opt. 2023, 58, 213–238.
- Smith A.; Anderson J. AI, Robotics, and the Future of Jobs. Pew Research Center. https://www.pewresearch.org/internet/2014/08/06/future-of-jobs/ (accessed 2024-05-03).
- The Future of Jobs in the Era of AI. BCG Global. https://www.bcg.com/publications/2021/impact-of-new-technologies-on-jobs (accessed 2024-05-03).
- Adah W. A.; Ikumapayi N. A.; Muhammed H. B. The Ethical Implications of Advanced Artificial General Intelligence: Ensuring Responsible AI Development and Deployment. SSRN Preprint 2023. 10.2139/ssrn.4457301.
- Lobel O. The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future; PublicAffairs: New York, 2022.
- Grand View Research. Electronic Skin Market Size & Share Analysis Report, 2029. https://www.grandviewresearch.com/industry-analysis/electronic-skin-market (accessed 2024-05-03).
- Verified Market Research. Electronic Skin Market Size, Share, Trends, Opportunities & Forecast. https://www.verifiedmarketresearch.com/product/electronic-skin-market/ (accessed 2024-05-03).
- FACTUAL Market Research. Electronic Skin Market Size, Share, Growth Analysis, Trends, Forecast and Industry Reports. https://www.factualmarketresearch.com/Reports/Electronic-Skin-Market (accessed 2024-05-03).
- IMARC Group. Electronic Skin Market Growth, Size, Report 2024–2032. https://www.imarcgroup.com/electronic-skin-market (accessed 2024-05-03).
- Zhang J.; Li J.; Cheng W.; Zhang J.-H.; Zhou Z.; Sun X.; Li L.; Liang J.-G.; Shi Y.; Pan L. Challenges in Materials and Devices of Electronic Skin. ACS Mater. Lett. 2022, 4, 577–599. 10.1021/acsmaterialslett.1c00799.
- Khan A. S.; Khan F. U. A Survey of Wearable Energy Harvesting Systems. Int. J. Energy Res. 2022, 46, 2277–2329. 10.1002/er.7394.
- Vallem V.; Sargolzaeiaval Y.; Ozturk M.; Lai Y.-C.; Dickey M. D. Energy Harvesting and Storage with Soft and Stretchable Materials. Adv. Mater. 2021, 33, 2004832. 10.1002/adma.202004832.
- Ke H.; Gao M.; Li S.; Qi Q.; Zhao W.; Li X.; Li S.; Kuvondikov V.; Lv P.; Wei Q. Advances and Future Prospects of Wearable Textile- and Fiber-Based Solar Cells. Sol. RRL 2023, 7, 2300109. 10.1002/solr.202300109.
- Wang Z. L. Triboelectric Nanogenerators as New Energy Technology for Self-Powered Systems and as Active Mechanical and Chemical Sensors. ACS Nano 2013, 7, 9533–9557. 10.1021/nn404614z.
- Park H.; Lee D.; Park G.; Park S.; Khan S.; Kim J.; Kim W. Energy Harvesting Using Thermoelectricity for IoT (Internet of Things) and E-Skin Sensors. J. Phys. Energy 2019, 1, 042001. 10.1088/2515-7655/ab2f1e.
- Yang W.; Lin S.; Gong W.; Lin R.; Jiang C.; Yang X.; Hu Y.; Wang J.; Xiao X.; Li K.; et al. Single Body-Coupled Fiber Enables Chipless Textile Electronics. Science 2024, 384, 74–81. 10.1126/science.adk3755.
- Thomas A. M. Moisture Permeability, Diffusion and Sorption in Organic Film-Forming Materials. J. Appl. Chem. 1951, 1, 141–158. 10.1002/jctb.5010010401.
- Chan B. Q. Y.; Low Z. W. K.; Heng S. J. W.; Chan S. Y.; Owh C.; Loh X. J. Recent Advances in Shape Memory Soft Materials for Biomedical Applications. ACS Appl. Mater. Interfaces 2016, 8, 10070–10087. 10.1021/acsami.6b01295.
- The number of articles on e-skins with AI was sourced from Web of Science Core Collection on May 5, 2024, using the following queries: TS = (electronic skin* OR e-skin* OR flexible sens* OR wearable sens*) AND TS = (artificial intelligence* OR machine learn* OR machine intelligence* OR neuromorphic) AND PY = (1900-2019) and TS = (electronic skin* OR e-skin* OR flexible sens* OR wearable sens*) AND TS = (artificial intelligence* OR machine learn* OR machine intelligence* OR neuromorphic) AND PY = (1900-2024).