Abstract
Human–robot collaboration plays an increasingly important role in current deployments of robotic systems. Haptic perception and intelligent control are essential to ensure the safety and efficiency of human–robot interaction. However, existing robotic sensory and control systems suffer from limitations in performance, complexity, and cost. Here, the authors report a multifunctional electronic skin (e‐skin) that incorporates multiple perceptions with intelligent robotic control, enabling robots to safely and dexterously interact with humans. The e‐skin, with a simple and cost‐effective sensory structure, offers multimodal perception of proximity, temperature, contact force, and contact position with a broad measuring range, high sensitivity, and fast response. The e‐skin is applied to robots to accomplish obstacle avoidance, safe and dexterous human–robot interaction, smart teaching, and playing Tai‐Chi, demonstrating a broad range of applications for intelligent robots equipped with e‐skins.
Keywords: electronic skins, human–robot interactions, intelligent robotic control, multimodal perceptions, piezo‐thermic sensors
A multifunctional electronic skin (e‐skin) is proposed, enabling robots to safely and dexterously interact with humans. The e‐skin, with a compact hierarchical structure, detects proximity, wide‐range touch force, contact position, and temperature simultaneously and independently. Robots equipped with the e‐skin and intelligent control accomplish versatile tasks of obstacle avoidance, safe interaction, being taught by humans, and playing Tai‐Chi with humans.

1. Introduction
Human–robot interactivity and safety are essential for applications of robots in our lives,[ 1 , 2 ] such as manufacturing,[ 3 ] healthcare,[ 4 ] and domestic services.[ 5 ] These applications require robots to have the ability of intelligent human–robot interaction and a safety‐control function that avoids injuries to people or objects caused by accidental collisions,[ 1 , 6 , 7 , 8 ] and further to fulfill fine cooperation with humans. Endowing robots with the perception of proximity and contact is the foundation for reducing collision risk and intentionally interacting with humans.[ 1 ]
There are two main methods to provide a robot with an ability of haptic perception: one is using proprioceptive sensors of the robot, and the other is using additional exteroceptive sensors mounted on the robot.[ 9 ] The proprioceptive method monitors the robot actuator torque or current fluctuation and utilizes the dynamic model of the robot to perceive touch or collision.[ 9 , 10 ] However, an accurate robotic dynamic model is difficult to obtain, and the model‐based method is generally complicated and susceptible to disturbances that degrade its sensing performance.[ 9 , 10 , 11 ]
Using additional exteroceptive sensors or artificial skins mounted on robots is an alternative, model‐free method. Deploying force/torque sensors at the joints or end‐effectors of robots is a general approach enabling robots to perceive force. However, installing force/torque sensors at robotic joints degrades the mechanical strength of the robotic arms and thus reduces the load capacity of the robot. In contrast, an artificial skin endows a robot with sensory ability without changing its mechanical structure.[ 12 ] In recent years, many tactile sensors and artificial skins based on various transduction principles have been proposed, such as capacitive,[ 13 , 14 , 15 , 16 ] piezoresistive,[ 11 , 17 , 18 , 19 , 20 , 21 ] thermosensitive,[ 22 , 23 , 24 ] optic,[ 25 , 26 , 27 , 28 ] magnetic,[ 29 , 30 ] pneumatic,[ 31 ] and triboelectric.[ 32 , 33 ] However, existing tactile sensors and electronic skins (e‐skins) used for robots suffer from limitations in performance, complexity, and cost. For example, collision or contact forces on robots are sometimes tiny (sub‐newton, e.g., when a robot touches a fragile object, a tiny force must be detected so as not to crush the object[ 13 ]) and sometimes large (up to hundreds of N, e.g., when a robot collides accidentally with an object during operation[ 34 , 35 ]), which requires the artificial skin to have both high sensitivity and a wide measurement range.
On the other hand, to cover a robot, most artificial skins adopt a sensor array or a multisensor module matrix.[ 14 , 19 , 20 , 21 , 22 , 36 , 37 , 38 , 39 ] Although these artificial skins provide robots with large‐area sensing capability, the large number of sensing units leads to a complex sensing array structure, complex input/output (I/O) wiring, an expensive manufacturing process, complicated signal conditioning and calibration, a decline in data readout rate, and large computation cost and power consumption, which obviously impedes practical applications of these artificial skins on robots.[ 40 , 41 , 42 ] In addition, ease of mounting on robots and convenient maintenance are also crucial for robotic artificial skins.
Here, we report a multifunctional e‐skin with a simple and cost‐effective sensory structure that is easy to mount on robots. Utilizing a small number of low‐cost piezo‐thermic sensors and an integrated parallel‐plate capacitor, the e‐skin is endowed with multimodal perceptions of contact force, contact position, proximity, and temperature, as shown in Figure 1a. The contact force and position detection of the e‐skin offers a wide measurement range, high sensitivity, and fast response, and is immune to environmental temperature variation. We also propose an intelligent robotic control strategy incorporating the multisensations of the e‐skin, which provides a robot with the unique ability to avoid obstacles, safely and dexterously interact with humans, and be docilely taught by humans. Movie S1, Supporting Information, summarizes the functions of the e‐skin and demonstrates human–robot interactions, where the robots equipped with the e‐skins nimbly elude unexpected collisions with humans, safely and dexterously interact with humans, smartly learn skills, and play Tai‐Chi with humans.
Figure 1.

Prototype and sensing principle of the e‐skin. a) The schematic structure of the multifunctional e‐skin. b) The relative capacitance response of the e‐skin to hand approaching. D denotes the approaching distance. The inset shows the schematic diagram of the proximity sensing principle. c) The experimental response of a piezo‐thermic sensor against the shift and tilt of the top plate. ΔH denotes the displacement, where a negative ΔH refers to the noncontact gap distance, and a positive ΔH (i.e., compression depth) indicates that the piezo‐thermic material contacts the thermal sensor and is compressed. H_0 is the original height of the piezo‐thermic material (6 mm). d) Performance evaluation of the e‐skin for detecting contact force and contact position. d[i]) Measurement results of applied forces. The legend shows the XY coordinates where the force is applied. The top‐left inset shows the force errors. The bottom‐right inset shows the dynamic response of the e‐skin when a small weight (50 g) is put on the e‐skin. d[ii,iii]) Measurement results of the contact positions (X, Y). The insets show the position errors.
2. Results
The e‐skin has a simple hierarchical structure comprised of top and bottom layers as well as a middle sensing/supporting layer, as illustrated in Figure 1a. The top layer is the force‐bearing plate on which external forces act. The bottom layer is the substrate of the e‐skin and also serves as a mounting plate for assembling with robots. To ensure sufficient bearing capacity, the top and bottom plates are made of steel or conductive carbon fiber. The middle sensing/supporting layer is composed of distributed piezo‐thermic sensors to detect the force‐induced shift and tilt of the top plate, and distributed elastic polydimethylsiloxane (PDMS) cubes serving as elastic supporting elements for the top plate. Each piezo‐thermic sensor consists of a flexible thermal sensor adhered to the bottom plate and a porous piezo‐thermic material adhered to the top plate. The thermal sensor comprises two micro platinum/chromium (Pt/Cr) ribbons fabricated on a flexible polyimide substrate. The central ribbon with a small resistance (≈60 Ω), called the hot‐film (denoted as R_h), is electrically heated and operated by a constant temperature difference (CTD) circuit[ 43 ] (Figure S1, Supporting Information), while the circumjacent ribbon with a large resistance (≈600 Ω), called the cold‐film (denoted as R_c), bears low Joule heating and has almost the same temperature as the ambient. The piezo‐thermic material is composed of a mixture of PDMS, air holes, and silver nanoparticles (Ag). To ensure sensing performance and reduce cost, we propose a structural layout with four elastic PDMS cubes distributed at the four corners of the e‐skin, and four piezo‐thermic sensors distributed at the midpoints of its four edges.
The multifunctional sensing principle of the e‐skin is based on piezo‐thermic transduction to detect contact force and contact position, capacitive sensing to detect human/object approach, and thermistors to detect temperature. This hybrid sensing scheme avoids cross‐interference among the multimodal perceptions. The piezo‐thermic transducers enable highly sensitive force and position detection using simple and low‐cost thermometry,[ 44 ] which will be elaborated in a later section. The top and bottom plates of the e‐skin work as a parallel‐plate capacitor (denoted as C_e) used for proximity awareness, which is based on the coupling effect between an external capacitor (denoted as C_h) and the parallel‐plate capacitor C_e.[ 45 , 46 ] Figure 1b illustrates the response of the e‐skin to a hand approaching. The cold‐films monolithically integrated on the piezo‐thermic sensors serve as temperature sensors, which on the one hand detect the ambient temperature (Figure S2a, Supporting Information), and on the other hand automatically implement temperature compensation for the piezo‐thermic sensors via the CTD scheme[ 47 ] (detailed in the Experimental Section). Therefore, the contact force and position detection of the e‐skin is immune to environmental temperature variation (Figure S2b, Supporting Information).
Based on piezo‐thermic transduction,[ 44 ] the piezo‐thermic sensors detect the shift and tilt of the top plate by detecting the local conductive heat transfer from the hot‐films to their surrounding piezo‐thermic materials. When the e‐skin is subjected to an external force, the top plate bears the force and transmits it to the distributed PDMS elastic supporting elements situated between the top and bottom plates. The distributed PDMS elements share the applied force and are elastically deformed, which makes the top plate shift up and down as well as tilt correspondingly. The shift and tilt of the top plate are detected by the distributed piezo‐thermic sensors (Figure 1c). To ensure high sensitivity and a broad measuring range, the piezo‐thermic sensor works in a contact‐separation mode. Specifically, when the porous piezo‐thermic material contacts the thermal sensor and is compressed by an external force, its effective thermal conductivity increases due to the decrease of its air volume ratio, which in turn leads to a response of the hot‐film underneath the piezo‐thermic material. When the piezo‐thermic sensor works in the separation mode, the gap distance between the piezo‐thermic material and the thermal sensor can be sensitively detected by the hot‐film because the piezo‐thermic transduction involves an air gap layer in series with the composite material. Figure 1c shows an experimental response of a piezo‐thermic sensor to the shift of the top plate, where a negative distance ΔH refers to the noncontact gap distance and a positive distance (i.e., compression depth) indicates that the piezo‐thermic material contacts the thermal sensor and is compressed. The experimental result verifies the effectiveness of the piezo‐thermic sensor for detecting the shift of the top plate. Fusing the outputs of the distributed piezo‐thermic sensors enables the e‐skin to perceive the shift and tilt of the top plate, from which the contact force and contact position are figured out.
The responses of the distributed piezo‐thermic sensors under loading and unloading tests shown in Figure S3, Supporting Information, validate the effectiveness of the force/position sensing and indicate the low hysteresis of the e‐skin.
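The contact–separation response described above can be illustrated with a toy one‐dimensional heat‐conduction model: an air gap in series with the porous composite for negative ΔH, and a compressed composite with a reduced air fraction for positive ΔH. The conductivity values and the linear mixing rule below are illustrative assumptions, not measured properties of the e‐skin.

```python
# Toy 1D model of the piezo-thermic sensor's contact-separation mode.
# For dh < 0 an air gap (low conductivity) lies in series with the porous
# composite; for dh > 0 the composite is compressed, its air fraction drops,
# and its effective conductivity rises. All constants are illustrative.

K_AIR = 0.026       # W/(m K), thermal conductivity of air
K_SOLID = 0.6       # W/(m K), assumed conductivity of the PDMS/Ag matrix
H0 = 6e-3           # m, original composite height (6 mm, from the text)
POROSITY0 = 0.7     # initial air volume ratio (70 vol%, from the text)

def composite_k(porosity):
    """Effective conductivity of the porous composite (simple mixing rule)."""
    return K_AIR * porosity + K_SOLID * (1.0 - porosity)

def conductance(dh):
    """Series thermal conductance per unit area, hot-film to top plate.

    dh < 0: air gap of thickness |dh| in series with the uncompressed composite.
    dh > 0: composite compressed by dh, lowering its air volume ratio.
    """
    if dh < 0:
        r = (-dh) / K_AIR + H0 / composite_k(POROSITY0)  # series resistances
        return 1.0 / r
    h = H0 - dh                       # compressed height
    # Solid volume is conserved, so the air fraction shrinks on compression.
    porosity = 1.0 - (1.0 - POROSITY0) * H0 / h
    return composite_k(porosity) / h

# Heat loss from the hot-film rises monotonically with dh: closing the gap
# and then compressing the composite both increase the conductance.
g_gap, g_touch, g_pressed = conductance(-1e-3), conductance(0.0), conductance(1e-3)
assert g_gap < g_touch < g_pressed
```

The monotonic conductance explains why a single hot‐film can cover both the noncontact and compressed regimes shown in Figure 1c.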
To determine the contact force F and contact position coordinates (X, Y) on the e‐skin from the outputs of the distributed piezo‐thermic sensors in real time, we need to establish a data‐fusion algorithm. Neural networks, as a machine learning method, have been demonstrated to capably solve various data‐fusion problems.[ 48 ] To ensure high accuracy and fast response, we incorporate the dynamic mechanism of force/position sensing into the modeling and adopt a fully connected neural network to establish the dynamic model of the e‐skin (Figure S4a, Supporting Information). The time‐series data of the distributed piezo‐thermic sensors are used as the inputs of the network, and the contact force F and contact position coordinates (X, Y) are its outputs. Through optimization (Figure S4b,c, Supporting Information), the network is structured with three hidden layers of ten neurons each. To train the network and evaluate the force sensing performance of the e‐skin, we conduct a series of experiments using a force gauge, where contact forces varying from 0 to 500 N are exerted at different positions on the e‐skin. The experimental datasets used for training the neural network (about 350 000 sets) are different from those used for evaluation (about 90 000 sets) to ensure the effectiveness of the performance validation. Figure 1d illustrates the evaluation results of the e‐skin in detecting force and contact position. The test results show that the force and contact position estimated by the e‐skin are in good agreement with the actual values. The root mean square error (RMSE) of the estimated force exerted at different positions on the e‐skin (10 × 10 cm²) is less than 3.2 N over the wide range of 0–500 N (0.4 N in the low‐force range of 0–5 N), and the RMSEs of the estimated contact position reach 1.1 mm (X location) and 0.9 mm (Y location), respectively, in the range of 0–100 mm.
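As a minimal sketch of this data‐fusion idea, the snippet below trains a small fully connected network (sigmoid hidden layers, linear output) to map four sensor readings to (F, X, Y). The synthetic forward model, the scaled‐down network size, and all constants are illustrative stand‐ins for the experimental dataset and the optimized three‐hidden‐layer network described above.

```python
import numpy as np

# Sketch: fully connected net fusing four piezo-thermic sensor outputs into
# (F, X, Y). The real e-skin uses time-series inputs, three hidden layers of
# ten neurons, and ~350 000 experimental samples; here we train a smaller net
# on synthetic data generated by an assumed forward model.

rng = np.random.default_rng(0)

def synth_batch(n):
    """Hypothetical forward model: sensor outputs from force and position."""
    F = rng.uniform(0, 500, n)
    X = rng.uniform(0, 100, n)
    Y = rng.uniform(0, 100, n)
    # Four sensors at the edge midpoints respond more strongly when the
    # contact point is near them (illustrative assumption, not calibration).
    s = np.stack([F * (100 - Y), F * X, F * Y, F * (100 - X)], axis=1) / 5e4
    t = np.stack([F / 500, X / 100, Y / 100], axis=1)   # normalized targets
    return s, t

sizes = [4, 10, 10, 3]    # two sigmoid hidden layers, linear output
W = [rng.normal(0, 0.5, (a, b)) for a, b in zip(sizes, sizes[1:])]
b = [np.zeros(n) for n in sizes[1:]]
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(x):
    acts = [x]
    for i, (w, bb) in enumerate(zip(W, b)):
        z = acts[-1] @ w + bb
        acts.append(sig(z) if i < len(W) - 1 else z)   # linear output layer
    return acts

def train_step(x, t, lr=0.05):
    """One full-batch gradient-descent step on the MSE loss; returns the loss."""
    acts = forward(x)
    delta = (acts[-1] - t) / len(x)                    # dLoss/dz at the output
    for i in reversed(range(len(W))):
        grad_w, grad_b = acts[i].T @ delta, delta.sum(0)
        if i:   # backpropagate through the sigmoid: sig'(z) = a * (1 - a)
            delta = (delta @ W[i].T) * acts[i] * (1 - acts[i])
        W[i] -= lr * grad_w
        b[i] -= lr * grad_b
    return ((acts[-1] - t) ** 2).mean()

xs, ts = synth_batch(4096)
losses = [train_step(xs, ts) for _ in range(3000)]
assert losses[-1] < losses[0]    # the fit improves on the synthetic data
```

The same structure, with time‐series inputs and the experimentally collected dataset, underlies the accuracy figures reported above.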
The low detection limit of the force sensing is tested by putting a small weight on the e‐skin. The result (Figure 1d–i bottom‐right inset) shows the force detection limit of the e‐skin is less than 0.5 N and the response time of the e‐skin reaches 0.09 s. Movie S2, Supporting Information, shows a dynamic process using the e‐skin to measure the contact force and contact position simultaneously and display the measurement results in real time. The repeatability and durability test on the e‐skin by exerting an alternating force between 0 and 50 N for 5000 cycles is also conducted and shown in Figure S5, Supporting Information, which indicates the good stability and durability of the e‐skin.
In summary, our proposed e‐skin with a compact hierarchical structure is capable of perceiving proximity, temperature, contact force, and contact position with wide measuring ranges (0–500 N, 100 cm²), high sensitivity (0.5 N), fast response (0.09 s), low hysteresis, and good reliability. Unlike traditional artificial skins utilizing a large number of sensing units that require a complex array structure and complicated conditioning circuits, our e‐skin makes good use of only four distributed piezo‐thermic sensors to achieve fast and accurate detection of contact force and contact position over a large area, and thus offers the merits of simple structure, low cost, and easy operation. The e‐skin can be further scaled up without additional sensor cost, and its geometrical shape can be modified to fit various robot arms. One example of a customized e‐skin with a curved area of 139 cm² mounted on a six‐degree‐of‐freedom (6‐DoF) robot arm is shown in Figure S6, Supporting Information. In addition, the e‐skin is easy to mount on a robot. Figure 2a shows a homemade robot arm equipped with our e‐skin.
Figure 2.

Applying the e‐skin to the robot for collision detection and safety control. a) The prototype of a robot arm equipped with an e‐skin. b) Schematic diagram of the robotic feedback control system for collision detection and safety control. c) Responses of the robot arm system when only using force detection in the safety control. d) Responses of the robot arm system when combining proximity and force detection in the safety control.
As mentioned above, collision detection and safe interactive control are significant for robots.[ 11 , 49 ] A smart robot should be able to detect obstacles, be aware of collision risks, and respond quickly to minimize the risk of injury. We utilize our e‐skin to execute robotic collision detection and safety control as shown in Figure 2b. The e‐skin detects human/object approach and collision in real time and transmits the detected proximity signal and collision information (force and position) to the robot controller. When the detected signals exceed the predefined safety thresholds (|ΔC_e/C_e0| > 0.02 or F > 0.5 N), the robot triggers its safety control (deceleration or braking) in time to avoid collision injuries (Movies S3–S5, Supporting Information). Figure 2c illustrates the responses of robotic collision detection and safety control when using only the collision force signal detected by the e‐skin. A human hand approaches the running robot arm (running at 85° s⁻¹), then a collision happens. When the detected collision force exceeds the predefined threshold (0.5 N), the robot triggers a braking response. The result shows that the peak collision force reaches about 15 N, and the robot arm brakes in about 0.13 s after the collision occurs. If we combine the proximity signal with the collision force in the safety control, the collision damage can be greatly reduced and the safety level markedly improved. Figure 2d illustrates the corresponding responses of the robot when combining the proximity signal with the force detection in the safety control policy. When the e‐skin detects an object approaching via the proximity signal, the robot promptly decelerates (down to a predefined velocity of 18° s⁻¹), which greatly reduces the subsequent collision strength. The result shows that the peak collision force is reduced to about 6 N.
The dynamic processes of collision detection and safety control of the robot utilizing collision force mode and proximity/force combination mode, respectively, are demonstrated in Movies S4 and S5, Supporting Information.
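The two‐stage safety policy above can be sketched as a per‐sample decision rule. The thresholds and velocities follow the text; the function interface is a hypothetical stand‐in for the robot controller.

```python
# Sketch of the two-stage safety policy: slow down on proximity, brake on
# contact. Thresholds match the text (|dCe/Ce0| > 0.02, F > 0.5 N); the
# velocities (85 and 18 deg/s) are the values used in the demonstration.

PROX_THRESH = 0.02     # |dCe/Ce0| proximity trigger
FORCE_THRESH = 0.5     # N, contact trigger
V_NOMINAL = 85.0       # deg/s, free-motion joint velocity
V_SLOW = 18.0          # deg/s, predefined reduced velocity

def safety_command(dCe_rel, force_N):
    """Map one e-skin sample to a joint velocity command."""
    if force_N > FORCE_THRESH:
        return 0.0                 # brake: contact/collision detected
    if abs(dCe_rel) > PROX_THRESH:
        return V_SLOW              # decelerate: object approaching
    return V_NOMINAL               # free motion

assert safety_command(0.0, 0.0) == 85.0    # nothing nearby
assert safety_command(-0.05, 0.0) == 18.0  # hand approaching: slow down
assert safety_command(-0.05, 6.0) == 0.0   # collision: brake
```

Evaluating this rule once per 5 ms sampling cycle reproduces the behavior of Figure 2c,d: braking on contact alone, or deceleration before contact when the proximity channel is included.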
In addition to collision safety control, it is also necessary for robots to intentionally interact with humans to fulfill human–robot collaboration. Safe and efficient human–robot interaction is important in human–robot coexistence circumstances. We expect to develop a harmonious relationship between human and robot, that is, both can work efficiently and independently in noninteractive mode, and when an intentional interaction happens, the robot can actively comply with the human in a safe and dexterous way. To achieve this goal, we further propose an interactive control strategy that treats the robot arm as a virtual spring‐damping system to implement a safe and dexterous interaction with humans, as shown in Figure 3a. We assume that the robot arm follows a spring‐damping kinetic mechanism (detailed in the Experimental Section) and propose the following interactive control law:
$$\Delta\theta = K_k e_\tau - K_d \dot{\theta} \tag{1}$$

where Δθ is the angle increment of the robot, K_k and K_d are the coefficients related to the stiffness and damping of the kinetic system, respectively, e_τ = τ − τ_des is the torque deviation, τ is the interaction torque on the robot detected by the e‐skin, τ_des is a desired acting torque, and θ̇ is the robot angular velocity. By using this control strategy, a human can effortlessly lead the robot arm to move forward and backward by touching the e‐skin.
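A minimal discrete‐time sketch of this interactive control law is given below; the gains and the way the joint velocity is inferred from the commanded increment are illustrative assumptions.

```python
# Discrete-time sketch of the interactive control law: each control cycle,
# the angle increment grows with the torque deviation e_tau and is damped by
# the joint velocity, so the arm yields like a virtual spring-damper.

DT = 0.005     # s, control cycle of the robot (5 ms, from the text)
K_K = 0.004    # rad/(N m), gain related to the virtual stiffness (assumed)
K_D = 0.002    # s, gain related to the virtual damping (assumed)

def angle_increment(tau, tau_des, theta_dot):
    """Per-cycle angle increment from torque deviation and joint velocity."""
    return K_K * (tau - tau_des) - K_D * theta_dot

# Simulate a 1 s push (2 N m) on the e-skin followed by release: the joint
# drifts while pushed and settles once the hand lets go.
theta, theta_dot = 0.0, 0.0
for step in range(400):
    tau = 2.0 if step < 200 else 0.0           # measured interaction torque
    d_theta = angle_increment(tau, 0.0, theta_dot)
    theta_dot = d_theta / DT                   # velocity implied by the increment
    theta += d_theta

assert theta > 1.0               # the joint moved under the push
assert abs(theta_dot) < 1e-3     # and comes to rest after release
```

With these gains the loop is stable (the damping term feeds back less than one cycle's worth of increment), so the arm tracks the hand smoothly and stops when released.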
Figure 3.

Applying the e‐skin to the robot for human–robot interaction (HRI). a) The virtual spring‐damping system of the robot arm. τ is the interaction torque on the robot, τ_des is a desired acting torque, and k and μ are the stiffness coefficient and damping coefficient of the virtual system, respectively. b) Flowchart of the control system for human–robot interaction. c) Responses of the robot arm system in the human–robot interaction and a demonstration of teaching the robot to type Morse code.
When a human hand touches the robot arm, the interaction torque is detected by the e‐skin. The angle increment of the robot is then determined using Equation (1) and transmitted to the PID controller of the robot to fulfill the subsequent movement of the robot in real time (Figure 3b). Figure 3c shows the results of the human–robot interaction and a demonstration of a human teaching the robot to type Morse code. When a human hand approaches the running robot arm, the robot perceives the hand in time and brakes promptly for safe entry into the interactive mode. When the human hand touches the robot arm, the robot detects the human contact and immediately triggers the human–robot interaction mode; that is, the robot is controlled to keep in step with the human hand. Specifically, the hand leads the robot to move forward and backward intentionally via the e‐skin. In interactive mode, the motion of the robot arm is well controlled by the proposed interactive control strategy; in other words, the human can lead the robot to perform arbitrary movements with the aid of the e‐skin. As a demonstration, we teach the robot to type the Morse code of "THU," where the motion path of the robot arm represents the symbols of the Morse code. By virtue of the high sensitivity and fast response of the e‐skin as well as the robust interactive control, the movement of the robot is smooth and the teaching is effortless. After teaching, the robot independently and accurately replays typing the Morse code of "THU" in real time, as shown in Figure 3c. The dynamic process of human–robot interaction and teaching/playback is demonstrated in Movie S6, Supporting Information.
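The teaching/playback workflow above can be sketched as recording the joint angle once per control cycle while the human guides the arm, then replaying the recorded trajectory. The read/command interfaces below are hypothetical stand‐ins for the robot's PID controller.

```python
# Sketch of hand-by-hand teaching: while the human guides the arm through the
# e-skin, the controller records the joint angle every control cycle; replay
# then feeds the recorded trajectory back to the position controller.

from typing import Callable, List

def teach(read_angle: Callable[[], float], n_cycles: int) -> List[float]:
    """Record the joint angle once per control cycle during teaching."""
    return [read_angle() for _ in range(n_cycles)]

def replay(trajectory: List[float],
           command_angle: Callable[[float], None]) -> None:
    """Feed the taught trajectory back to the position controller."""
    for angle in trajectory:
        command_angle(angle)

# Usage with stand-in I/O functions: "teach" a short ramp, then replay it.
taught_angles = iter([0.0, 0.1, 0.2, 0.3])
played: List[float] = []
traj = teach(lambda: next(taught_angles), 4)
replay(traj, played.append)
assert played == [0.0, 0.1, 0.2, 0.3]
```

In the demonstrations the same idea runs at the 5 ms control cycle, so the replayed "THU" path reproduces the taught motion in real time.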
Furthermore, we apply an e‐skin (area of 139 cm² with a curved profile customized to fit a 6‐DoF robot arm) to a commercial 6‐DoF robot arm (detailed in the Experimental Section), as shown in Figure 4 and Figure S6a, Supporting Information. The e‐skin is mounted on the end‐effector of the robot. By using the proximity detection of the e‐skin, the 6‐DoF robot arm accomplishes obstacle avoidance, that is, it modifies its moving trajectory in time to ensure safety when a human is close to the robot; the results are shown in Figure 4a and Movie S7, Supporting Information. In addition, the 6‐DoF robot arm can be led to move flexibly by a hand touching the e‐skin. In this way, we can play Tai‐Chi with the robot (Movie S8, Supporting Information) or teach the robot to handwrite words (Movie S9, Supporting Information). As shown in Figure 4b, a human hand teaches the robot to handwrite the characters "CHN." After teaching, the robot can accurately and independently replay handwriting "CHN" in real time. Compared to programming‐based path planning or image‐based robotic teaching,[ 50 ] hand‐by‐hand teaching simplifies the operation process, enhances the efficiency and flexibility of robotic manipulation, and thus facilitates human–robot cooperation.
Figure 4.

Applying e‐skin on a multi‐degree‐of‐freedom robot arm to implement human–robot interaction and teaching. a) A 6‐DoF robot arm equipped with our e‐skin (Left). Motion trajectory of the 6‐DoF robot arm to implement obstacle avoidance (Right). b) Human teaches the robot to write the word “CHN” by using hand touching on the e‐skin, and the robot accurately and independently replays writing “CHN” in real time.
The above demonstrations show that safe and dexterous human–robot interactions are well accomplished by equipping robots with our multifunctional e‐skin. Using this cost‐effective e‐skin, even traditional industrial robots can be easily turned into smart collaborative robots that are able to safely and dexterously interact with humans and be docilely taught by humans. This offers a simple and low‐cost way to deploy or retrofit robots for intelligent manufacturing.
3. Conclusion
In conclusion, we propose a cost‐effective multifunctional e‐skin enabling multimodal perceptions of contact force, contact position, temperature, and proximity. Utilizing only four low‐cost piezo‐thermic sensors, the e‐skin is capable of perceiving contact force and contact position, with the merits of broad measuring ranges (0–500 N, >100 cm²), high sensitivity (0.5 N, 1 mm), fast response (0.09 s), low hysteresis, and good reliability. Moreover, the cold‐films monolithically integrated on the piezo‐thermic sensors eliminate the effects of environmental temperature on the force/position detection of the e‐skin and also provide the e‐skin with temperature sensing capability. Proximity detection is also integrated in the e‐skin via the parallel‐plate capacitor to provide robots with obstacle avoidance ability. The e‐skin is easy to assemble with robots, can be modified to fit various robot arms, and can be scaled up without additional sensor cost. We also propose an intelligent robotic control strategy incorporating the multisensations of the e‐skin to realize robotic collision/obstacle avoidance, safe and dexterous human–robot interaction, hand‐by‐hand teaching, and playing Tai‐Chi with humans. Our demonstrations show that the multifunctional e‐skin facilitates human–robot collaboration and would expedite robotic applications in intelligent manufacturing, medicine, and home service.
4. Experimental Section
Fabrication of the Electronic Skin
To ensure sufficient bearing capacity, the top and bottom plates (3 mm thick) were made of steel or conductive carbon fiber.
The thermal sensor was fabricated on a polyimide substrate (AP8525R, DuPont Co. Ltd.). First, a 30 µm photoresist was sprayed onto the substrate and patterned by using photolithography, then the Cr/Pt (35 nm/140 nm) films were sequentially deposited by sputtering, and patterned by a lift‐off process. Finally, a 4 µm parylene film was coated on the thermal sensor as a protective layer.
The piezo‐thermic material was fabricated by mixing silver nanoparticles (diameter < 100 nm, S110970, Aladdin Co. Ltd.), prepared PDMS solution (Sylgard 184, Dow Corning Company, base agent:cross‐linker ratio of 10:1 by weight), and citric acid monohydrate particles (CAM). The volume ratio of silver nanoparticles was 2.5 vol%, and the mass ratio of PDMS:CAM was 1:3.5. The mixture was cured at 75 °C for 2 h in a PMMA cylindrical mold (diameter 10 mm, height 6 mm) and then immersed in ethanol for 24 h to dissolve the CAM and form the porous material. The fabricated piezo‐thermic material had the same size as the PMMA mold.
The PDMS cube was composed of two PDMS layers with different cross‐linker ratios. The first PDMS layer was cured with a cross‐linker ratio of 25:1 by weight in a mold (30 × 30 × 3 mm³) at 75 °C for 2 h; then an identical mold was superimposed to cure the second PDMS layer with a cross‐linker ratio of 28:1 by weight.
Finally, the authors glued the PDMS cubes and the thermal sensors to the bottom plate, and paired the thermal sensors with piezo‐thermic materials as shown in Figure 1a; then the top plate was glued with piezo‐thermic materials and the PDMS cubes. Each thermal sensor pairing with a functional piezo‐thermic material constituted a piezo‐thermic sensor.
Signal Conditioning of the Electronic Skin
As shown in Figure S1, Supporting Information, the capacitance of the e‐skin was detected using a commercial capacitance digital converter (CDC) (AD7150, Analog Device Co. Ltd.). The CTD circuits were used for signal conditioning of the thermal sensors of distributed piezo‐thermic sensors on the e‐skin. The two Cr/Pt films of the thermal sensor were connected to the Wheatstone bridge. The CTD of the hot‐film of a thermal sensor was achieved by the feedback circuit. All signals of the e‐skin were sampled and collected using a microcontroller unit (MCU) (STM32L476, STMicroelectronics). The signal sampling cycle of the e‐skin was 5 ms.
Device Characterization and Experiment Measurement
Proximity detection was conducted by fixing the object or hand on a mechanized z‐axis stage (Handpi HLD) embedded with a displacement ruler. Temperature compensation and responses of the e‐skin were tested in a temperature‐controlled oven (OGH60, Thermo Fisher Co. Ltd.). Force and contact position measurements were conducted by applying force at different positions on the e‐skin using a mechanized z‐axis stage with a force gauge (Handpi HP‐1K).
The force sensitivity and measuring range of the e‐skin highly depended on the compositions of the PDMS cubes (supporting elements) and the piezo‐thermic material; therefore, these compositions were optimized. Figure S7a, Supporting Information, shows the Young's moduli of the PDMS cubes with different cross‐linker ratios, which indicates that the Young's modulus of the PDMS cube could be regulated by adjusting the cross‐linker ratio. A PDMS cube composed of two layers with Young's moduli of 163 kPa (cross‐linker ratio 28:1 by weight) and 407 kPa (cross‐linker ratio 25:1 by weight), respectively, was adopted to meet the requirements on both high sensitivity and broad measurement range; the thickness of each PDMS layer was 3 mm. Furthermore, the authors optimized the composition of the piezo‐thermic materials of the e‐skin. The experimental result (Figure S7b, Supporting Information) illustrates that properly increasing the porosity and the ratio of Ag could improve the measuring range of the piezo‐thermic sensor under large force. Therefore, the porosity and the ratio of Ag were optimized to be 70 vol% and 2.5 vol%, respectively.
To calibrate the force and position sensing of the e‐skin, forces ranging from 0 to 500 N were applied at 13 different positions on the e‐skin using a force gauge, and the corresponding signals of the e‐skin were recorded. About 60 different forces were applied at each position. The authors collected about 350 000 sets of experimental data to train the neural network using MATLAB and collected another dataset containing about 90 000 sets to evaluate the sensing performance. The numbers of hidden layers and neurons were both determined through optimization (Figure S4, Supporting Information). The learning rate was set to 0.01, and the sigmoid function was used as the activation function of the neurons. The optimized network contained three hidden layers, each with ten neurons.
An e‐skin was mounted on a homemade robot arm composed of a robot joint (RGM14, Kollmorgen Company). The multisensory signals of the e‐skin were sampled and transmitted to a PC via a communication port (COM). The force and contact position calculated by the neural network were displayed on a computer interface programmed in LabVIEW and simultaneously fed into the customized compliant control algorithm programmed using TwinCAT3. The control cycle of the robot was 5 ms.
Another e‐skin was mounted on a commercial 6‐DoF robot arm (EC66, ELITE Co. Ltd). The signals of the e‐skin were sampled and transmitted to a PC via COM, and then were processed to generate control commands by using the customized compliant control algorithm programmed in Python (Python 3.8.0). Then the commands were transmitted to the commercial robot controller‐box via local area network. The control cycle of the 6‐DoF robot arm was 10 ms.
Proximity Detection Principle of the Electronic Skin
As shown in Figure 1b, when an object or hand approached the e‐skin, the fringe electric field of the e‐skin was intercepted and shunted to the ground, which caused the capacitance of the e‐skin to decrease. The relationship between the capacitance of the e‐skin and the proximity distance could be expressed as:[ 45 ]
\[ C_e = C_0 - \frac{\varepsilon L W}{D} \tag{2} \]
where C_e was the capacitance of the e‐skin, C_0 was the baseline capacitance with no object nearby, L and W referred to the geometric length and width of the external object or hand, respectively, ε was a constant related to the electrical properties of the approaching object or hand, and D was the distance between the e‐skin and the approaching object or hand. The model indicated that the e‐skin had the capability to perceive proximity. It was noted that the proximity response also depended on the electrical properties of the approaching object[ 2 ] (Figure S8, Supporting Information).
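The trend of this proximity model, in which the capacitance falls as an approaching object intercepts more of the fringe field, can be sketched numerically. The functional form `C0 - eps*L*W/D` and every parameter value below are illustrative assumptions, not calibrated e‐skin properties.

```python
# Hypothetical proximity trend: capacitance C_e drops as distance D shrinks,
# modeled as C_e = C0 - eps*L*W/D. C0, eps, L, W are illustrative values.

def capacitance(D, C0=10.0, eps=0.05, L=8.0, W=8.0):
    """Capacitance (arbitrary pF scale) with an object at distance D (cm)."""
    return C0 - eps * L * W / D

distances = [1.0, 2.0, 5.0, 10.0, 50.0]
caps = [capacitance(d) for d in distances]
print(caps)  # recovers the baseline C0 as the object moves far away
```

A real readout would compare the measured capacitance against the baseline C_0 and use the drop as the proximity signal.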
Temperature Measurement and Compensation Principle of the Thermal Sensor on the Electronic Skin
As shown in Figure S1, Supporting Information, the hot‐film and cold‐film of a thermal sensor were connected into the Wheatstone bridge of the CTD circuit. Because the resistance of the cold‐film (≈600 Ω) was much larger than that of the hot‐film (≈60 Ω), the self‐heating power of the cold‐film could be ignored, so the cold‐film temperature was almost equal to the ambient temperature. The cold‐film resistance could be expressed as:
\[ R_c = R_{c0}\left(1 + \alpha_c T\right) \tag{3} \]
where R_c was the resistance of the cold‐film, R_c0 was its resistance at 0 °C, α_c was the temperature coefficient of resistance (TCR) of the cold‐film, and T was the ambient temperature. According to the CTD circuit, the resistance of the cold‐film could also be expressed as:
\[ R_c = \frac{\left(R_2 + R_3\right) U_c}{U_b - U_c} \tag{4} \]
where U_b was the excitation voltage of the bridge, U_c was the measured voltage across the cold‐film, and R_2 and R_3 were the fixed resistors in the cold‐film branch of the bridge (Figure S1, Supporting Information).
Combining Equations (3) and (4), the temperature T could be obtained as follows:
\[ T = \frac{\lambda - 1}{\alpha_c} \tag{5} \]
where the resistance ratio R_c/R_c0 was denoted as λ.
Furthermore, when the Wheatstone bridge was brought to a balanced state by the feedback circuit, the resistance relationship of the CTD circuit could be obtained:
\[ R_{h0}\left[1 + \alpha\left(T + \Delta T\right)\right] R_2 = R_1 \left[ R_{c0}\left(1 + \alpha T\right) + R_3 \right] \tag{6} \]
where ΔT was the temperature difference between the hot‐film and the ambient, R_h0 was the resistance of the hot‐film at 0 °C, α was used uniformly for the TCRs of the cold‐film and hot‐film as they were approximately equal, and R_1, R_2, and R_3 were the fixed resistors of the bridge (Figure S1, Supporting Information). Moreover, the resistance ratio of the bridge was set as:[ 43 , 47 ]
\[ \frac{R_1}{R_2} = \frac{R_{h0}}{R_{c0}} \tag{7} \]
Substituting Equation (7) into Equation (6), ΔT could be expressed as:
\[ \Delta T = \frac{R_3}{\alpha R_{c0}} \tag{8} \]
which indicated that the temperature difference between the hot‐film and the ambient was independent of the ambient temperature. The temperature differences of all thermal sensors were set to 20 °C.
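The readout chain above can be checked numerically. The bridge topology (cold film in series with fixed resistors R2 and R3) and all resistor values are assumptions for illustration; R3 is chosen here so the overheat works out to the 20 °C used in the text, and the TCR value is an assumed typical thin-film figure.

```python
# Numerical sketch of the CTD temperature readout under an assumed bridge
# topology. ALPHA, R2, R3 and the 5 V excitation are illustrative assumptions;
# R_C0 matches the ~600-ohm cold film quoted in the text.

ALPHA = 0.002            # assumed TCR of both films (1/degC)
R_C0 = 600.0             # cold-film resistance at 0 degC (ohm)
R2, R3 = 1200.0, 24.0    # assumed fixed bridge resistors (ohm)

def ambient_temperature(U_b, U_c):
    """Recover ambient T from the measured cold-film voltage U_c."""
    R_c = (R2 + R3) * U_c / (U_b - U_c)  # cold-film resistance from the divider
    lam = R_c / R_C0                     # lambda = R_c / R_c0
    return (lam - 1.0) / ALPHA           # T = (lambda - 1) / alpha_c

def delta_T():
    """Hot-film overheat, independent of ambient T: dT = R3 / (alpha * R_c0)."""
    return R3 / (ALPHA * R_C0)

# Round trip: pick an ambient T, simulate the divider voltage, recover T.
T_true = 25.0
R_c = R_C0 * (1 + ALPHA * T_true)
U_b = 5.0
U_c = U_b * R_c / (R2 + R_c + R3)
print(ambient_temperature(U_b, U_c), delta_T())
```

The round trip recovers the ambient temperature exactly, and `delta_T()` contains no T term, mirroring the stated independence of the overheat from the ambient temperature.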
The Compliance Control of Human–Robot Interaction for the Robot Equipped with Electronic Skin
As shown in Figure 3a, the authors modeled the robot arm as a virtual spring–damper system. When the human hand lightly touched the robot arm, the total torque of the kinematic system could be expressed as:
\[ \tau_{total} = \tau - \tau_{des} - k\,\Delta\theta - \mu\,\dot{\theta} \tag{9} \]
where τ_total was the total torque, τ was the interaction torque on the robot, τ_des was a set desired acting torque, θ̇ was the angular velocity of the robot arm, Δθ was the angle increment of the robot arm, and k and μ were the stiffness coefficient and damping coefficient of the spring–damper system, respectively. Under the quasi‐static state, the total torque was ≈0, so the angle increment of the robot arm could be determined as:
\[ \Delta\theta = K_k\, e_\tau - K_d\, \dot{\theta} \tag{10} \]
where e_τ = τ − τ_des was the torque deviation, K_k = 1/k, and K_d = μ/k. The coefficients τ_des, K_k, and K_d were set to 0.25, 3.5, and 0.012, respectively, through optimization trials to achieve compliant movements of the robot following the human hand, including pushing‐forward and stepping‐backward maneuvers.
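The compliance law can be sketched as a single control-step function. The gains below are the values given in the text; the sample torque and velocity inputs, and their units, are illustrative assumptions, and the real controller would read the interaction torque from the e‐skin each 5 ms cycle.

```python
# Sketch of the compliant angle-increment law: d_theta = K_k*(tau - tau_des) - K_d*theta_dot.
# Gains follow the text; the torque/velocity sample values are illustrative.

TAU_DES, K_K, K_D = 0.25, 3.5, 0.012

def delta_theta(tau, theta_dot):
    """Angle increment commanded for one control cycle."""
    e_tau = tau - TAU_DES            # torque deviation e_tau = tau - tau_des
    return K_K * e_tau - K_D * theta_dot

# A push stronger than the desired torque steps the joint forward,
# releasing (tau below tau_des) steps it back, and velocity is damped:
print(delta_theta(0.5, 0.0))         # positive increment (push forward)
print(delta_theta(0.0, 0.0))         # negative increment (step backward)
print(delta_theta(0.5, 10.0))        # damping reduces the increment
```

In each cycle the increment would be added to the current joint command, so the arm yields in the direction of the applied torque while the damping term suppresses abrupt motion.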
Conflict of Interest
The authors declare no conflict of interest.
Supporting information
Supporting Information
Supplemental Movie 1
Supplemental Movie 2
Supplemental Movie 3
Supplemental Movie 4
Supplemental Movie 5
Supplemental Movie 6
Supplemental Movie 7
Supplemental Movie 8
Supplemental Movie 9
Acknowledgements
G.Z.L. and S.Q.L. contributed equally to this work. The authors would like to thank Songlin Cai for the helpful advice on the 6‐DoF robot. This work was supported by the National Key Research and Development Program of China (Grant No. 2019YFB1310204), the National Natural Science Foundation of China (Grant No. 51735007), and the Beijing Natural Science Foundation (Grant No. 3191001).
Li G., Liu S., Mao Q., Zhu R., Multifunctional Electronic Skins Enable Robots to Safely and Dexterously Interact with Human. Adv. Sci. 2022, 9, 2104969. 10.1002/advs.202104969
Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
References
- 1. Cirillo A., Ficuciello F., Natale C., Pirozzi S., Villani L., IEEE Rob. Autom. Lett. 2016, 1, 41.
- 2. Tsuji S., Kohama T., IEEE Sens. J. 2020, 20, 15075.
- 3. Cencen A., Verlinden J. C., Geraedts J. M. P., IEEE/ASME Trans. Mechatron. 2018, 23, 1092.
- 4. Erickson Z., Collier M., Kapusta A., Kemp C. C., IEEE Rob. Autom. Lett. 2018, 3, 2245.
- 5. Fischinger D., Einramhof P., Papoutsakis K., Wohlkinger W., Mayer P., Panek P., Hofmann S., Koertner T., Weiss A., Argyros A., Vincze M., Rob. Auton. Syst. 2016, 75, 60.
- 6. Li W., Han Y., Wu J., Xiong Z., IEEE/ASME Trans. Mechatron. 2020, 25, 2565.
- 7. Althoff M., Giusti A., Liu S. B., Pereira A., Sci. Rob. 2019, 4, eaaw1924.
- 8. Robla‐Gómez S., Becerra V. M., Llata J. R., González‐Sarabia E., Torre‐Ferrero C., Pérez‐Oria J., IEEE Access 2017, 5, 26754.
- 9. Heo Y. J., Kim D., Lee W., Kim H., Park J., Chung W. K., IEEE Rob. Autom. Lett. 2019, 4, 740.
- 10. Haddadin S., Luca A. D., Albu‐Schäffer A., IEEE Trans. Rob. 2017, 33, 1292.
- 11. Pang G., Yang G., Heng W., Ye Z., Huang X., Yang H.‐Y., Pang Z., IEEE Trans. Ind. Electron. 2021, 68, 3303.
- 12. Liang J., Wu J., Huang H., Xu W., Li B., Xi F., IEEE Sens. J. 2020, 20, 3822.
- 13. Boutry C. M., Negre M., Jorda M., Vardoulis O., Chortos A., Khatib O., Bao Z. N., Sci. Rob. 2018, 3, eaau6914.
- 14. Bai N., Wang L., Wang Q., Deng J., Wang Y., Lu P., Huang J., Li G., Zhang Y., Yang J., Xie K., Zhao X., Guo C. F., Nat. Commun. 2020, 11, 209.
- 15. Núñez C. G., Navaraj W. T., Polat E. O., Dahiya R., Adv. Funct. Mater. 2017, 27, 1606287.
- 16. Phan S., Quek Z. F., Shah P., Shin D., Ahmed Z., Khatib O., Cutkosky M., presented at 2011 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, San Francisco, CA, USA, September 2011.
- 17. Gao Y., Yan C., Huang H., Yang T., Tian G., Xiong D., Chen N., Chu X., Zhong S., Deng W., Fang Y., Yang W., Adv. Funct. Mater. 2020, 30, 1909603.
- 18. Mousavi S., Howard D., Zhang F., Leng J., Wang C. H., ACS Appl. Mater. Interfaces 2020, 12, 15631.
- 19. Kim J., Lee M., Shim H. J., Ghaffari R., Cho H. R., Son D., Jung Y. H., Soh M., Choi C., Jung S., Chu K., Jeon D., Lee S. T., Kim J. H., Choi S. H., Hyeon T., Kim D. H., Nat. Commun. 2014, 5, 5747.
- 20. Hua Q. L., Sun J. L., Liu H. T., Bao R. R., Yu R. M., Zhai J. Y., Pan C. F., Wang Z. L., Nat. Commun. 2018, 9, 244.
- 21. Sun X., Sun J., Li T., Zheng S., Wang C., Tan W., Zhang J., Liu C., Ma T., Qi Z., Liu C., Xue N., Nano‐Micro Lett. 2019, 11, 57.
- 22. Wu X., Zhu J., Evans J. W., Lu C., Arias A. C., Adv. Funct. Mater. 2021, 31, 2010824.
- 23. Zhao S., Zhu R., Adv. Mater. 2017, 29, 1606151.
- 24. Li G., Liu S., Wang L., Zhu R., Sci. Rob. 2020, 5, eabc8134.
- 25. Zhao H., O'Brien K., Li S., Shepherd R. F., Sci. Rob. 2016, 1, eaai7529.
- 26. Hughes D., Lammie J., Correll N., IEEE Rob. Autom. Lett. 2018, 3, 1386.
- 27. Ward‐Cherrier B., Pestell N., Cramphorn L., Winstone B., Giannaccini M. E., Rossiter J., Lepora N. F., Soft Rob. 2018, 5, 216.
- 28. Yuan W., Dong S., Adelson E. H., Sensors 2017, 17, 2762.
- 29. Wu Y. Z., Liu Y. W., Zhou Y. L., Man Q. K., Hu C., Asghar W., Li F. L., Yu Z., Shang J., Liu G., Liao M. Y., Li R. W., Sci. Rob. 2018, 3, eaat0429.
- 30. Yan Y., Hu Z., Yang Z., Yuan W., Song C., Pan J., Shen Y., Sci. Rob. 2021, 6, eabc8801.
- 31. Ohta P., Valle L., King J., Low K., Yi J., Atkeson C. G., Park Y.‐L., Soft Rob. 2017, 5, 204.
- 32. Jin T., Sun Z., Li L., Zhang Q., Zhu M., Zhang Z., Yuan G., Chen T., Tian Y., Hou X., Lee C., Nat. Commun. 2020, 11, 5381.
- 33. Yuan Z., Shen G., Pan C., Wang Z. L., Nano Energy 2020, 73, 104764.
- 34. Rosenstrauch M. J., Krüger J., presented at 2017 3rd Int. Conf. on Control, Automation and Robotics (ICCAR), Nagoya, Japan, April 2017.
- 35. International Organization for Standardization, ISO, Geneva, Switzerland 2016.
- 36. You I., Mackanic D. G., Matsuhisa N., Kang J., Kwon J., Beker L., Mun J., Suh W., Kim T. Y., Tok J. B.‐H., Bao Z., Jeong U., Science 2020, 370, 961.
- 37. Guan Y.‐S., Thukral A., Zhang S., Sim K., Wang X., Zhang Y., Ershad F., Rao Z., Pan F., Wang P., Xiao J., Yu C., Sci. Adv. 2020, 6, eabb3656.
- 38. Sundaram S., Kellnhofer P., Li Y., Zhu J.‐Y., Torralba A., Matusik W., Nature 2019, 569, 698.
- 39. Cheng G., Dean‐Leon E., Bergner F., Olvera J. R. G., Leboutet Q., Mittendorfer P., Proc. IEEE 2019, 107, 2034.
- 40. Dahiya R. S., Mittendorfer P., Valle M., Cheng G., Lumelsky V. J., IEEE Sens. J. 2013, 13, 4121.
- 41. Lee W. W., Tan Y. J., Yao H., Li S., See H. H., Hon M., Ng K. A., Xiong B., Ho J. S., Tee B. C. K., Sci. Rob. 2019, 4, eaax2198.
- 42. Lee W. W., Kukreja S. L., Thakor N. V., presented at 2015 IEEE Biomedical Circuits and Systems Conf. (BioCAS), Atlanta, GA, USA, October 2015.
- 43. Zhao S., Zhu R., Adv. Mater. Technol. 2018, 3, 1800056.
- 44. Zhao S., Zhu R., Fu Y., ACS Appl. Mater. Interfaces 2019, 11, 4588.
- 45. Guan F., Xie Y., Wu H., Meng Y., Shi Y., Gao M., Zhang Z., Chen S., Chen Y., Wang H., Pei Q., ACS Nano 2020, 14, 15428.
- 46. Zhang C., Liu S., Huang X., Guo W., Li Y., Wu H., Nano Energy 2019, 62, 164.
- 47. Que R. Y., Zhu R., Wei Q. Z., Cao Z., Meas. Sci. Technol. 2011, 22, 085404.
- 48. Lau C., Neural Networks: Theoretical Foundations and Analysis, IEEE Press, Piscataway, NJ 1991.
- 49. Flacco F., Kröger T., Luca A. D., Khatib O., presented at 2012 IEEE Int. Conf. on Robotics and Automation, Saint Paul, MN, USA, May 2012.
- 50. Kotani A., Tellex S., presented at 2019 Int. Conf. on Robotics and Automation (ICRA), Montreal, Canada, May 2019.
