Abstract
Force myography (FMG) is an appealing alternative to traditional electromyography in biomedical applications, mainly due to its simpler signal pattern and immunity to electrical interference. Most FMG sensors, however, send data to a computer for further processing, which restricts user mobility and, thus, the chances of practical application. In this sense, this work proposes to redesign a typical optical fiber FMG sensor with smaller, portable components. Moreover, all data acquisition and processing routines were migrated to a Raspberry Pi 3 Model B single-board computer, ensuring comfort of use and portability. The sensor was successfully demonstrated with 2 input channels for the classification of 9 postures, with an average precision and accuracy of ~99.5% and ~99.8%, respectively, using a feedforward artificial neural network with 2 hidden layers and a competitive output layer.
Keywords: Force myography, optical fiber sensor, user interface, gesture recognition, integrated sensor
Introduction
Hand posture detection is a widely investigated challenge that finds application in several research fields, such as assistive technology, prosthesis control, user interfaces, and the game industry.1-4 The earliest attempts to monitor the human hand came as glove-like sensors, in which sensing elements sewn onto a wearable garment measure the rotation between the finger bones.5-7 The growth in processing capability and memory of modern computers, in turn, led to another well-established approach, namely, computer vision–based optical tracking. In this case, one or multiple external cameras track the user's hand, possibly with the aid of markers, and the posture information is recovered from the acquired images.8-10
Beyond these approaches, myographic sensors are popular in medical applications (eg, rehabilitation and prosthesis control) because they recover posture information from muscle activity and thus can identify not only the posture itself but also the motion volition.11-14 These sensors can be further subdivided into different families according to their principle of operation. For instance, noninvasive surface electromyography (sEMG) sensors track the electrical signal required to excite the muscles into reproducing a given posture; although they have already been demonstrated for prosthesis control, it is well known that sEMG sensors are susceptible to muscular fatigue, sweating, skin fat, electromagnetic interference, and electrode displacement, and do not offer intuitive control.15-17 Then there is optical myography (OMG), which applies computer vision techniques to monitor the forearm deformation caused by muscular activity rather than the hand itself. This kind of sensor stands out mainly for its feasibility and low cost; nevertheless, it suffers from varying lighting conditions and greatly reduces user mobility, because the forearm must remain inside the camera field of view.18-20 Besides electrical signals and visual forearm deformation, it is also possible to monitor muscular activity by measuring the deformation of the forearm cross-section with pressure or force sensors, an approach known as force myography (FMG). As this technique relies solely on mechanical principles, it is expected to be immune to sweating and skin fat, unlike sEMG, and to provide acceptable mobility, as there is no need to fix the forearm position in space.21,22
Most FMG sensors consist of an array of force-sensing resistors (FSRs) distributed around the forearm by means of a strap, bracelet, or orthosis.23,24 Although this setup offers appealing light weight and low production cost, FSRs are susceptible to external electromagnetic fields, which is problematic depending on the application. Therefore, alternatives have been proposed and successfully demonstrated in which the resistive sensors are replaced with optical fiber force sensors. Among the advantages of this alternative are the light weight, the low cost, the wide operating bandwidth, and the immunity to electromagnetic interference, so the resulting FMG sensor does not suffer interference from the acquisition and processing circuitry.25 Although optical fiber–based FMG systems have previously been demonstrated to identify various postures, the proposed sensor designs usually had limited portability, as all processing was performed on a computer, which drastically restricts the real-life applications of such sensors.26,27
This work thus proposes the development and demonstration of a portable FMG sensor obtained by migrating all acquisition and processing routines from a computer to an embedded processing unit. Moreover, the traditional components of an optical fiber sensor, such as the light-emitting source and the photodetectors, were carefully chosen to minimize the overall system size, improving the sensor portability while keeping the production cost low.
Materials and Methods
Sensor hardware
The sensor hardware is shown in Figure 1 with all its components: light-emitting sources, light detectors, the data acquisition and processing unit, and a mechanism that correlates muscle activity with the light intensity guided by the optical fiber. The sensor was designed for 2-channel operation, each channel having its own light emitter-detector pair on one of the circuit boards. For the sake of visualization, however, only 1 of the channels is connected in Figure 1, as the other shares an identical arrangement.
Figure 1. Designed sensor system: (A) photo and (B) schematics. For the sake of simplicity, only channel 2 was connected to an optical fiber and the corresponding optomechanical transducer. Channel 1 would be connected likewise. LED indicates light-emitting diode; MMF, multimode fiber.
In this project, the LED (light-emitting diode) model HFBR-1414T (80 µW, 820 nm) and the matching photodetector HFBR-2416T (7 mV/µW, 820 nm) of the Agilent HFBR-0400 series were chosen for their small size, light weight, and high efficiency, which preserve both the sensor reliability and the user mobility. The LED emitter is linked by a silica multimode optical fiber (MMF, 62.5/125 µm core/cladding diameters) to the corresponding photodetector, which converts light intensity into an analog voltage. The combination of LED and MMF provides a wide operating bandwidth and also eliminates the characteristic modal noise of MMF, making the sensor robust against external disturbances from natural body movements. Should the sensor prove sensitive to these movements, however, macrobending losses caused by fiber curvature can be avoided by encapsulating the bare waveguides inside protective layers, as in commercial patch cables.28,29 The voltage signal from the photodetector is boosted by a subsequent amplifier circuit, and high-frequency components are removed by a low-pass filter before the signal is converted into the digital domain. The signal is then ready for the acquisition and processing unit (a Raspberry Pi 3 Model B single-board computer running Linux, comprising a 64-bit Broadcom BCM2837 quad-core chipset at 1.2 GHz, 1 GB RAM, and wireless LAN and Bluetooth modules). As can be observed in Figure 1, no additional power source is required by the acquisition and amplifier circuits, as they are powered by the Raspberry Pi. Hence, the whole setup can be connected to a power plug or a small portable power bank, as the Raspberry Pi 3 is supplied through a 5 V, 2.5 A micro USB input.
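As an illustration of the digitization stage, the snippet below reads one voltage sample per channel on the Raspberry Pi. The article does not specify the analog-to-digital converter, so the MCP3008-style SPI ADC and its wiring are assumptions made here for the sake of a concrete, minimal sketch.

```python
# Minimal acquisition sketch, assuming an MCP3008-style 10-bit SPI ADC
# attached to the Raspberry Pi (the actual converter is not specified in
# the article).
import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                 # SPI bus 0, chip-select 0
spi.max_speed_hz = 1_350_000

def read_channel(ch, vref=3.3):
    """Read one sample from ADC channel `ch` and return it in volts."""
    # MCP3008 framing: start bit, single-ended mode + channel, padding byte
    raw = spi.xfer2([1, (8 + ch) << 4, 0])
    value = ((raw[1] & 3) << 8) | raw[2]   # assemble the 10-bit result
    return vref * value / 1023.0
```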
An MMF guides light from the LED to its corresponding receiver, and external disturbances cause optical loss along the fiber. In this sense, a device was specially developed to convert the forearm radial contraction into light loss information. The optomechanical transducer, shown in Figure 2, was designed in Inventor® software, printed in acrylonitrile butadiene styrene (ABS), and is composed of 2 halves. The upper half has a corrugated inner side, with 6 evenly spaced triangular bumps and 2 holes near the edges, and an outer side composed of 3 identical hollow loci, through which 1 or more Velcro straps pass to hold the whole transducer around the user's arm. The bottom half also has a corrugated inner side, with 5 evenly spaced triangular bumps, and a plain outer side that touches the user's skin. The corrugated side of the bottom half is elevated at the edges, forming a U-shape with protuberances that fit the holes in the inner side of the upper half, so the transducer can be easily assembled and disassembled. The triangular bumps of the 2 halves are disposed so as to intercalate with each other, with a distance of 0.5 cm in between when the transducer is assembled, and they convey the pressure applied over the transducer to the MMF running between the plates, causing light loss by microbending.28 Finally, to avoid undesirable motions of the optical fiber, such as transversal sliding, 2 small openings in the protuberances of the bottom half lock the optical fiber parallel to the transducer's main axis, keeping it in the most sensitive area during operation.
Figure 2. Optomechanical transducer with upper and bottom halves aligned as they would be assembled: (A) top view and (B) side view.
Sensor software
The Raspberry Pi 3 captures the signals from the sensor at an acquisition rate of 100 Hz, preprocesses them, and then recovers the hand posture information. The sampling rate was set at 100 Hz because little posture variation is expected within an interval of 0.1 seconds: the idea is not to process information related to dither or spasms, but the average force level produced by the forearm muscles in the stationary phase, so higher sampling rates would increase the processing load rather than the resolution. The first preprocessing stage is a fourth-order digital Butterworth low-pass filter with the cutoff frequency set at 200 Hz. This value was found empirically as the one that best removed eventual noise from the original signal while keeping the low-frequency components resulting from muscular activity. Subsequently, the filtered signal is either submitted to a sliding window of 5-second length for sensor calibration or normalized for posture identification.
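A minimal sketch of this filtering stage with SciPy is given below. Note that a digital filter's cutoff must lie below the Nyquist frequency (half the sampling rate), so the fs and fc values here are illustrative placeholders rather than the article's exact figures, and the zero-phase filtfilt call is an assumption about how the filter is applied.

```python
# Preprocessing sketch: 4th-order Butterworth low-pass, as described in the
# text. fs and fc are placeholders; SciPy requires fc < fs / 2.
from scipy.signal import butter, filtfilt

def preprocess(samples, fs=100.0, fc=20.0):
    """Low-pass filter a 1-D array of voltage samples."""
    b, a = butter(4, fc, fs=fs, btype="low")  # 4th-order Butterworth design
    return filtfilt(b, a, samples)            # zero-phase filtering
```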
To recover the posture information, an artificial neural network (ANN) was implemented in Python using the Keras API with a TensorFlow backend.30
A feedforward model with 2 input nodes, 9 output nodes, and 2 fully connected hidden layers (Figure 3) was selected for posture classification. The input layer is fed with data from both channels of the sensor, and the first hidden layer has 20 fully connected neurons, which in turn are fully connected to all 50 neurons of the second hidden layer. The latter is connected to the competitive output layer through a sigmoid function, which expresses each output as the likelihood of the input signal belonging to one of the 9 classes, and the classifier returns the class with the highest probability. The hidden layers are activated by the Rectified Linear Unit (ReLU) function, which showed better performance than the other activation functions provided by the Keras API during the initial tests.
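This architecture can be expressed compactly in Keras; the sketch below follows the layer sizes and activations stated above, while the explicit Input layer and the argmax readout are implementation details assumed here.

```python
# Sketch of the described classifier: 2 inputs, hidden layers of 20 and 50
# ReLU neurons, and 9 sigmoid outputs (postures A-H plus N).
from tensorflow import keras
from tensorflow.keras import layers

def build_model():
    return keras.Sequential([
        layers.Input(shape=(2,)),               # channels 1 and 2
        layers.Dense(20, activation="relu"),    # first hidden layer
        layers.Dense(50, activation="relu"),    # second hidden layer
        layers.Dense(9, activation="sigmoid"),  # per-posture likelihoods
    ])

# The competitive readout picks the most likely class:
# posture = build_model().predict(x).argmax(axis=1)
```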
Figure 3. Classifier ANN model. I1 and I2 indicate the inputs from channels 1 and 2, respectively; P, the identified posture, corresponding to the one with the highest probability among all possible postures (A-H plus N); ANN, artificial neural network.
The classifier was calibrated via supervised learning, ie, the model parameters are adjusted during the training session as a function of the classification error between the estimated and the true class. For better training efficiency, the Adam optimization error backpropagation algorithm was adopted. A calibration set of ~9600 samples was collected from each user, and these samples were randomly split into training and validation sets at a ratio of 0.9/0.1: while the former was used to train the model, the latter was used to evaluate its performance based on a squared-error metric. A reduced classification error, however, often means the obtained model parameters are overfitted to the calibration samples, so the final classifier would fail on general sample sets; therefore, all training sessions were limited to a maximum of 100 epochs. Moreover, as the classifier performance depends on the composition of the training and validation samples, 10-fold cross-validation was implemented to better evaluate it, so that every calibration sample serves as validation data at least once.31 As the classifier parameters are obtained in the same way in this analysis as in practical operation, the performance calculated from the 10-fold cross-validation can be extended to other applications.
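The training protocol can be sketched as follows, reusing the build_model helper from the previous snippet; the squared-error loss mirrors the metric named above, while the shuffling and the categorical-accuracy metric are assumptions of this sketch.

```python
# Training/validation sketch: Adam optimizer, squared-error loss, 100-epoch
# cap, and 10-fold cross-validation. X: (n, 2) input samples; y: (n, 9)
# one-hot posture labels; build_model: see the previous snippet.
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(build_model, X, y, n_splits=10, epochs=100):
    scores = []
    for train_idx, val_idx in KFold(n_splits, shuffle=True).split(X):
        model = build_model()
        model.compile(optimizer="adam", loss="mse",
                      metrics=["categorical_accuracy"])
        model.fit(X[train_idx], y[train_idx], epochs=epochs, verbose=0)
        # evaluate() returns [loss, categorical_accuracy]
        scores.append(model.evaluate(X[val_idx], y[val_idx], verbose=0)[1])
    return float(np.mean(scores)), float(np.std(scores))
```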
During the calibration session, the sensor records samples of the 9 supported postures ("Experimental protocol" subsection) by running 6 separate repetition sets, in each of which all 9 postures were performed and held for 2 seconds each, at a sampling rate of 100 Hz. Because of the interval needed for the transition from one posture to another, the samples in between do not clearly belong to either. Therefore, the first and last few milliseconds of each 2-second interval were disregarded, totaling 1067 samples per posture and 9603 samples in the full calibration set.
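For reference, these totals are consistent with the acquisition settings (the exact trim length per interval is inferred here, not stated in the text):

6 sets × 2 s × 100 Hz = 1200 raw samples per posture
1200 − 1067 = 133 discarded samples per posture, ie, ~22 samples (~0.22 s) trimmed from each of the 6 intervals
9 postures × 1067 samples = 9603 samples in the calibration set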
The recorded calibration samples cover all postures that the sensor supports, so it is possible to visualize the behavior of the FMG signal patterns across the postures by fitting these samples into a joint uniform distribution.32 Moreover, this analysis provides a crucial piece of information: the output voltages of the transducers for the most and the least loaded configurations. As the former corresponds to the minimum and the latter to the maximum value the transducers can measure, they are hereafter called the 0-level and 1-level input values.
Although it is known beforehand which calibration samples belong to which posture, external noise could produce a sudden voltage rise or drop. Considering that sudden and drastic posture changes hardly happen within the 0.1 seconds between one sample and the next, a 5-second sliding window collects multiple samples at once, so the algorithm can evaluate whether the sequence of predicted postures is consistent with the expected posture variation over that interval and can choose to disregard a calibration sample if it is not.
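The article does not detail the exact consistency rule, so the sketch below implements one plausible reading: within each window, isolated samples whose label disagrees with the surrounding majority are treated as noise and dropped.

```python
# Hypothetical window-consistency check (the majority criterion is an
# assumption): discard samples whose integer posture label deviates from
# the dominant posture within the 5 s window (500 samples at 100 Hz).
import numpy as np

def filter_window(samples, labels):
    majority = np.bincount(labels).argmax()  # dominant posture in the window
    keep = labels == majority                # flag the consistent samples
    return samples[keep], labels[keep]
```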
Once the sensor is calibrated, the classifier can recover the posture information from a given sample. To reject eventual uniform voltage shifts from the power supply, all input samples are rescaled to the range of 0 to 1 while preserving the differential voltage-level behavior among the postures, so the classifier always has a consistent data set to work on. The minimum and maximum reference values correspond to the 0-level and 1-level values calculated during the calibration.
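A minimal sketch of this rescaling, with the per-channel reference levels extracted from the calibration samples as described above:

```python
# Input scaling sketch: derive the 0-level/1-level references per channel
# from the calibration samples, then map every new sample to [0, 1].
import numpy as np

def reference_levels(calibration):            # calibration: (n_samples, 2)
    return calibration.min(axis=0), calibration.max(axis=0)

def normalize(sample, level0, level1):
    return (sample - level0) / (level1 - level0)
```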
Experimental protocol
The proposed sensor was tested and validated for 9 postures, shown in Figure 4A, selected for their common usage in gesture recognition research. Posture N corresponds to the neutral position, with both the hand and the wrist relaxed. Posture A is the closed fist and posture B is the open hand, with the wrist kept relaxed in both. Postures C and D, in turn, are the wave-in and wave-out positions, with extended thumb and fingers. Posture E is similar to posture A, with flexed fingers but an extended thumb. Posture F is characterized by a relaxed wrist and a pointing index finger. Likewise, posture G has a relaxed wrist, with the thumb, ring, and pinky fingers flexed. Finally, posture H is similar to posture B, except that it draws the thumb and all fingers together.
Figure 4. Computer-generated environment: (A) postures A to H plus N of the experimental protocol and (B) virtual reality room for visual feedback.
Five healthy volunteers, with an average age of 28.4 ± 6.3 years and the limb dimensions shown in Table 1, were asked to sit comfortably on a chair and perform the 9 postures in a predefined shuffled sequence, holding each posture for ~2 seconds. To aid the volunteers, a video track with the selected sequence was shown to them on a computer screen, so they easily knew which posture to perform next. The video was created inside a virtual reality environment specially developed for this sensor, which can be further explored as visual feedback of the sensor output (Figure 4B). The experimental procedures were duly explained to the volunteers beforehand, and all experiments were conducted following the Ethical Committee recommendations (CAAE 17283319.7.0000.5404).
Table 1. Limb dimensions of the volunteers.
| Subject | L (cm) | C1 (cm) | C2 (cm) |
|---|---|---|---|
| 1 | 23.1 ± 0.3 | 25.8 ± 0.3 | 17.3 ± 0.2 |
| 2 | 24.9 ± 0.4 | 29.9 ± 0.3 | 18.7 ± 0.2 |
| 3 | 24.0 ± 0.2 | 35.0 ± 0.1 | 21.2 ± 0.2 |
| 4 | 21.1 ± 0.1 | 35.1 ± 0.2 | 22.6 ± 0.2 |
| 5 | 25.0 ± 0.1 | 32.0 ± 0.3 | 20.2 ± 0.3 |
L: length from the elbow to the wrist; C1: circumference of the forearm where the transducer T1 is attached; C2: circumference of the forearm where the transducer T2 is attached.
Two transducers were placed on the user's forearm, 1 per channel (Figure 5). Transducer T1 was attached to the posterior side of the forearm and mainly monitors the activity of the flexor sublimis digitorum and flexor profundus digitorum muscles, responsible for finger flexion and wrist flexion/extension. The effect of the extensor digitorum, responsible for finger and wrist extension, though subtle, might be sensed as well. Transducer T2, in turn, was attached to the anterior side of the forearm, near the wrist, and mainly monitors the flexor pollicis longus, responsible for thumb flexion.
Figure 5. Positions of transducers T1 and T2 on the user forearm.
Results and Discussion
Sensor validation
First, the sensor sensitivity was evaluated by monitoring the voltage signals from channels 1 and 2, connected to transducers T1 and T2, respectively, while the volunteer performed the sequence of postures. Figure 6 shows the collected signals after normalization and low-pass filtering. As can be observed, each posture is described by a distinct behavior of the pair of transducers, so the sensor is indeed sensitive to posture changes. One can also note that the first few samples in each interval form sudden spikes, after which the signal stabilizes. This behavior is common to both channels and was expected, as it characterizes the transition from the previous posture to the next.
Figure 6. Normalized voltage signals captured by channels 1 and 2 for postures A-H plus N.
Analyzing the stabilized voltage curves in Figure 6, one notes that both T1 and T2 showed the highest values for posture N. This is natural, as posture N is the relaxed position and therefore applies no additional load upon the transducers.
The closed fist activates all flexors at once, which is reflected in the voltage drop at both T1 and T2 for posture A. The larger drop observed on T1 is coherent with the literature, as the load over it is increased by the contraction of both finger flexors, whereas T2 monitors only the thumb flexor. Similarly, the transducers showed the same behavior for posture E, which differs from posture A only in the thumb, and the voltage levels were also alike. Although the thumb is not flexed in posture E, the voltage levels on T1 and T2 were slightly lower than those for posture A, probably because of the thumb extensors on the posterior side of the forearm, one near the wrist and another at the middle of the forearm length. These muscles are not directly under the transducers; nevertheless, their contraction rearranges the muscle layers around the forearm and causes a subtle drop in the sensor output compared with posture A.
Transducer T1 showed a higher output than T2 for posture B, mainly because all the finger flexors it monitors were relaxed, which explains why T1 showed nearly no voltage drop. The T2 signal, in contrast, showed a significant drop as a consequence of the thumb and finger extensors, all found on the back of the forearm. Posture H is comparable with posture B, as both showed the same level of response from T2. Transducer T1, however, showed a voltage drop for posture H due to the adduction of the digits. Postures C and D held the thumb and fingers in the same configuration as posture H; nonetheless, the voltage signals showed very distinct behavior. This can be explained by the wrist, which is no longer kept in a neutral position but is flexed or extended. Therefore, the signal from T1 is significantly reduced for both, and the levels on T2 adjust according to the performed posture.
As posture F has the thumb and 3 fingers flexed, its output is expected to resemble postures A and E. Indeed, the signal from T1 is low due to the flexed fingers, showing that the extended index finger alone was not enough to reduce the load over T1. Moreover, the transition interval apart, the T1 signal stabilizes at about the same level as in posture E, and the signal from T2 showed a subtle drop due to the flexed thumb and extended index finger, as both movements are controlled by muscles near the wrist. Posture G, compared with posture F, showed a drop in the T2 signal because of the greater degree of thumb flexion: whereas the thumb tip rested on the flexed middle finger in posture F, it now rests on the ring finger. The signal from T1, in turn, showed a significant rise as a result of the combined extension of the index and middle fingers.
Posture classification
The experimental samples from all volunteers were fed into the classifier, and the predictions were compared with the true condition. The classifier performance is shown in a confusion matrix, wherein each row and each column correspond to the predicted and the actual posture, respectively (Figure 7). The main diagonal, consequently, indicates the true positive classifications, whereas all other matrix elements refer to false-positive classifications. The results in Figure 7 are the average over all experimental samples from all volunteers. Each matrix element holds the percentage of samples predicted as a given posture, according to its row, among those actually belonging to a given posture, according to its column. Matrix elements whose numeric value is not explicitly shown are zeros, for the sake of simplicity.
Figure 7. Confusion matrix of the posture classifier, averaged over all subjects. The numbers indicate the percentages of true and false positive classifications. Matrix elements without an explicit numeric value equal zero.
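Under that reading of the matrix (rows as predictions, columns normalized per actual posture), the evaluation can be sketched with scikit-learn; the transposition and percentage normalization are the assumptions that map the library output onto the layout of Figure 7.

```python
# Sketch of the confusion-matrix evaluation: rows are predicted postures,
# columns are actual postures, each column normalized to percentages.
import numpy as np
from sklearn.metrics import confusion_matrix

def percent_confusion(y_true, y_pred, n_classes=9):
    cm = confusion_matrix(y_true, y_pred, labels=list(range(n_classes))).T
    return 100.0 * cm / cm.sum(axis=0, keepdims=True)
```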
From Figure 7, one notices that the classifier showed an acceptable performance for all 9 postures of the experimental protocol, 6 of which had all samples correctly classified. Posture F had the lowest per-class precision, although still with 97.7% true positive classifications. A small percentage of posture A samples were misclassified as posture F and vice versa, which is understandable considering the similarity between the T1 and T2 outputs for these postures. On the other hand, a small percentage of posture E predictions actually belonged to posture B, even though these postures show very distinct T1 and T2 behaviors. The misclassified samples probably belonged to the transition interval, which caused ambiguity before the transducer signal could stabilize into the characteristic pattern.
Finally, considering the numbers of true and false classifications, the classifier was evaluated for the overall precision and accuracy across all users and postures, reaching an average precision of 99.5 ± 0.87% and an accuracy of 99.8 ± 0.23%, proving the designed sensor to be valid and reliable.
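The exact averaging scheme is not stated, so the sketch below shows one plausible computation of these figures: macro-averaged precision over the 9 classes together with the overall classification accuracy.

```python
# Overall metrics sketch (macro-averaging across classes is an assumption
# about how the per-class values were aggregated).
from sklearn.metrics import accuracy_score, precision_score

def overall_metrics(y_true, y_pred):
    precision = precision_score(y_true, y_pred, average="macro")
    accuracy = accuracy_score(y_true, y_pred)
    return precision, accuracy
```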
One must remember, however, that the sensor must be recalibrated whenever it is put on by the user, even if a calibration has already been performed for the same user before, because one cannot guarantee that the transducers are placed exactly on the same spot as in the last calibration. This procedure is essential to keep the sensor's accuracy and precision high, as the FMG signal pattern is susceptible to user-dependent physical characteristics, such as skin thickness, muscular mass, and limb dimensions, despite anatomical similarities.23 Given that the sensor was calibrated for each of the 5 test subjects, the final precision and accuracy rates were similar for all of them despite the differences in their limb dimensions, as listed in Table 1. Had a universal calibration set been adopted for the volunteers, the sensor would likely have failed to identify the performed postures.
Although the calibration must be repeated for every user and every session, it is enough to run it once until the sensor is taken off. During the experiments, it was observed that a calibration session takes on average 40 minutes to complete, as it trains and computes the optimal parameters of the posture classifier. Once they are found, however, the sensor can operate smoothly at the 100 Hz sampling rate, because the preceding acquisition and prefiltering routines can be performed in a short time by the analog hardware circuits or by mathematical operations on the Raspberry Pi 3. Considering also that it is quite difficult for a human being to move drastically within 0.1 seconds, the established acquisition rate is sufficient for future real-time applications.
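Combining the hypothetical pieces sketched in the previous sections (read_channel, normalize, the reference levels, and the trained model), the 100 Hz operating loop could look as follows; the fixed-period scheduling via time.monotonic is an implementation detail assumed here.

```python
# Hypothetical 100 Hz inference loop reusing the earlier sketches:
# read_channel(), normalize(), level0/level1, and the trained `model`.
import time
import numpy as np

PERIOD = 0.01                        # 10 ms -> 100 Hz
next_tick = time.monotonic()
while True:
    sample = np.array([read_channel(0), read_channel(1)])
    x = normalize(sample, level0, level1).reshape(1, -1)
    posture = model.predict(x, verbose=0).argmax()   # class index (A-H, N)
    next_tick += PERIOD
    time.sleep(max(0.0, next_tick - time.monotonic()))
```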
Conclusions
An optical fiber–based FMG sensor to recover hand postures was demonstrated in this work. Low cost and user mobility are the highlights of the proposed design, as the transducer of each channel can be easily fabricated by 3-dimensional printing, and the processing unit, the LEDs, and the photodetectors have a reduced, portable size. The final classifier showed an average precision and accuracy of ~99.5% and ~99.8%, respectively. These values were obtained across 9 postures and validate the sensor's feasibility with only 2 channels of operation. It is concluded, thus, that the proposed system has potential uses in indoor and outdoor environments as a user interface in assistive technology and virtual reality applications.33,34 Further developments will focus on the identification of dynamic gestures, including elbow and shoulder motions, as one hardly keeps them static in practical applications.
Footnotes
Funding:The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported in part by the Sao Paulo Research Foundation (FAPESP) under Grant 2017/25666-2, in part by CNPq, and in part by CAPES under Finance Code 001.
Declaration of conflicting interests:The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Author Contributions: E.F. conceived the original idea and the experiments. P.M.L. fabricated the optomechanical transducers. M.K.G. and W.H.A.S. conducted the experiments. Y.T.W. analyzed the results and wrote the paper. All authors discussed the results and reviewed the manuscript.
ORCID iDs: Yu Tzu Wu https://orcid.org/0000-0002-2623-2526; Eric Fujiwara https://orcid.org/0000-0001-8169-9738
References
- 1. Bouteraa Y, Abdallah IB, Elmogy AM. Training of hand rehabilitation using low cost exoskeleton and vision-based game interface. J Intell Robot Syst. 2019;96:31-47.
- 2. Cho E, Chen R, Merhi LK, Xiao Z, Pousett B, Menon C. Force myography to control robotic upper extremity prostheses: a feasibility study. Front Bioeng Biotechnol. 2016;4:18.
- 3. Murthy GRS, Jadon RS. A review of vision based hand gestures recognition. Int J Inf Technol Knowl Manag. 2009;2:405-410.
- 4. Rautaray SS, Agrawal A. Interaction with virtual game through hand gesture recognition. Paper presented at: 2011 IEEE MSPCT; December 17-19, 2011; Aligarh, India. https://ieeexplore.ieee.org/document/6150485
- 5. Dipietro L, Sabatini AM, Dario P. A survey of glove-based systems and their applications. IEEE Trans Syst Man Cybern Syst. 2008;38:461-482.
- 6. Chossat JB, Tao Y, Duchaine V, Park Y-L. Wearable soft artificial skin for hand motion detection with embedded microfluidic strain sensing. Paper presented at: 2015 IEEE ICRA; May 26-30, 2015; Seattle, WA. https://ieeexplore.ieee.org/document/7139544
- 7. Silva AF, Gonçalves AF, Mendes PM, Correia JH. FBG sensing glove for monitoring hand posture. IEEE Sens J. 2011;11:2442-2448.
- 8. Rautaray SS, Agrawal A. Vision based hand gesture recognition for human computer interaction: a survey. Artif Intell Rev. 2015;43:1-54.
- 9. Kurakin A, Zhang Z, Liu Z. A real time system for dynamic hand gesture recognition with a depth sensor. Paper presented at: 20th EUSIPCO; August 27-31, 2012; Bucharest, Romania. https://ieeexplore.ieee.org/document/6333871
- 10. Chen FS, Fu CM, Huang CL. Hand gesture recognition using a real-time tracking method and hidden Markov models. Image Vision Comput. 2003;21:745-758.
- 11. Craelius W. The bionic man: restoring mobility. Science. 2002;295:1018-1021.
- 12. Castellini C, Artemiadis P, Wininger M, et al. Proceedings of the first workshop on peripheral machine interfaces: going beyond traditional surface electromyography. Front Neurorobot. 2014;8:22.
- 13. Rasouli M, Ghosh R, Lee WW, Thakor NV, Kukreja S. Stable force-myography control of a prosthetic hand using incremental learning. Paper presented at: IEEE 37th EMBC; August 25-29, 2015; Milan, Italy. https://ieeexplore.ieee.org/document/7319474
- 14. Xiao ZG, Elnady AM, Menon C. Control an exoskeleton for forearm rotation using FMG. Paper presented at: IEEE 5th BIOROB; August 12-15, 2014; Sao Paulo, Brazil. https://ieeexplore.ieee.org/document/6913842
- 15. Ravindra V, Castellini C. A comparative analysis of three non-invasive human-machine interfaces for the disabled. Front Neurorobot. 2014;8:24.
- 16. Naik GR, Al-Timemy AH, Nguyen HT. Transradial amputee gesture classification using an optimal number of sEMG sensors: an approach using ICA clustering. IEEE Trans Neural Syst Rehabil Eng. 2016;24:837-846.
- 17. Ferigo D, Merhi LK, Pousett B, Xiao ZG, Menon C. A case study of a force-myography controlled bionic hand mitigating limb position effect. J Bionic Eng. 2017;14:692-705.
- 18. Nissler C, Mouriki N, Castellini C. Optical myography: detecting finger movements by looking at the forearm. Front Neurorobot. 2016;10:3.
- 19. Wu YT, Fujiwara E, Suzuki CK. Optical myography sensor for gesture recognition. Paper presented at: IEEE 15th AMC; March 9-11, 2018; Tokyo, Japan. https://ieeexplore.ieee.org/document/8371116
- 20. Wu YT, Fujiwara E, Suzuki CK. Evaluation of optical myography sensor as predictor of hand postures. IEEE Sens J. 2019;19:5299-5306.
- 21. Jiang X, Merhi LK, Menon C. Force exertion affects grasp classification using force myography. IEEE Trans Human-Mach Syst. 2018;48:219-226.
- 22. Fujiwara E, Wu YT, Suzuki CK, de Andrade DTG, Neto AR, Rohmer E. Optical fiber force myography sensor for applications in prosthetic hand control. Paper presented at: IEEE 15th AMC; March 9-11, 2018; Tokyo, Japan. https://ieeexplore.ieee.org/document/8371115
- 23. Xiao ZG, Menon C. A review of force myography research and development. Sensors. 2019;19:4557.
- 24. Jiang X, Merhi LK, Xiao ZG, Menon C. Exploration of force myography and surface electromyography in hand gesture classification. Med Eng Phys. 2017;41:63-73.
- 25. Fujiwara E, Wu YT, Santos MFM, Schenkel EA, Suzuki CK. Identification of hand postures by force myography using an optical fiber specklegram sensor. Paper presented at: OFS24; September 28-October 2, 2015; Curitiba, Brazil. https://www.spiedigitallibrary.org/conference-proceedings-of-spie/9634/1/Identification-of-hand-postures-by-force-myography-using-an-optical/10.1117/12.2194605.short?SSO=1
- 26. Fujiwara E, Suzuki CK. Optical fiber force myography sensor for identification of hand postures. J Sensors. 2018;2018:8940373.
- 27. Fujiwara E, Wu YT, Santos MFM, Schenkel EA, Suzuki CK. Optical fiber specklegram sensor for measurement of force myography signals. IEEE Sens J. 2017;17:951-958.
- 28. Berthold JW. Historical review of microbend optical-fiber sensors. J Lightwave Technol. 1995;13:1193-1199.
- 29. Fujiwara E, Wu YT, Villela CS, Gomes MK, et al. Design and application of optical fiber sensors for force myography. Paper presented at: 2018 SBFoton IOPC; October 8-10, 2018; Campinas, Brazil. https://ieeexplore.ieee.org/document/8610923
- 30. Chollet F, et al. Keras. https://github.com/fchollet/keras. Updated 2015.
- 31. Rodriguez JD, Perez A, Lozano JA. Sensitivity analysis of k-fold cross validation in prediction error estimation. IEEE Trans Pattern Anal Mach Intell. 2010;32:569-575.
- 32. Figliola RS, Beasley DE. Theory and Design for Mechanical Measurements. 5th ed. Hoboken, NJ: Wiley; 2011.
- 33. Fajardo J, Ribas Neto A, Silva WHA, Gomes M, Fujiwara E, Rohmer E. A wearable robotic glove based on optical FMG driven controller. Paper presented at: IEEE 4th ICARM; July 3-5, 2019; Toyonaka, Japan. https://ieeexplore.ieee.org/document/8834067
- 34. Fujiwara E, Wu YT, Gomes MK, Silva WHA, Suzuki CK. Haptic interface based on optical fiber force myography sensor. Paper presented at: 2019 IEEE VR; March 23-27, 2019; Osaka, Japan. https://ieeexplore.ieee.org/document/8797788