Author manuscript; available in PMC 2015 Jul 14.
Published in final edited form as: Proc IEEE Annu Northeast Bioeng Conf. 2015 Apr;2015. doi: 10.1109/NEBEC.2015.7117174

A Low Power, Parallel Wearable Multi-Sensor System for Human Activity Evaluation

Yuecheng Li 1, Wenyan Jia 1, Tianjian Yu 1, Bo Luan 2, Zhi-hong Mao 2, Hong Zhang 4, Mingui Sun 1,2,3
PMCID: PMC4501490  NIHMSID: NIHMS706136  PMID: 26185409

Abstract

In this paper, we present the design of a low-power, heterogeneous wearable multi-sensor system for human activity evaluation, built with the Zynq System-on-Chip (SoC). The powerful data processing capability and flexibility of this SoC represent significant improvements over our previous ARM-based system designs. The new system captures and compresses multiple color images and sensor data simultaneously. Several strategies are adopted to minimize power consumption. Our wearable system provides a new tool for the evaluation of human activity, including diet, physical activity, and lifestyle.

Keywords: heterogeneous system, Zynq, wearable multi-sensor system, low power, activity evaluation, health, wellness

I. Introduction

Chronic diseases are the greatest threats to the health of Americans, claiming 1.7 million lives (70% of all deaths) annually [1]. Research has found that between 70% and 90% of chronic disease risks are associated with environmental and lifestyle factors [2][3]. Therefore, it is important to develop a wearable electronic system that evaluates human activity, including diet, physical activity, and lifestyle. Traditional ARM-based wearable systems suffer from limited data processing capacity and restricted real-time performance [4].

To improve performance, parallel data acquisition and processing are effective strategies for handling multiple sources of data. We have developed a chest-worn multi-sensor system, shown in Fig. 1, for human activity evaluation. It contains the following major functional modules: 1) a wireless module that communicates with a smartphone, other wearable devices (if any), and/or the Internet via Bluetooth or Wi-Fi; 2) a human-machine interface module including a miniature display and a vibrator; 3) an imaging module with up to four cameras that acquires stereo and/or wide view-angle images, mainly for effective dietary activity recording at short imaging range; 4) a barometer and a 9-axis Inertial Measurement Unit (IMU) for body posture and body motion measurements. These functional modules are supported by a parallel data processing architecture as described below.

Fig.1. Wearable multi-sensor device.


II. Design and Optimization

We explored a novel heterogeneous architecture using the Zynq SoC, which integrates a dual-core ARM Cortex-A9 based processing system (PS) and FPGA programmable logic (PL) [5]. The PS and PL share the same package and are interconnected internally, which reduces the complexity of circuit board design and the space requirement. Taking advantage of this special SoC, our multi-sensor system is designed as two independent but gluelessly connected parts corresponding to the PS and PL. All four camera modules and all sensors are connected to the PL so that the parallel capabilities of the FPGA can be utilized to acquire, compress, and process data in real time. On the other hand, the wireless and human-machine interface modules are managed by the Linux operating system on the PS. All boot files and captured data are stored conveniently on a single MicroSD card which holds up to 32 GB. Although the low-power DDR2 (LPDDR2) memory is controlled by the PS, it can be accessed directly by the PL to buffer or exchange data with the PS through a 64-bit High Performance AXI (HP AXI) bus.
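As an illustration of how the PS side can reach such a PL-writable region, the following C sketch maps a reserved section of the LPDDR2 into a Linux user-space process through /dev/mem. This is a minimal sketch, not our actual firmware; the base address RESERVED_BASE and the size RESERVED_SIZE are hypothetical placeholders.

/* Sketch: map a PL-accessible region of LPDDR2 into user space on the PS.
 * RESERVED_BASE and RESERVED_SIZE are hypothetical placeholders for the
 * region set aside for PS/PL data exchange over the HP AXI port. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define RESERVED_BASE  0x18000000UL   /* hypothetical physical base address */
#define RESERVED_SIZE  (16UL << 20)   /* hypothetical 16 MB exchange buffer */

int main(void)
{
    int fd = open("/dev/mem", O_RDONLY | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint8_t *buf = mmap(NULL, RESERVED_SIZE, PROT_READ,
                                 MAP_SHARED, fd, RESERVED_BASE);
    if (buf == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* The PL writes JPEG frames and sensor records into this region;
     * the PS only reads and archives them. */
    printf("first byte of shared buffer: 0x%02x\n", buf[0]);

    munmap((void *)buf, RESERVED_SIZE);
    close(fd);
    return 0;
}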

Shown in Fig. 2 is a detailed block diagram of the system structure. Both the PS and PL are booted at start-up: the PS is initialized first from the system software stored on the MicroSD card, and the parameters for the PL are then loaded, including the selection of camera modules, frame rate, JPEG compression quality, and the on/off status of the sensors. According to the loaded parameters, a timer periodically triggers image captures from the selected camera modules. The JPEG encoder starts working automatically once image data are available in the buffer. Simultaneously, the sensor data are streamed to the buffer, synchronized with the image sequence. The JPEG files and the organized sensor data are then transferred directly from the buffer, through the HP AXI bus, to a reserved section of the LPDDR2 memory without interrupting the PS. All data are finally read by the Linux application routine on the PS and saved to the MicroSD card.
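The PL parameters loaded at start-up can be pictured as a small configuration record that the PS writes to the PL before capture is enabled. The C structure below is a hypothetical sketch of such a record; the field names, widths, and encodings are illustrative and do not correspond to the actual register map of our design.

/* Hypothetical capture-configuration record pushed from the PS to the PL
 * at start-up; field names and encodings are illustrative only. */
#include <stdint.h>

struct capture_config {
    uint8_t  camera_mask;     /* bit i = 1 enables camera module i (0..3) */
    uint8_t  frame_rate_fps;  /* timer-driven capture rate                */
    uint8_t  jpeg_quality;    /* JPEG compression quality, e.g. 1..100    */
    uint8_t  sensor_mask;     /* bit flags: barometer, 9-axis IMU, ...    */
    uint16_t imu_rate_hz;     /* IMU sampling rate                        */
    uint16_t baro_rate_hz;    /* barometer sampling rate                  */
};

In practice, such a record would be parsed from a parameter file on the MicroSD card and written to PL control registers before capture begins.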

Fig.2. System structure implemented for test.


Though high real-time performance is promised by this heterogeneous structure, dynamic power consumption is a critical problem that must be addressed in both software and hardware. The power consumption of the PS, which is dominated by the operating frequency and the application load, can be controlled by lowering this frequency and optimizing the software, similar to the measures taken on other ARM-based processors. Thanks to the parallel processing in the PL, the workload of the PS for real-time tasks is reduced substantially. Data management and storage, which are handled well by the Linux operating system, are the only remaining major tasks for the PS. Thus, the PS and LPDDR2 can run at a much lower frequency, reducing power consumption significantly.
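On a Linux kernel with the cpufreq framework enabled, the PS frequency can be scaled from user space through sysfs. The snippet below is a minimal sketch of that mechanism, assuming the "userspace" governor is available; it is not the exact procedure used in our system, and the target frequency value is illustrative.

/* Sketch: lower the ARM core frequency from Linux user space via cpufreq.
 * Assumes the kernel exposes the "userspace" governor; the target
 * frequency is given in kHz. */
#include <stdio.h>

static int write_sysfs(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f) return -1;
    fprintf(f, "%s", value);
    fclose(f);
    return 0;
}

int main(void)
{
    write_sysfs("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor",
                "userspace");
    write_sysfs("/sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed",
                "100000");   /* request ~100 MHz, matching the test setup */
    return 0;
}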

However, the dynamic power consumption of the PL has to be investigated separately. As described in [6], this part of the power consumption is strongly related to resource utilization, operating voltage, and FPGA operating frequency. We use the following strategies for power optimization: 1) The four cameras are initialized one by one using the same camera configuration module. This initialization strategy does not affect performance significantly because the Linux operating system takes much longer to finish its boot procedure than the camera initialization does. Although software initialization running on the PS could eliminate the use of PL resources, it increases system boot time and cooperates poorly with the camera control module in the PL. 2) To save power, the camera control turns off camera modules whenever they are idle. 3) A single resource-optimized JPEG encoder is implemented on the PL side to compress images from all camera modules, balancing the tradeoffs among resources, speed, and memory efficiency. 4) Simplified read/write modules are designed according to the AXI bus protocol instead of using the standard IP core, so that high-performance data transfer is accomplished with minimal PL resources. 5) Taking advantage of PL-based data streaming, the computational tasks on the PL side run at a frequency as low as 48 MHz, which not only reduces power consumption but also stabilizes the system. 6) All camera modules and sensors are designed to operate at 1.8 V LVCMOS instead of the common 3.3 V LVCMOS to save IO power. In addition, a low IO drive strength of 4 mA or 8 mA is chosen by setting the corresponding IO constraints.
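The benefit of strategy 6 can be estimated from the usual dynamic-power relation: since switching power scales roughly with the square of the supply voltage (P ≈ C·V²·f per toggling node), moving the IO banks from 3.3 V to 1.8 V by itself reduces IO switching power to about (1.8/3.3)² ≈ 30% of its original value, before accounting for the lower drive strength. This is a first-order estimate only; static and short-circuit components are not included.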

III. Results and Discussion

The Zynq chip adopted for the eButton is the XC7Z020. For the test functions implemented in the PL as shown in Fig. 2, the utilization of every resource type was less than 20%. The Linux system on the PS ran at 100 MHz and the LPDDR2 at 200 MHz. Our performance test indicated that, with the optimized JPEG encoder running at 100 MHz on the PL, the frame rate for acquiring 1280×3840 color images (each a vertical concatenation of four synchronized 1280×960 color images) was 0.5 fps, and the frame rate increased to 11 fps for VGA color images. At a 48 MHz clock rate, the sampling rates for the barometer and the motion sensors were 16 samples/s and 80 samples/s, respectively. With a single charge of a 1,000 mAh Li-ion battery, our wearable system lasted 3.5 hours (with the wireless module turned off).
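For reference, discharging a 1,000 mAh cell over 3.5 hours corresponds to an average current draw of roughly 1000/3.5 ≈ 286 mA, or about 1.1 W assuming the nominal 3.7 V of a single Li-ion cell; this rough figure covers the cameras, sensors, PS, PL, and MicroSD activity, but not the wireless module.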

By comparison, our previous ARM-based wearable system had much lower real-time performance [7]. Although it could last between six and eight hours, its performance-restricted JPEG engine limited the maximum compression rate to 4 fps for VGA images, with the CPU and the JPEG engine running at 266 MHz and 66 MHz, respectively. In addition, all sensor data (e.g., the IMU data) were recorded serially along with the images by the Linux software application at a sampling rate limited to 30 samples/s, which may not be sufficient for evaluating vigorous human activities. Our new heterogeneous design overcomes these problems.

IV. Conclusion

A low-power wearable multi-sensor system has been designed based on the heterogeneous structure of the Zynq SoC. Our new design delivers powerful real-time performance with great potential for evaluating human activities, including diet, physical activity, and lifestyle.

Acknowledgments

This work is supported by National Institutes of Health grants R01CA165255 and R21CA172864.

References
