INTRODUCTION
Restraint performance is evaluated using anthropomorphic test devices (ATDs) positioned in prescribed, optimal seating positions. Anecdotally, human occupants, children in particular, have been observed to assume a variety of positions involving changes in posture and alterations in seat belt placement and geometry that may affect restraint system performance. In our previous research, we described these position and posture differences using conventional video recording and analysis methods. These efforts, while critically important for defining the nature and magnitude of the problem, have been largely qualitative, identifying only the presence or absence of out-of-position (OOP) postures and the direction of OOP (e.g., leaning forward out of the restraint). Furthermore, data analysis has been resource intensive. There is therefore a need to evolve this methodology to be more quantitative, both to streamline the data analysis process and to obtain precise body position data that can be used to develop countermeasures mitigating particularly harmful positions and postures. Thus, the objective of this study was to develop and trial an innovative data collection and analysis method, using the Microsoft Kinect™, to quantitatively determine the naturalistic positions of child occupants while restrained in cars.
METHODOLOGY
Techniques were developed to collect quantitative data on child posture and position while restrained in the rear seat of two instrumented study vehicles. The vehicles are large sedans and will be loaned to families for a two-week data collection period for naturalistic observation of child behavior during typical driving trips. In addition to a conventional data acquisition system and video cameras, the Microsoft Kinect™ system, composed of an RGB camera and depth sensor, was installed in both vehicle environments to provide 3D motion capture of the rear seat outboard occupants. The depth sensor consists of an infrared laser projector combined with a monochrome CMOS sensor, which captures motion data in 3D across a wide range of ambient light conditions. The data streams are utilized in a skeletal tracking mode to provide the 3D location (relative to the sensor) of the head, neck, and shoulders of up to two seated rear row occupants. When utilized in the naturalistic environment, data from the Kinect™ system can be synchronized with the other data streams from the data acquisition system (braking, speed, steering) and video cameras by matching the time stamps on each data stream.
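The time-stamp synchronization described above can be sketched as a nearest-timestamp lookup between streams sampled at different rates. The field names, sampling rates, and values below are illustrative assumptions, not the study's actual data format:

```python
# Hypothetical sketch: align a Kinect skeletal stream (~30 Hz) with a slower
# vehicle data-acquisition stream (~10 Hz) by matching time stamps.
from bisect import bisect_left

def nearest_sample(stream, t):
    """Return the sample in `stream` (time-sorted list of (timestamp, value))
    whose timestamp is closest to t."""
    times = [s[0] for s in stream]
    i = bisect_left(times, t)
    candidates = stream[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda s: abs(s[0] - t))

# Kinect skeletal stream: (time_s, head position (x, y, z) in metres)
kinect = [(0.000, (0.02, 0.61, 1.20)),
          (0.033, (0.02, 0.61, 1.21)),
          (0.067, (0.05, 0.60, 1.21))]

# Data acquisition stream: (time_s, vehicle speed in km/h)
das = [(0.0, 42.1), (0.1, 41.8)]

# For each Kinect frame, attach the nearest vehicle-speed sample.
t, speed = nearest_sample(das, 0.033)
```

Nearest-neighbor matching is the simplest alignment strategy; interpolation between DAS samples would be a straightforward refinement.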
The accuracy of the Kinect™ system in quantifying left/right and fore/aft movements was assessed via the following approach:
Left/right - Two strings were suspended from the ceiling in the coronal plane relative to the test subject, spaced 63 cm apart, and placed directly in front of the test subject. The test subject aligned the center of their nose and body with one string, moved laterally to the other string, and then returned to the initial position. The test was repeated nine times by a single test subject.
Fore/aft - One string was suspended from the ceiling, 61 cm in front of the wall, in the sagittal plane of the subject. Standing upright against the wall, the test subject moved forward to the string, and back against the wall. The test was repeated nine times by a single test subject.
Data from the Kinect™ system were processed using customized software and compared to the known excursions. IRB approval was obtained from Monash University.
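A minimal sketch of the comparison step, assuming the customized software reduces each trial to a start and end coordinate for a tracked landmark. The per-trial positions below are made-up illustrations, not study data:

```python
# Compute total measured distance, absolute error, and % error against the
# known reference excursion, reported as Mean±SD across trials.
from statistics import mean, stdev

REFERENCE_CM = 63.0  # known left/right excursion between the two strings

# Hypothetical per-trial Kinect positions (cm) at the start and end strings
starts = [-31.5, -30.8, -31.0]
ends = [31.0, 30.2, 30.9]

distances = [e - s for s, e in zip(starts, ends)]
abs_errors = [abs(d - REFERENCE_CM) for d in distances]
pct_errors = [100.0 * e / REFERENCE_CM for e in abs_errors]

print(f"total distance: {mean(distances):.1f}±{stdev(distances):.1f} cm")
print(f"absolute error: {mean(abs_errors):.1f} cm ({mean(pct_errors):.1f}%)")
```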
RESULTS
The Kinect™ system provided a consistent assessment of initial position: the standard deviation of the initial position ranged from 0.8 to 1.5 cm. The error between the Kinect™-measured distance and the actual distance ranged from 1.2 to 3.1 cm, corresponding to 2.0–4.9% of the reference movement.
The primary limitation of these data is the lack of a true gold standard for measurement of the reference distance. Although the strings were placed with precision, the movement of the test subject has some variability that is not captured in the data collection. The error reported in Table 1 is therefore a combination of the error of the Kinect™ system and the variability in the test subject's actual movement.
Table 1: Absolute and percentage error of the Kinect™ system (all values Mean±SD, as measured by Kinect™)

| Movement (reference) | Landmark | Initial position (cm) | Ending position (cm) | Total distance (cm) | Absolute error (cm) | % error |
|---|---|---|---|---|---|---|
| Left/right (63 cm) | Head | −31.1±0.8 | 30.6±0.8 | 61.6±1.5 | 1.7±1.0 | 2.7±1.6% |
| Left/right (63 cm) | Center of shoulders | −29.9±1.5 | 29.9±0.6 | 59.9±1.7 | 3.1±1.7 | 4.9±2.7% |
| Fore/aft (61 cm) | Head | 111.5±1.2 | 174.4±1.2 | 62.9±1.6 | 2.2±1.1 | 3.6±1.8% |
| Fore/aft (61 cm) | Center of shoulders | 110.3±1.6 | 171.3±0.4 | 61.0±1.5 | 1.2±0.75 | 2.0±1.2% |
CONCLUSIONS
The Kinect™ can provide reasonably accurate quantification of movement similar to what would be expected in a vehicle, without the need to apply markers to the subject of interest. Errors were less than five percent. Implementation of this novel data collection method will provide acceptable quantitative data on the motion of rear seat occupants in naturalistic riding settings. The motion data can be processed to serve as a screening tool, helping researchers identify relevant segments of the video stream for further analysis. As a result, this method will improve the efficiency of naturalistic data analysis for posture and position information and ensure the collection of quantitative data that can complement other qualitative data in the development of countermeasures.
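The screening idea above can be sketched as a simple threshold rule: flag time points at which the tracked head position deviates laterally from its median by more than some tolerance, so reviewers can jump directly to those video segments. The 10 cm threshold and the sample data are illustrative assumptions, not values from the study:

```python
# Hedged sketch of a screening pass over Kinect head-tracking data to
# shortlist candidate out-of-position (OOP) video segments for manual review.

def flag_oop(samples, threshold_cm=10.0):
    """samples: list of (time_s, lateral_cm) head positions.
    Return the times at which lateral deviation from the median
    position exceeds threshold_cm."""
    lateral = sorted(x for _, x in samples)
    median = lateral[len(lateral) // 2]  # median as nominal seated position
    return [t for t, x in samples if abs(x - median) > threshold_cm]

# Hypothetical head-position trace: mostly centered, one lateral lean at t=2 s
samples = [(0.0, 0.5), (1.0, 0.8), (2.0, 14.2), (3.0, 0.4), (4.0, -0.2)]
flagged = flag_oop(samples)
```

Using the median rather than a fixed seat-center reference makes the rule robust to where the child happens to sit; more elaborate rules (duration filters, fore/aft thresholds) would follow the same pattern.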
Acknowledgments
This research was supported by the Australian Research Council LP110200334 and the Center for Child Injury Prevention Studies, a National Science Foundation Industry/University Cooperative Research Center at the Children’s Hospital of Philadelphia and Ohio State University.
