Sensors (Basel, Switzerland). 2022 Apr 12;22(8):2962. doi: 10.3390/s22082962

Localization of Stereovision for Measuring In-Crash Toeboard Deformation

Wei Zhang 1,*, Tomonari Furukawa 1, Azusa Nakata 2, Toru Hashimoto 2
Editors: Nicola Donato, Luca Lombardo, Giovanni Gugliandolo
PMCID: PMC9028871  PMID: 35458950

Abstract

This paper presents a technique to localize a stereo camera for in-crash toeboard deformation measurement. The proposed technique uses a purpose-designed sensor suite that carries not only the stereo camera but also inertial measurement units (IMUs) and a camera for localization. The pose of the stereo camera is recursively estimated from the measurements of the IMUs and the localization camera through an extended Kalman filter. The performance of the proposed approach was first investigated in a stepwise manner and then tested in controlled environments, including an actual vehicle crash test, in which the toeboard deformation during a crash was successfully measured. With the oscillatory motion arising from the crash captured, the deformation of the toeboard measured by the stereo camera can be described in a fixed coordinate system.

Keywords: toeboard deformation measurement, localization, recursive estimation, crash test, in-crash measurement

1. Introduction

The incidence of vehicle crashes keeps increasing with vehicle production: more than six million crashes are reported every year in the United States, about thirty percent of which involve fatalities or injuries [1]. A vehicle crash is a severe and complicated dynamic phenomenon due to the complexity of the physical interaction between the structures resulting from the impact. It is indispensable that vehicles be crash-tested in indoor facilities and that the test results be used to design vehicles with improved crashworthiness [2,3,4,5,6]. In crash tests, one focus is laid on the toeboard deformation [7,8] because of the strong correlation between toeboard intrusion and lower extremity injuries, as seen both in collision statistics [9,10] and in crash simulation [11]. A detailed analysis of toeboard deformation during a crash test can disclose useful intricacies about the otherwise unknown effects of the crash phenomenon.

In general, past work on vehicle deformation measurement can be classified into two approaches. In the first approach, finite element analysis (FEA) and other computational mechanics analyses have been used to predict instead of measuring the deformation. The advantage of numerical analysis is its ability to simulate all types of crash tests ([12,13,14,15]). Hickey and Xiao [16] used the three-dimensional (3D) model of a commercial vehicle and performed FEA to examine the effects of the deformation during a car crash test. Employing the full-scale 3D model directly for crash analysis and subsequent computational design requires large computational loads. Yang et al. [17] utilized the response surface method for a complex and dynamic large deformation problem to accelerate the analysis and design process. Pre- and post-measurements from the actual vehicle crash tests were used by Cheng et al. [18] and McClenathan et al. [19] to provide boundary conditions for FEA. Zhang et al. [20] reported the reconstruction of the deformation of a vehicle in a crash accident using high-performance parallel computing to incorporate an advanced elastic–plastic model. While the FEA has grown to be ever more important, its ability to estimate detailed deformations such as toeboard deformation is significantly limited because of the presence of various computation errors and the lack of actual measurements.

The second approach is based on direct measurement. Within this approach, vision-based measurement has become the most popular because of the ability of a camera to capture information over a continuous field [21,22,23,24,25]. Digital image correlation (DIC), originally proposed by Sutton et al. [26], projects an electronic speckle pattern onto the specimen to derive its deformation from the displacement of the speckle pattern in the image, and it has been widely employed for small-deformation measurements such as structural strain [27], crack identification [28,29], or structural vibration [30,31] in static environments. Schmidt et al. [32] employed a stereo vision-based technique to measure the time-varying deformation of specimens in gas gun impact tests. As an alternative to DIC, Iliopoulos and Michopoulos [33] developed the mesh-free random grid technique to measure deformation using marked dots, later referred to as Dot Centroid Tracking (DCT) in comparison with DIC [34]. These techniques observe the deformation of a surface fixed to a static base from a static viewpoint. In general, while deformation measurement has been well studied, these techniques cannot be applied directly to toeboard deformation measurement because of the oscillatory motion of the cameras. Since the line of sight to the toeboard is significantly limited, the cameras must be fixed near the toeboard to some part of the vehicle body, which deforms and oscillates in the crash. The measurement of the toeboard through a stereo camera is therefore subject to the relative motion of the camera itself.

This paper presents a technique to localize a stereo camera for in-crash toeboard deformation measurement and a design to implement the technique in vehicle crash tests using recursive state estimation [35,36,37]. This state estimation technique completes our previous work, which localized the sensor suite using only the camera pose obtained through a computer vision technique [7], since the excessive oscillation of the localization camera due to the non-rigidity of the sensor suite introduced extra errors into the measured toeboard deformation. In addition to the cameras for deformation measurement and localization, the proposed technique uses inertial measurement units (IMUs) for localization. With all the sensors on a rigid structure, the proposed technique implements an extended Kalman filter (EKF) and estimates the pose of the stereo camera in the global coordinate frame using the observations of the downward camera and the IMUs. The in-crash toeboard deformation can thus be measured by subtracting the estimated stereo camera pose from the observed toeboard deformation. A sensor suite comprising the stereo camera, the downward camera, and the IMUs is designed and placed on the seat mount such that the rigid body assumption is valid. The downward camera views the ground through a hole opened in the vehicle floor.

This paper is organized as follows. The next section defines the state estimation problem of concern. Section 3 presents the proposed EKF-based technique to localize the stereo camera for the in-crash toeboard deformation measurement. Section 4 introduces the hardware design for our approach. The experimental results are then presented in Section 5. Conclusions and ongoing work are summarized in Section 6.

2. Sensor Suite and Localization Problem of Stereovision

2.1. Problem Formulation

Figure 1 illustrates the fundamental design of the sensor suite adopted in this paper and the major components that address the problem of localizing the stereo camera. The stereo camera is fixed to the camera fixture, which is assumed to be rigid, and located such that it sees the toeboard. Additionally fixed to the camera fixture are the downward camera and the IMUs adjacent to the cameras, and a checkerboard is taped to the floor to localize the downward camera. The IMUs each consist of a triaxial gyroscope and an accelerometer. While the stereo camera measures the toeboard deformation in its camera coordinate frame, the downward camera and the IMUs are the sensors available to localize the pose of the stereo camera. The stereovision localization problem is thus defined as identifying the stereo camera pose given the images of the downward camera and the readings of the IMUs.

Figure 1. Sensor suite and localization problem formulation.

2.2. Linear Acceleration of Sensor Suite

In accordance with rigid body dynamics, the acceleration $\mathbf{a}$ at the position of an IMU is related to that of the center of the sensor suite by

$\mathbf{a} = \mathbf{a}_c + \dot{\boldsymbol{\omega}} \times \mathbf{r} + \boldsymbol{\omega} \times (\boldsymbol{\omega} \times \mathbf{r})$, (1)

where ac is the acceleration of the center of the sensor suite body, ω is the angular velocity of the sensor suite body frame, and r is the relative position of the IMU to the center.

Note that it is more convenient to describe the acceleration relation (1) in the body frame, and that the cross-product identity $U(\mathbf{v}_1 \times \mathbf{v}_2) = (U\mathbf{v}_1) \times (U\mathbf{v}_2)$ holds for any rotation matrix $U \in \mathbb{R}^{3 \times 3}$. By left-multiplying by the transformation matrix from the inertial frame to the body frame, the acceleration relation (1) can be rewritten as

$\mathbf{a}^B = \mathbf{a}^B_c + \dot{\boldsymbol{\omega}}^B \times \mathbf{r}^B + \boldsymbol{\omega}^B \times (\boldsymbol{\omega}^B \times \mathbf{r}^B)$, (2)

where $\mathbf{a}^B$, $\dot{\boldsymbol{\omega}}^B$, $\boldsymbol{\omega}^B$, and $\mathbf{r}^B$ are the linear acceleration, angular acceleration, angular velocity, and relative position in the body frame ($B$).

Relating the linear accelerations at different positions of a rigid body requires the angular acceleration as an input. This input can be derived from the measurements of the triaxial accelerometers and gyroscopes in the IMUs. Because the derivation of the angular acceleration at a point requires two IMU measurements, consider two IMUs in addition to the reference point, as indicated in Figure 2. From Equation (1), the difference between the linear accelerations at IMU 2 and IMU 1 is given by

$\mathbf{a}_2 - \mathbf{a}_1 = \dot{\boldsymbol{\omega}} \times (\mathbf{r}_2 - \mathbf{r}_1) + \boldsymbol{\omega}_2 \times (\boldsymbol{\omega}_2 \times \mathbf{r}_2) - \boldsymbol{\omega}_1 \times (\boldsymbol{\omega}_1 \times \mathbf{r}_1)$, (3)

where $\mathbf{a}_1, \mathbf{a}_2, \boldsymbol{\omega}_1$, and $\boldsymbol{\omega}_2$ are the accelerations and angular velocities measured at IMU 1 and IMU 2, respectively, and $\mathbf{r}_1$ and $\mathbf{r}_2$ are the vectors from the reference point to the respective IMUs. The position of the reference point for each pair of IMUs is chosen to be the midpoint of the IMU positions, $-\mathbf{r}_1 = \mathbf{r}_2 = \frac{1}{2}\mathbf{r}_{12} \triangleq \frac{1}{2}\mathbf{r}$, to minimize the complexity and the error of the transformation. The substitution of the midpoint into Equation (3) yields

$C(\mathbf{r})\dot{\boldsymbol{\omega}} = \mathbf{b} \triangleq \mathbf{a}_1 - \mathbf{a}_2 + \frac{1}{2}\boldsymbol{\omega}_2 \times (\boldsymbol{\omega}_2 \times \mathbf{r}) + \frac{1}{2}\boldsymbol{\omega}_1 \times (\boldsymbol{\omega}_1 \times \mathbf{r})$, (4)

where the skew-symmetric cross-product matrix $C(\mathbf{r}): \mathbb{R}^3 \to \mathbb{R}^{3 \times 3}$ is defined as

$C(\mathbf{r}) = \begin{bmatrix} 0 & -r_3 & r_2 \\ r_3 & 0 & -r_1 \\ -r_2 & r_1 & 0 \end{bmatrix}$, (5)

where $\mathbf{r} = (r_1, r_2, r_3)^T$. The angular acceleration $\dot{\boldsymbol{\omega}}$ can be computed from the IMU measurements of $\mathbf{a}_1$, $\mathbf{a}_2$, $\boldsymbol{\omega}_1$, and $\boldsymbol{\omega}_2$ by solving the linear problem (4).
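The solve in Equation (4) can be sketched in NumPy as follows (a minimal illustration with hypothetical names, not the paper's implementation; note that $C(\mathbf{r})$ is rank 2, so a single IMU pair only observes the component of $\dot{\boldsymbol{\omega}}$ perpendicular to $\mathbf{r}$, and `lstsq` returns the minimum-norm solution in that plane):

```python
import numpy as np

def skew(r):
    """Cross-product matrix C(r) of Eq. (5): skew(r) @ v == np.cross(r, v)."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def angular_accel(a1, a2, w1, w2, r):
    """Solve C(r) @ wdot = b (Eq. 4) for the angular acceleration.

    r is the vector from IMU 1 to IMU 2; the reference point is their
    midpoint, so -r1 = r2 = r/2 as in the text.
    """
    b = (a1 - a2
         + 0.5 * np.cross(w2, np.cross(w2, r))
         + 0.5 * np.cross(w1, np.cross(w1, r)))
    # C(r) is singular; lstsq gives the least-squares, minimum-norm solution.
    wdot, *_ = np.linalg.lstsq(skew(r), b, rcond=None)
    return wdot
```

In practice, the unobservable component along $\mathbf{r}$ would be recovered by combining pairs of IMUs with non-parallel baselines.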

Figure 2. Illustration for angular acceleration using two IMUs.

2.3. Localization of Downward Camera

The downward camera and the checkerboard are used because the ground is the closest static structure to the sensor suite and can thus be measured most accurately. As illustrated in Figure 3, the global frame can be set at one end of the checkerboard and used to localize the downward camera. The downward camera observes the checkerboard pattern and any other extractable features. By associating the pattern and features between the global frame and the pixel frame, the pose of the downward camera with respect to the global frame can be identified after calibrating the camera parameters following the procedures proposed by Zhang [38] or Heikkilä and Silvén [39]. In particular, for the checkerboard attached to the ground, the checkerboard corner points are readily identified [40] and serve as the planar pattern for the camera pose measurement, as illustrated in Figure 3.

Figure 3. Scheme for downward camera localization.

For the global localization of the downward camera, let the pixel coordinates of the corner point $(i, j)$ at the kth time step be $\mathbf{p}_{k,ij}$. The pixel coordinates can be related to the corner point in the global frame by

$\mathbf{p}_{k,ij} \leftrightarrow \left( \begin{bmatrix} \mathbf{N}_k \\ 0 \end{bmatrix} + \begin{bmatrix} i \\ j \\ 0 \end{bmatrix} \right) w$, (6)

where $w$ is the width of the tiles of the checkerboard, and $\mathbf{N}_k$ is the number of tiles by which the origin of the detected checkerboard at the current step is offset from the origin at the first step, which can be accumulated by

$\mathbf{N}_k = \mathbf{N}_{k-1} + \mathrm{Integer}\left( \frac{\Delta_k + \mathbf{p}_{k,0} - \mathbf{p}_{k-1,0}}{\bar{\Delta}_{tile}} \right)$, (7)

where $\Delta_k$ is the motion between the two images in pixel coordinates, $\mathbf{p}_{k,0}$ is the pixel coordinate of the origin of the detected checkerboard in image k, $\bar{\Delta}_{tile}$ is the tile width in the pixel frame, and $\mathrm{Integer}(\cdot)$ rounds its argument to the nearest integer. The initial value is $\mathbf{N}_0 = (0, 0)^T$.
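The accumulation in Equation (7) can be sketched as follows (NumPy; `tile_px`, i.e., $\bar{\Delta}_{tile}$, and rounding via `np.rint` are our assumptions about the implementation, and the names are hypothetical):

```python
import numpy as np

def update_tile_index(N_prev, delta_k, p0_k, p0_prev, tile_px):
    """Accumulate the integer tile offset N_k of Eq. (7).

    delta_k : estimated inter-frame motion in pixel coordinates,
    p0_k, p0_prev : pixel coordinates of the detected checkerboard
    origin in the current and previous image,
    tile_px : tile width in pixels (assumed known from calibration).
    """
    shift = (delta_k + p0_k - p0_prev) / tile_px
    return N_prev + np.rint(shift).astype(int)
```

When the detector's origin jumps by one tile between frames, the rounded shift compensates so that the global coordinates of the corners remain consistent.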

The design and sensor settings adopted in this paper allow the motion of the sensor suite to be predicted with respect to the body frame and the pose of the downward camera to be measured in the global frame. While the downward camera is fixed to the sensor suite and can thus measure the sensor suite pose through a kinematic transformation, the pose predicted by the motion model and the angular acceleration measurements differs from the pose measured by the downward camera. One reason is the dead-reckoning error stemming from the motion prediction: the acceleration measurement is in general noisy, and its integration creates notable errors that accumulate over time. Another is that the pose measured by the downward camera alone contains excessive oscillations because of the non-rigidity of the sensor frame in the crash test. Since the pose of the downward camera can be measured accurately thanks to its close distance to the ground, it is essential to estimate the pose of the sensor suite, and hence of the stereo camera, by integrating the motion prediction and the observations. The next section presents the proposed localization of the stereo camera formulated in the framework of the EKF.

3. EKF-Based Localization of Stereo Camera for Toeboard Measurement

3.1. Overview

Figure 4 shows the EKF-based localization of the stereo camera proposed in this paper for toeboard deformation measurement. Since the camera fixture is assumed to be rigid, the transformation from the body frame to the stereo camera frame is fixed, and the quantities to be identified are the position and orientation of the body frame, $\{\mathbf{p}, \boldsymbol{\theta}\}$, where $\mathbf{p} \in \mathbb{R}^3$ is the position of the origin of the body frame and $\boldsymbol{\theta} \in \mathbb{R}^3$ is its Euler angle. To recursively estimate the motion of the sensor suite in the EKF framework, the state $\mathbf{x} \triangleq \{\mathbf{p}, \dot{\mathbf{p}}, \ddot{\mathbf{p}}, \boldsymbol{\theta}, \boldsymbol{\omega}^B, \dot{\boldsymbol{\omega}}^B\}$ includes not only the pose but also the velocity $\dot{\mathbf{p}}$, acceleration $\ddot{\mathbf{p}}$, angular velocity $\boldsymbol{\omega}^B$, and angular acceleration $\dot{\boldsymbol{\omega}}^B$ in the body frame.

Figure 4. Proposed recursive localization of stereovision for in-crash toeboard measurement.

In accordance with the EKF, the primary processes of the estimation are the prediction and the correction. The prediction of the state at discrete time k, i.e., the derivation of the mean $\mathbf{x}_{k|k-1}$ and the covariance $P_{k|k-1}$, is performed using the motion model of the sensor suite and the state estimated at k−1, i.e., $\mathbf{x}_{k-1|k-1}$ and $P_{k-1|k-1}$. In the correction step, the predicted sensor suite state is corrected to $\mathbf{x}_{k|k}$ and $P_{k|k}$ by fusing the measurements of the IMUs ($\mathbf{a}_k$ and $\boldsymbol{\omega}_k$), the downward camera pose ($\mathbf{p}^C_k$ and $\boldsymbol{\theta}^C_k$), and the angular acceleration ($\dot{\boldsymbol{\omega}}_k$) described in Section 2.2 and Section 2.3. While different sensors inform the same state, the Kalman gain is adjusted through the statistical properties of the motion and sensor noise via their covariances $P_{k|k-1}$ and $\Sigma_{k,v}$, providing the optimal estimate for each state in the proposed approach. Once the pose of the sensor suite is globally estimated, the deformation of the toeboard can be measured with respect to the global frame or any other coordinate frame.

The operation of the EKF needs motion and sensor models. The discrete motion model of the sensor suite and the observation model subjected to uncertainty are generically given by

$\mathbf{x}_k = f(\mathbf{x}_{k-1}, \mathbf{w}_k), \quad \mathbf{z}_k = h(\mathbf{x}_k, \mathbf{v}_k)$, (8)

where f(·) and h(·) are the respective motion and sensor models; wk and vk represent the motion model noise and sensor model noise, respectively. The subsequent two sections present the motion and sensor models developed in the proposed approach.

3.2. Motion Model of Sensor Suite

The motion of the sensor suite pose is determined by the motion and the deformation of the entire vehicle, which cannot be modeled and identified easily. Meanwhile, the sensor suite motion is constrained by the motion and the deformation of the vehicle. This means that the range of the sensor suite motion is bounded. With a short time step Δt, the proposed approach accordingly predicts the pose of the sensor suite by the random walk motion model as

$\mathbf{p}_k = \mathbf{p}_{k-1} + \Delta t\, \dot{\mathbf{p}}_{k-1} + \frac{\Delta t^2}{2} (\ddot{\mathbf{p}}_{k-1} + \boldsymbol{\epsilon}_{k,\ddot{p}})$,
$\dot{\mathbf{p}}_k = \dot{\mathbf{p}}_{k-1} + \Delta t\, (\ddot{\mathbf{p}}_{k-1} + \boldsymbol{\epsilon}_{k,\ddot{p}})$,
$\ddot{\mathbf{p}}_k = \ddot{\mathbf{p}}_{k-1} + \boldsymbol{\epsilon}_{k,\ddot{p}}$,
$\boldsymbol{\theta}_k = \boldsymbol{\theta}_{k-1} + \Delta t\, E(\boldsymbol{\theta}_k) \boldsymbol{\omega}^B_{k-1} + \frac{\Delta t^2}{2} E(\boldsymbol{\theta}_k) (\dot{\boldsymbol{\omega}}^B_{k-1} + \boldsymbol{\epsilon}_{k,\dot{\omega}})$,
$\boldsymbol{\omega}^B_k = \boldsymbol{\omega}^B_{k-1} + \Delta t\, (\dot{\boldsymbol{\omega}}^B_{k-1} + \boldsymbol{\epsilon}_{k,\dot{\omega}})$,
$\dot{\boldsymbol{\omega}}^B_k = \dot{\boldsymbol{\omega}}^B_{k-1} + \boldsymbol{\epsilon}_{k,\dot{\omega}}$, (9)

where $\boldsymbol{\epsilon}_{k,\ddot{p}}$ and $\boldsymbol{\epsilon}_{k,\dot{\omega}}$ are the motion noises due to unresolved linear and angular acceleration, and $\boldsymbol{\theta} = (\phi, \theta, \varphi)^T$ are the Euler angles corresponding to the roll, pitch, and yaw motion of the sensor suite. In the equation,

$E(\boldsymbol{\theta}) = \begin{bmatrix} 1 & \sin\phi \tan\theta & \cos\phi \tan\theta \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi / \cos\theta & \cos\phi / \cos\theta \end{bmatrix}$, (10)

is the Euler angle rates matrix [41], whose multiplication with the body-fixed angular velocity yields Euler angle rates. Through linearization, the motion model can be encapsulated into a canonical form as:

$\mathbf{x}_k = F_k \mathbf{x}_{k-1} + V_k \mathbf{w}_k$, (11)

where Fk and Vk are the Jacobian matrix of the motion and the motion noise term, respectively, and are given by

$F_k = \begin{bmatrix} F_1 & 0_{9 \times 9} \\ 0_{9 \times 9} & F_{2,k} \end{bmatrix}, \quad V_k = \begin{bmatrix} V_1 & 0_{9 \times 3} \\ 0_{9 \times 3} & V_{2,k} \end{bmatrix}$, (12)

where

$F_1 = \begin{bmatrix} I_3 & \Delta t I_3 & \frac{\Delta t^2}{2} I_3 \\ 0_3 & I_3 & \Delta t I_3 \\ 0_3 & 0_3 & I_3 \end{bmatrix}, \quad F_{2,k} = \begin{bmatrix} I_3 & \Delta t E(\boldsymbol{\theta}_k) & \frac{\Delta t^2}{2} E(\boldsymbol{\theta}_k) \\ 0_3 & I_3 & \Delta t I_3 \\ 0_3 & 0_3 & I_3 \end{bmatrix}, \quad V_1 = \begin{bmatrix} \frac{\Delta t^2}{2} I_3 \\ \Delta t I_3 \\ I_3 \end{bmatrix}, \quad V_{2,k} = \begin{bmatrix} \frac{\Delta t^2}{2} E(\boldsymbol{\theta}_k) \\ \Delta t I_3 \\ I_3 \end{bmatrix}.$

$\mathbf{w}_k \sim N(0, \Sigma_{k,w})$ is the motion noise, which is validly assumed to be Gaussian since $\Delta t$ is small.
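As a sketch of the prediction step, the following NumPy code assembles $F_k$ and $V_k$ of Equation (12) and applies the prediction of Equation (28); all names are ours, and $E(\boldsymbol{\theta})$ is evaluated at the previous estimate rather than at $\boldsymbol{\theta}_k$:

```python
import numpy as np

def euler_rates_matrix(eul):
    """E(theta) of Eq. (10): maps body angular velocity to Euler angle rates."""
    phi, theta = eul[0], eul[1]
    return np.array([
        [1.0, np.sin(phi) * np.tan(theta), np.cos(phi) * np.tan(theta)],
        [0.0, np.cos(phi), -np.sin(phi)],
        [0.0, np.sin(phi) / np.cos(theta), np.cos(phi) / np.cos(theta)],
    ])

def predict(x, P, Sigma_w, dt):
    """EKF prediction of Eq. (28) for x = [p, p_dot, p_ddot, theta, w_B, wdot_B]."""
    I3, Z3 = np.eye(3), np.zeros((3, 3))
    E = euler_rates_matrix(x[9:12])  # evaluated at the previous estimate

    def chain(A):  # constant-acceleration block of Eq. (12)
        return np.block([[I3, dt * A, 0.5 * dt**2 * A],
                         [Z3, I3, dt * I3],
                         [Z3, Z3, I3]])

    F = np.block([[chain(I3), np.zeros((9, 9))],
                  [np.zeros((9, 9)), chain(E)]])
    V1 = np.vstack([0.5 * dt**2 * I3, dt * I3, I3])
    V2 = np.vstack([0.5 * dt**2 * E, dt * I3, I3])
    V = np.block([[V1, np.zeros((9, 3))],
                  [np.zeros((9, 3)), V2]])
    return F @ x, F @ P @ F.T + V @ Sigma_w @ V.T
```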

3.3. Sensor Models for Localization of Sensor Suite

3.3.1. Sensor Models of Accelerometer

According to Equation (2), the sensor model outputting linear acceleration with respect to the body frame can be approximated with Gaussian noise as

$\mathbf{z}_{k,a} = h^{\ddot{p}}_{k,a}(\mathbf{x}_k, \mathbf{r}^B) + \mathbf{v}_a \triangleq R(\boldsymbol{\theta}_k)^T \ddot{\mathbf{p}}_k + \dot{\boldsymbol{\omega}}^B_k \times \mathbf{r}^B + \boldsymbol{\omega}^B_k \times (\boldsymbol{\omega}^B_k \times \mathbf{r}^B) + \mathbf{v}_a$, (13)

where $\mathbf{z}_{k,a}$ is the measured linear acceleration, $\mathbf{r}^B$ is the position of the accelerometer in the body frame, and $\mathbf{v}_a \sim N(0, \Sigma_a)$ is the measurement noise of the accelerometer. $R(\boldsymbol{\theta})$ is the rotation matrix [41] that transforms the body frame to the inertial frame.
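A small sketch of this accelerometer model follows (NumPy; the Z-Y-X yaw-pitch-roll convention for $R(\boldsymbol{\theta})$ is our assumption, since the paper only cites [41] for the rotation matrix, and the function names are hypothetical):

```python
import numpy as np

def rotmat(eul):
    """Rotation matrix R(theta), body to inertial; Z-Y-X convention assumed."""
    phi, th, psi = eul  # roll, pitch, yaw
    Rz = np.array([[np.cos(psi), -np.sin(psi), 0.0],
                   [np.sin(psi), np.cos(psi), 0.0],
                   [0.0, 0.0, 1.0]])
    Ry = np.array([[np.cos(th), 0.0, np.sin(th)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(th), 0.0, np.cos(th)]])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(phi), -np.sin(phi)],
                   [0.0, np.sin(phi), np.cos(phi)]])
    return Rz @ Ry @ Rx

def accel_model(p_ddot, eul, w_B, wdot_B, r_B):
    """Noise-free accelerometer reading in the body frame, Eq. (13)."""
    return (rotmat(eul).T @ p_ddot          # inertial accel. rotated to body
            + np.cross(wdot_B, r_B)          # tangential term
            + np.cross(w_B, np.cross(w_B, r_B)))  # centripetal term
```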

Because of the association through integration, the measurement zk,a can also be represented with the velocity and the position. This additionally introduces two sensor models:

$\mathbf{z}_{k,a} = h^{p}_{k,a}(\mathbf{x}_k, \mathbf{r}^B) + \mathbf{v}_a \triangleq R(\boldsymbol{\theta}_k)^T \dfrac{\mathbf{p}_k - 2\mathbf{p}_{k-1} + \mathbf{p}_{k-2}}{\Delta t^2} + \dot{\boldsymbol{\omega}}^B_k \times \mathbf{r}^B + \boldsymbol{\omega}^B_k \times (\boldsymbol{\omega}^B_k \times \mathbf{r}^B) + \mathbf{v}_a$,
$\mathbf{z}_{k,a} = h^{\dot{p}}_{k,a}(\mathbf{x}_k, \mathbf{r}^B) + \mathbf{v}_a \triangleq R(\boldsymbol{\theta}_k)^T \dfrac{\dot{\mathbf{p}}_k - \dot{\mathbf{p}}_{k-1}}{\Delta t} + \dot{\boldsymbol{\omega}}^B_k \times \mathbf{r}^B + \boldsymbol{\omega}^B_k \times (\boldsymbol{\omega}^B_k \times \mathbf{r}^B) + \mathbf{v}_a$. (14)

Let the measurements and the sensor models be described collectively as

$\mathbf{z}^a_k = \begin{bmatrix} \mathbf{z}_{k,a} \\ \mathbf{z}_{k,a} \\ \mathbf{z}_{k,a} \end{bmatrix}, \quad h^a_k(\mathbf{x}_k) = \begin{bmatrix} h^{p}_{k,a} \\ h^{\dot{p}}_{k,a} \\ h^{\ddot{p}}_{k,a} \end{bmatrix}.$ (15)

The corresponding Jacobian matrix is written as

$H_{k,a} \triangleq \dfrac{\partial h^a_k}{\partial \mathbf{x}} = \begin{bmatrix} \frac{R(\boldsymbol{\theta}_k)^T}{\Delta t^2} & 0_3 & 0_3 & \frac{\partial h^{p}_{k,a}}{\partial \boldsymbol{\theta}_k} & \frac{\partial h^{\ddot{p}}_{k,a}}{\partial \boldsymbol{\omega}^B_k} & -C(\mathbf{r}^B) \\ 0_3 & \frac{R(\boldsymbol{\theta}_k)^T}{\Delta t} & 0_3 & \frac{\partial h^{\dot{p}}_{k,a}}{\partial \boldsymbol{\theta}_k} & \frac{\partial h^{\ddot{p}}_{k,a}}{\partial \boldsymbol{\omega}^B_k} & -C(\mathbf{r}^B) \\ 0_3 & 0_3 & R(\boldsymbol{\theta}_k)^T & \frac{\partial h^{\ddot{p}}_{k,a}}{\partial \boldsymbol{\theta}_k} & \frac{\partial h^{\ddot{p}}_{k,a}}{\partial \boldsymbol{\omega}^B_k} & -C(\mathbf{r}^B) \end{bmatrix}$, (16)

where $C(\cdot)$ is the skew-symmetric matrix in Equation (5) and

$\dfrac{\partial h^{p}_{k,a}}{\partial \boldsymbol{\theta}_k} = \dfrac{\partial}{\partial \boldsymbol{\theta}} \left( R(\boldsymbol{\theta})^T \dfrac{\delta^2 \mathbf{p}_k}{\Delta t^2} \right), \quad \dfrac{\partial h^{\dot{p}}_{k,a}}{\partial \boldsymbol{\theta}_k} = \dfrac{\partial}{\partial \boldsymbol{\theta}} \left( R(\boldsymbol{\theta})^T \dfrac{\delta \dot{\mathbf{p}}_k}{\Delta t} \right), \quad \dfrac{\partial h^{\ddot{p}}_{k,a}}{\partial \boldsymbol{\theta}_k} = \dfrac{\partial}{\partial \boldsymbol{\theta}} \left( R(\boldsymbol{\theta})^T \ddot{\mathbf{p}}_k \right), \quad \dfrac{\partial h^{\ddot{p}}_{k,a}}{\partial \boldsymbol{\omega}^B_k} = -C(\boldsymbol{\omega}^B_k \times \mathbf{r}^B) - C(\boldsymbol{\omega}^B_k) C(\mathbf{r}^B),$

where $\delta$ represents the finite difference operation, $\delta^2 \mathbf{p}_k = \mathbf{p}_k - 2\mathbf{p}_{k-1} + \mathbf{p}_{k-2}$, and $\delta \dot{\mathbf{p}}_k = \dot{\mathbf{p}}_k - \dot{\mathbf{p}}_{k-1}$.

3.3.2. Sensor Models of Gyroscope

Because the gyroscope measures the angular velocity, the proposed approach constructs two sensor models, each outputting the angular velocity and the sensor suite orientation:

$\mathbf{z}_{k,g} = h^{\omega}_{k,g}(\mathbf{x}_k) + \mathbf{v}_g \triangleq \boldsymbol{\omega}^B_k + \mathbf{v}_g, \quad \mathbf{z}_{k,g} = h^{\theta}_{k,g}(\mathbf{x}_k) + \mathbf{v}_g \triangleq E(\boldsymbol{\theta}_k)^{-1} \dfrac{\boldsymbol{\theta}_k - \boldsymbol{\theta}_{k-1}}{\Delta t} + \mathbf{v}_g$, (17)

where vg is the gyroscope measurement noise, and the inverse of the Euler angle rates matrix is given by

$E(\boldsymbol{\theta})^{-1} = \begin{bmatrix} 1 & 0 & -\sin\theta \\ 0 & \cos\phi & \sin\phi \cos\theta \\ 0 & -\sin\phi & \cos\phi \cos\theta \end{bmatrix}.$

Let the measurements and the sensor models be described collectively again:

$\mathbf{z}^g_k = \begin{bmatrix} \mathbf{z}_{k,g} \\ \mathbf{z}_{k,g} \end{bmatrix}, \quad h^g_k(\mathbf{x}_k) = \begin{bmatrix} h^{\theta}_{k,g} \\ h^{\omega}_{k,g} \end{bmatrix}.$ (18)

The Jacobian matrix is given by

$H_{k,g} \triangleq \dfrac{\partial h^g_k}{\partial \mathbf{x}} = \begin{bmatrix} 0_3 & 0_3 & 0_3 & \frac{E(\delta \boldsymbol{\theta}_k) + E(\boldsymbol{\theta}_k)^{-1}}{\Delta t} & 0_3 & 0_3 \\ 0_3 & 0_3 & 0_3 & 0_3 & I_3 & 0_3 \end{bmatrix}$, (19)

with

$E(\mathbf{v}) = \begin{bmatrix} 0 & -v_3 \cos\theta & 0 \\ -v_2 \sin\phi + v_3 \cos\phi \cos\theta & -v_3 \sin\phi \sin\theta & 0 \\ -v_2 \cos\phi - v_3 \sin\phi \cos\theta & -v_3 \cos\phi \sin\theta & 0 \end{bmatrix}.$ (20)

$\delta \boldsymbol{\theta}_k = \boldsymbol{\theta}_k - \boldsymbol{\theta}_{k-1}$ is the finite difference.

3.3.3. Sensor Models for Angular Acceleration

The derivation of the angular acceleration ω˙kB in Equation (13) from accelerometers in Equations (3)–(5) additionally introduces sensor models for angular acceleration. They are associated with the angular acceleration, the angular velocity, and the orientation as

$\mathbf{z}_{k,\dot{\omega}} = h^{\dot{\omega}}_{k,\dot{\omega}}(\mathbf{x}) + \mathbf{v}_{k,\dot{\omega}} \triangleq \dot{\boldsymbol{\omega}}^B_k + \mathbf{v}_{k,\dot{\omega}}$,
$\mathbf{z}_{k,\dot{\omega}} = h^{\omega}_{k,\dot{\omega}}(\mathbf{x}) + \mathbf{v}_{k,\dot{\omega}} \triangleq \dfrac{\boldsymbol{\omega}^B_k - \boldsymbol{\omega}^B_{k-1}}{\Delta t} + \mathbf{v}_{k,\dot{\omega}}$,
$\mathbf{z}_{k,\dot{\omega}} = h^{\theta}_{k,\dot{\omega}}(\mathbf{x}) + \mathbf{v}_{k,\dot{\omega}} \triangleq E(\boldsymbol{\theta}_k)^{-1} \dfrac{\boldsymbol{\theta}_k - 2\boldsymbol{\theta}_{k-1} + \boldsymbol{\theta}_{k-2}}{\Delta t^2} + \mathbf{v}_{k,\dot{\omega}}$, (21)

where $\mathbf{v}_{k,\dot{\omega}} \sim N(0, \Sigma_{\dot{\omega}})$ is the measurement noise of the angular acceleration. Let the measurements and the sensor models be described collectively:

$\mathbf{z}^{\dot{\omega}}_k = \begin{bmatrix} \mathbf{z}_{k,\dot{\omega}} \\ \mathbf{z}_{k,\dot{\omega}} \\ \mathbf{z}_{k,\dot{\omega}} \end{bmatrix}, \quad h^{\dot{\omega}}_k(\mathbf{x}_k) = \begin{bmatrix} h^{\theta}_{k,\dot{\omega}} \\ h^{\omega}_{k,\dot{\omega}} \\ h^{\dot{\omega}}_{k,\dot{\omega}} \end{bmatrix}.$ (22)

The Jacobian matrix is obtained by

$H_{k,\dot{\omega}} \triangleq \dfrac{\partial h^{\dot{\omega}}_k(\mathbf{x}_k)}{\partial \mathbf{x}} = \begin{bmatrix} 0_3 & 0_3 & 0_3 & \frac{E(\delta^2 \boldsymbol{\theta}_k) + E(\boldsymbol{\theta}_k)^{-1}}{\Delta t^2} & 0_3 & 0_3 \\ 0_3 & 0_3 & 0_3 & 0_3 & \frac{I_3}{\Delta t} & 0_3 \\ 0_3 & 0_3 & 0_3 & 0_3 & 0_3 & I_3 \end{bmatrix}$ (23)

with $E(\cdot)$ defined in Equation (20) and $\delta^2 \boldsymbol{\theta}_k = \boldsymbol{\theta}_k - 2\boldsymbol{\theta}_{k-1} + \boldsymbol{\theta}_{k-2}$.

While the sensor models have been constructed, the remaining modeling process for angular acceleration is the derivation of the covariance Σω˙ for angular acceleration from a pair of IMUs. We expand the right-hand side of Equation (4) by considering noise on top of the measurement of acceleration and angular velocity:

$\mathbf{b} + \mathbf{v}_b = (\mathbf{a}_1 - \mathbf{a}_2) + \frac{1}{2}\boldsymbol{\omega}_2 \times (\boldsymbol{\omega}_2 \times \mathbf{r}) + \frac{1}{2}\boldsymbol{\omega}_1 \times (\boldsymbol{\omega}_1 \times \mathbf{r}) + \mathbf{v}_{a_1} - \mathbf{v}_{a_2} + \frac{1}{2}\boldsymbol{\omega}_2 \times (\mathbf{v}_{\omega_2} \times \mathbf{r}) + \frac{1}{2}\mathbf{v}_{\omega_2} \times (\boldsymbol{\omega}_2 \times \mathbf{r}) + \frac{1}{2}\boldsymbol{\omega}_1 \times (\mathbf{v}_{\omega_1} \times \mathbf{r}) + \frac{1}{2}\mathbf{v}_{\omega_1} \times (\boldsymbol{\omega}_1 \times \mathbf{r}) + \frac{1}{2}\mathbf{v}_{\omega_2} \times (\mathbf{v}_{\omega_2} \times \mathbf{r}) + \frac{1}{2}\mathbf{v}_{\omega_1} \times (\mathbf{v}_{\omega_1} \times \mathbf{r}),$

where $\mathbf{v}_b$ is the noise on the vector $\mathbf{b}$, $\mathbf{v}_{a_1}$ and $\mathbf{v}_{a_2}$ are the measurement noises of the accelerometers, and $\mathbf{v}_{\omega_1}$ and $\mathbf{v}_{\omega_2}$ are those of the gyroscopes. By neglecting the smaller quadratic noise terms above, the covariance can be computed as

$\Sigma_b \approx \Sigma_{a_1} + \Sigma_{a_2} + \frac{1}{2}\tilde{C}_1 \Sigma_{\omega_1} \tilde{C}_1^T + \frac{1}{2}\tilde{C}_2 \Sigma_{\omega_2} \tilde{C}_2^T$, (24)

where

$\tilde{C}_1 = C(\boldsymbol{\omega}_1 \times \mathbf{r}) + C(\boldsymbol{\omega}_1) C(\mathbf{r}), \quad \tilde{C}_2 = C(\boldsymbol{\omega}_2 \times \mathbf{r}) + C(\boldsymbol{\omega}_2) C(\mathbf{r}).$

C(·) is the cross-product matrix defined in Equation (5).
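Equation (24) can be transcribed directly into NumPy as follows (a sketch; the variable names are ours):

```python
import numpy as np

def b_covariance(Sigma_a1, Sigma_a2, Sigma_w1, Sigma_w2, w1, w2, r):
    """Covariance of b in Eq. (24), neglecting the quadratic noise terms."""
    def C(v):  # cross-product matrix of Eq. (5)
        return np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])
    Ct1 = C(np.cross(w1, r)) + C(w1) @ C(r)
    Ct2 = C(np.cross(w2, r)) + C(w2) @ C(r)
    return (Sigma_a1 + Sigma_a2
            + 0.5 * Ct1 @ Sigma_w1 @ Ct1.T
            + 0.5 * Ct2 @ Sigma_w2 @ Ct2.T)
```

The result is symmetric and positive semi-definite by construction, as required of a covariance used in the EKF correction.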

3.3.4. Sensor Model of Downward Camera

Since the downward camera ultimately derives its pose with respect to the global frame, the sensor model associates the observations with the pose of the sensor suite:

$\mathbf{z}_{k,p} = h_{k,p}(\mathbf{x}_k, \mathbf{v}_{k,p}) \triangleq \mathbf{p}_k + R(\boldsymbol{\theta}_k)^T \mathbf{r}^B_c + \mathbf{v}_{k,p}, \quad \mathbf{z}_{k,\theta} = h_{k,\theta}(\mathbf{x}_k, \mathbf{v}_{k,\theta}) \triangleq \boldsymbol{\theta}_k + \mathbf{v}_{k,\theta}$, (25)

where rcB is the coordinate of the downward camera in the body frame. Let the measurements and the sensor models be described collectively:

$\mathbf{z}_{k,c} = \begin{bmatrix} \mathbf{p}_{k,c} \\ \boldsymbol{\theta}_{k,c} \end{bmatrix}, \quad h_{k,c}(\mathbf{x}_k) \triangleq \begin{bmatrix} \mathbf{p}_k + R(\boldsymbol{\theta}_k)^T \mathbf{r}^B_c \\ \boldsymbol{\theta}_k \end{bmatrix} + \mathbf{v}_{k,c}.$ (26)

The Jacobian of the downward camera model is given by

$H_{k,c} \triangleq \dfrac{\partial h_{k,c}}{\partial \mathbf{x}} = \begin{bmatrix} I_3 & 0_3 & 0_3 & \frac{\partial (R(\boldsymbol{\theta})^T \mathbf{r}^B_c)}{\partial \boldsymbol{\theta}} & 0_3 & 0_3 \\ 0_3 & 0_3 & 0_3 & I_3 & 0_3 & 0_3 \end{bmatrix}.$ (27)

3.4. EKF

The implementation of the EKF for the localization of the sensor suite is relatively straightforward once the motion model and all the sensor models have been developed. Given the motion model (11), the proposed approach predicts the mean and the covariance, xk|k1 and Pk|k1, using its prior belief xk1|k1 and Pk1|k1 as

$\mathbf{x}_{k|k-1} = F_k \mathbf{x}_{k-1|k-1}, \quad P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + V_k \Sigma_{k,w} V_k^T.$ (28)

The sensor models are each given by a function of some state variables. Accordingly, the proposed approach corrects the estimation through the parallel KF where the correction of each state from each sensor is fused through summation [42]:

$\mathbf{x}_{k|k} = \mathbf{x}_{k|k-1} + \sum_{s \in S} K_{k,s} \left( \mathbf{z}_{k,s} - h_{k,s}(\mathbf{x}_{k|k-1}) \right), \quad P_{k|k}^{-1} = P_{k|k-1}^{-1} + \sum_{s \in S} H_{k,s}^T \Sigma_{k,s}^{-1} H_{k,s}$, (29)

where sS is each sensor, and Σk,s and Hk,s are the covariance of measurement noise of sensor and the Jacobian matrix of the sensor model hk,s(x), respectively. Kalman gain for sensor s is given by

$K_{k,s} = P_{k|k} H_{k,s}^T \Sigma_{k,s}^{-1}.$ (30)

A great advantage of the proposed formulation is its simplicity. While the dimension of each correction differs, the Kalman gain acts not only as a scaling factor but also as a transformation matrix, so all the correction terms can be summed without loss of generality.
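A minimal sketch of this parallel correction, Equations (29) and (30), in NumPy follows; the `sensors` container of `(z, h, H, Sigma)` tuples is a hypothetical interface of ours, with the covariance fused in information form exactly as written in Equation (29):

```python
import numpy as np

def correct(x_pred, P_pred, sensors):
    """Parallel-KF correction of Eqs. (29) and (30).

    sensors: list of (z, h, H, Sigma) tuples, one per sensor s, where
    h(x) is the sensor model, H its Jacobian at x_pred, and Sigma the
    measurement noise covariance.
    """
    # Fuse covariances in information form (right half of Eq. 29).
    info = np.linalg.inv(P_pred)
    for _, _, H, Sigma in sensors:
        info = info + H.T @ np.linalg.inv(Sigma) @ H
    P = np.linalg.inv(info)
    # Sum each sensor's correction (left half of Eq. 29).
    x = x_pred.copy()
    for z, h, H, Sigma in sensors:
        K = P @ H.T @ np.linalg.inv(Sigma)  # Kalman gain, Eq. (30)
        x = x + K @ (z - h(x_pred))
    return x, P
```

In the scalar case with one direct observation and unit noise, this reduces to the familiar Kalman update with gain 0.5, which is a quick sanity check of the formulation.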

4. Sensor Suite Design and Installation

Figure 5a shows the design of the sensor suite with all sensors installed. The stereo cameras are installed on the sensor suite using an orientation-adjustable base to obtain a better view of the toeboard. The downward camera is installed on a leg welded to the sensor suite. A three-axis gyroscope and a three-axis accelerometer are mounted with each camera to estimate its position and orientation. The frame size is 560 mm × 450 mm × 130 mm, which enables the sensor suite to be installed on the mounting bolt holes of the front seat without modifications to the test vehicle structure, as shown in Figure 5b.

Figure 5. Sensor suite design (a) and installation (b).

The sensor specifications are shown in Table 1. Since a vehicle crash typically lasts 100–200 ms, high-speed sensors with a wide measurement range are selected.

Table 1.

Parameters of sensors.

Inertial sensor sampling rate: 20,000 Hz
Accelerometer noise: 0.01 m/s² (1σ)
Accelerometer range: 2000 g
Gyroscope noise: 0.01 °/s (1σ)
Gyroscope range: 18,000 °/s
Camera sampling rate: 1000 Hz
Camera resolution: 1024 × 1024 pixels
Downward camera height above the ground: 95 mm
Target toeboard size: 500 mm × 350 mm

The sensor suite was designed and tested in a laboratory environment. It was later tested in a full-size passenger car at the crash test center of Honda Research & Development Americas, Inc. (Ohio, USA). Figure 6a,b show the side and top views of the sensor suite installed on the test vehicle. The stereo camera was placed 400 mm away from the toeboard so that both cameras could have a full view of it. A 150 mm × 150 mm hole was opened in the vehicle floor to install the downward camera, as shown in Figure 6b,c. The downward camera was about 95 mm above the ground and had a clear view of the checkerboard pattern. Additional installations of the IMUs are shown in Figure 6c,d.

Figure 6. Sensor suite for the real car test: (a) side view; (b) top view; (c) bottom view; (d) close-up view of camera and IMU sensors.

5. Experimental Results

This section first demonstrates the localization of the sensor suite using the proposed EKF framework in a simulated environment. The results of a real car crash test then follow.

5.1. Sensor Suite Localization in Simulated Environment

Consider the following case, where the center of the sensor suite undergoes simple linear and angular motion in the global frame with

$\mathbf{a} = \left[ -(2\pi f)^2 \sin(2\pi f t), 0, 0 \right]^T, \quad \boldsymbol{\omega} = \left[ 2\pi f_w (\cos(2\pi f_w t) + 1), 0, 0 \right]^T,$

with angular acceleration

$\dot{\boldsymbol{\omega}} = \left[ -(2\pi f_w)^2 \sin(2\pi f_w t), 0, 0 \right]^T.$

The measurements of the accelerometer and gyroscope in the body frame are then

$\mathbf{a}^B_{IMU} = \mathbf{a}^B_c + \dot{\boldsymbol{\omega}}^B \times \mathbf{r}^B + \boldsymbol{\omega}^B \times (\boldsymbol{\omega}^B \times \mathbf{r}^B) + \mathbf{v}_a, \quad \boldsymbol{\omega}^B = \boldsymbol{\omega} + \mathbf{v}_g,$ (31)

where the accelerations at the sensor suite center, expressed in the body frame, reduce to $\mathbf{a}^B_c = \mathbf{a}$ and $\dot{\boldsymbol{\omega}}^B = \dot{\boldsymbol{\omega}}$ since the motion is purely along and about the x-axis,

and $\mathbf{v}_g \sim N(0, \sigma_g)$ and $\mathbf{v}_a \sim N(0, \sigma_a)$ model the sensor noise of the gyroscope and the accelerometer, respectively. The pose of the downward camera is provided by

$\mathbf{p}_{CD} = \mathbf{p} + R(\boldsymbol{\theta}) \mathbf{r}^B_{CD} + \mathbf{v}_{c,p}, \quad \boldsymbol{\theta}_{CD} = \boldsymbol{\theta} + \mathbf{v}_{c,\theta},$ (32)

where the position and orientation of the sensor suite are

$\mathbf{p} = \left[ \sin(2\pi f t) + 2\pi f t + 2\pi, 0, 0 \right]^T, \quad \boldsymbol{\theta} = \left[ \sin(2\pi f_w t) + 2\pi f_w t + 2\pi, 0, 0 \right]^T.$

Here, $R(\boldsymbol{\theta})$ is the rotation matrix given the Euler angle $\boldsymbol{\theta}$, and $\mathbf{r}^B_{CD}$ is the position of the downward camera in the body frame. The sensor noise of the camera, $\mathbf{v}_c \sim N(0, \sigma_c)$, is Gaussian. IMU data are available between 0.1 s and 0.3 s, while camera pose information is available between 0.01 s and 0.2 s, to be consistent with the real car crash test.

By letting $f = f_w = 10$, the angular acceleration can be recovered from the measurements of the accelerometers and gyroscopes, as illustrated in Figure 7a. This angular acceleration follows the prescribed $\dot{\boldsymbol{\omega}}^B$: the amplitude in the x-direction is close to the given amplitude $(2\pi f_w)^2$, and the other two components ($\dot{\omega}_y$ and $\dot{\omega}_z$) are zero-mean. The estimated velocity is shown in Figure 7b. The linear and angular velocities in the x-direction follow the cosine wave, while the motion in the other directions is negligible. The displacement and roll angle are shown in Figure 7c,d. Figure 7e illustrates the motion of the sensor suite in space, where rectangular panels illustrate the position with a normal vector for the orientation.

Figure 7. Results of sensor suite localization using the EKF framework in a simulated environment. (a) Angular acceleration; (b) linear and angular velocity of the camera fixture; (c) estimated displacement of the camera fixture; (d) estimated roll angle of the camera fixture; (e) trajectory of the camera fixture in space.

The differential entropy of the estimation is plotted in Figure 8 and is given by

$h(\mathbf{x}) = \frac{1}{2} \log \left( (2\pi e)^n |\Sigma_{k|k}| \right)$

for the multivariate Gaussian distribution considered here. The entropy shows that the downward camera pose significantly improves the certainty of the estimation.
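The entropy above can be evaluated numerically as follows (a sketch using the log-determinant for numerical stability; the function name is ours):

```python
import numpy as np

def differential_entropy(Sigma):
    """h(x) = 0.5 * log((2*pi*e)^n * |Sigma|) for an n-dimensional Gaussian."""
    n = Sigma.shape[0]
    _, logdet = np.linalg.slogdet(Sigma)  # avoids overflow of det for large n
    return 0.5 * (n * np.log(2 * np.pi * np.e) + logdet)
```

A drop in this scalar after a correction step indicates that the fused measurement has tightened the state covariance, which is how Figure 8 reads.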

Figure 8. Entropy of sensor suite localization in the simulated environment.

To test the limits of the proposed EKF framework, we varied the noise level through $\sigma_a$, $\sigma_g$, or $\sigma_c$ individually while keeping the other two at a low level (0.1% of the maximum value). For each combination, 100 tests were performed, and the statistics of the maximum position error $\max \| \mathbf{p}_{k|k} - \mathbf{p}_k \|$ are plotted in Figure 9, where $\mathbf{p}_{k|k}$ is the estimate and $\mathbf{p}_k$ is the ground truth at step k. The figure shows that the proposed EKF-based estimation is accurate when the noise is small, say, less than 1%, while the error grows significantly when the noise of the accelerometers and gyroscopes increases further. Note that the dead-reckoning error stems from the accelerometer and gyroscope measurements; therefore, noise on the downward camera pose does not significantly influence the estimation as long as the IMU measurements are not significantly noisy.

Figure 9. Error of EKF estimation under different noise levels on the accelerometer, gyroscope, and camera measurements.

5.2. Sensor Suite Localization in 56 km/h Frontal Barrier Car Crash Test

5.2.1. Pose Estimation of Downward Camera

Figure 10 shows the results of the pose detection of the downward camera using the computer vision technique. Figure 10a shows the checkerboard pattern detected, while Figure 10b shows the features matched between two consecutive images for the purpose of localizing the downward camera. With the pixel coordinates in Figure 10a and their global frame coordinates obtained through Equation (7), the camera pose is computed; the position is presented in Figure 10c and the orientation in Figure 10d.

Figure 10. Results of camera pose estimation using the computer vision technique.

5.2.2. Localization of Sensor Suite

For localization, the angular acceleration is first computed from the IMU measurements and is plotted in Figure 11a. The results show that the pitch motion (rotation about the y-axis), related to $\dot{\omega}_y$, is significantly larger than that in the other two directions, which is consistent with the onsite observation. The state of the sensor suite is then estimated through the proposed EKF framework using the computed pose of the downward camera, the angular acceleration, and the IMU measurements. The results are shown in the rest of Figure 11.

Figure 11. Frontal barrier localization results. (a) Angular acceleration; (b) estimated velocity; (c) estimated position of the camera fixture; (d) pitch angle of the camera fixture; (e) localization of the camera fixture in the global frame.

Figure 11b shows the estimated velocity. The frontal crash test had an initial speed of 15.6 m/s (56 km/h). The results show that, after the crash, the vehicle bounced back and coasted at a speed of around 3.8 m/s. The motion in the y- and z-directions is much smaller than that in the x-direction, which is consistent with the onsite measurement.

Figure 11c shows the position estimated for the sensor suite. Instead of the center, the position of the downward camera is plotted so that the results from Section 5.2.1 can be compared. The structural compression, i.e., the distance moved in the x-direction after the crash, is about 700 mm in the figure, which conforms to the onsite measurement. The figure also shows that the vehicle bounced up about 50 mm after the crash (from 33 ms to 77 ms), and the motion in the z-direction was relatively small compared to the other two directions.

Figure 11d shows the pitch motion during the crash. The estimation shows that the maximum pitch angle is around 8° during the crash. For comparison, we also include the pitch angles measured by the downward camera and by the gyroscope installed at the center of the designed sensor suite. The camera measurement is subject to the oscillation of the frame and shows strong oscillations throughout the in-crash measurement, while the integration of the gyroscope measurement is subject to dead-reckoning errors and cannot capture the precise pitch angle. A measurement based on a single sensor is therefore less accurate than the state estimation technique that fuses multiple sensor measurements. The motion of the sensor suite during the crash is illustrated in Figure 11e, where rectangular panels represent its position and orientation every 5 ms. On each panel, the normal vector is plotted and scaled by the velocity $v_x$ to illustrate the orientation and velocity of the sensor suite during the crash test.

6. Conclusions and Ongoing Work

This paper presented a technique to localize a stereo camera for in-crash toeboard deformation measurement. For localization purposes, we designed a sensor suite with IMU sensors and cameras. The technique recursively estimates the pose of the sensor suite to which the stereo camera is attached. Using an EKF-based framework, the state, comprising the state mean and covariance, is predicted using a motion model and the current state; the prediction is then corrected through the proposed sensor models and the measurements of the IMUs and the camera. The state estimation technique removes the dead-reckoning errors of the IMU sensors and, compared to a classical measurement based on a single sensor, removes the measurement error incurred by the excessive oscillation of the sensor suite during a crash test. The proposed technique was first verified in a simulated environment, in which the estimation matched the ground truth well and remained reliable given moderate sensor noise. The real crash data were then analyzed and provided localization results for the sensor suite as well as for the stereo camera.

This paper focuses mainly on the localization of the sensor suite for toeboard deformation measurement; the deformation measurement itself is not covered here and will be reported in future work. We observed that the cameras underwent motion relative to the sensor suite in the real crash test. This is likely due to structural flexibility, and further investigation of its influence may yield more accurate localization results.

Acknowledgments

The authors would like to thank Shinsuke Shibata and Kaitaro Nambu at Honda Motor Co., Ltd. and Kazunobu Seimiya at Honda Development & Manufacturing of America for their assistance in the crash tests. We would also like to thank Mengyu Song for his assistance in image processing.

Author Contributions

Conceptualization, W.Z., T.F., A.N., and T.H.; methodology, W.Z. and T.F.; software, W.Z.; validation, W.Z., T.F., A.N., and T.H.; formal analysis, W.Z.; investigation, W.Z.; resources, W.Z., T.F., A.N., and T.H.; data curation, A.N. and T.H.; writing—original draft preparation, W.Z.; writing—review and editing, T.F.; visualization, W.Z.; supervision, T.F.; project administration, T.F.; funding acquisition, T.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Honda Motor Co., Ltd.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. NHTSA. Motor vehicle crashes: Overview. Traffic Saf. Facts Res. Note. 2016;2016:1–9.
2. Kim H., Hong S., Hong S., Huh H., Motors K., Kwangmyung Shi K. The evaluation of crashworthiness of vehicles with forming effect. In: Proceedings of the 4th European LS-DYNA Users Conference; Ulm, Germany, 22–23 May 2003; pp. 25–34.
3. Mehdizadeh A., Cai M., Hu Q., Alamdar Yazdi M.A., Mohabbati-Kalejahi N., Vinel A., Rigdon S.E., Davis K.C., Megahed F.M. A review of data analytic applications in road traffic safety. Part 1: Descriptive and predictive modeling. Sensors. 2020;20:1107. doi: 10.3390/s20041107.
4. Wei Z., Karimi H.R., Robbersmyr K.G. Analysis of the Relationship between Energy Absorbing Components and Vehicle Crash Response. Volume 4. SAE International; Warrendale, PA, USA: 2016. (SAE Technical Paper Series).
5. Górniak A., Matla J., Górniak W., Magdziak-Tokłowicz M., Krakowian K., Zawiślak M., Włostowski R., Cebula J. Influence of a passenger position seating on recline seat on a head injury during a frontal crash. Sensors. 2022;22:2003. doi: 10.3390/s22052003.
6. Varat M.S., Husher S.E. Vehicle crash severity assessment in lateral pole impacts. SAE Trans. 1999;108:302–324.
7. Song M., Chen C., Furukawa T., Nakata A., Shibata S. A sensor suite for toeboard three-dimensional deformation measurement during crash. Stapp Car Crash J. 2019;63:331–342. doi: 10.4271/2019-22-0014.
8. Nakata A., Furukawa T., Shibata S., Hashimoto T. Development of Chronological Measurement Method of Three-Dimensional Toe Board Deformation During Frontal Crash. Trans. Soc. Automot. Eng. Jpn. 2021;52:1131–1136.
9. Austin R.A. Lower extremity injuries and intrusion in frontal crashes. Accid. Reconstr. J. 2013;23:1–23.
10. Patalak J.P., Stitzel J.D. Evaluation of the effectiveness of toe board energy-absorbing material for foot, ankle, and lower leg injury reduction. Traffic Inj. Prev. 2018;19:195–200. doi: 10.1080/15389588.2017.1354128.
11. Hu Y., Liu X., Neal-Sturgess C.E., Jiang C.Y. Lower leg injury simulation for EuroNCAP compliance. Int. J. Crashworthiness. 2011;16:275–284. doi: 10.1080/13588265.2011.559797.
12. Lin C.S., Chou K.D., Yu C.C. Numerical simulation of vehicle crashes. Appl. Mech. Mater. 2014;590:135–143. doi: 10.4028/www.scientific.net/AMM.590.135.
13. Saha N.K., Wang H.C., El-Achkar R. Frontal offset pole impact simulation of automotive vehicles. In: Proceedings of the International Computers in Engineering Conference and Exposition; San Francisco, CA, USA, 2–6 August 1992; pp. 203–207.
14. Bathe K.J. Crash simulation of cars with finite element analysis. Mech. Eng. Mag. Sel. Artic. 1998;120:82–83.
15. Omar T., Eskandarian A., Bedewi N. Vehicle crash modelling using recurrent neural networks. Math. Comput. Model. 1998;28:31–42. doi: 10.1016/S0895-7177(98)00143-5.
16. Hickey A., Xiao S. Finite Element Modeling and Simulation of Car Crash. Int. J. Mod. Stud. Mech. Eng. 2017;3:1–5.
17. Yang R.J., Wang N., Tho C.H., Bobineau J.P., Wang B.P. Metamodeling development for vehicle frontal impact simulation. J. Mech. Des. 2005;127:1014–1020. doi: 10.1115/1.1906264.
18. Cheng Z.Q., Thacker J.G., Pilkey W.D., Hollowell W.T., Reagan S.W., Sieveka E.M. Experiences in reverse-engineering of a finite element automobile crash model. Finite Elem. Anal. Des. 2001;37:843–860. doi: 10.1016/S0168-874X(01)00071-3.
19. McClenathan R.V., Nakhla S.S., McCoy R.W., Chou C.C. Use of photogrammetry in extracting 3d structural deformation/dummy occupant movement time history during vehicle crashes. SAE Trans. 2005;1:736–742.
20. Zhang X.Y., Jin X.L., Qi W., Sun Y. Virtual reconstruction of vehicle crash accident based on elastic-plastic deformation of auto-body. In: Key Engineering Materials. Volume 274. Trans Tech Publications Ltd.; Bäch SZ, Switzerland: 2004; pp. 1017–1022.
21. Pan B. Digital image correlation for surface deformation measurement: Historical developments, recent advances and future goals. Meas. Sci. Technol. 2018;29:1–33. doi: 10.1088/1361-6501/aac55b.
22. Ghorbani R., Matta F., Sutton M.A. Full-field deformation measurement and crack mapping on confined masonry walls using digital image correlation. Exp. Mech. 2015;55:227–243. doi: 10.1007/s11340-014-9906-y.
23. Scaioni M., Feng T., Barazzetti L., Previtali M., Roncella R. Image-based deformation measurement. Appl. Geomat. 2015;7:75–90. doi: 10.1007/s12518-014-0152-x.
24. Chen F., Chen X., Xie X., Feng X., Yang L. Full-field 3D dimensional measurement using multi-camera digital image correlation system. Opt. Lasers Eng. 2013;51:1044–1052. doi: 10.1016/j.optlaseng.2013.03.001.
25. Lichtenberger R., Schreier H., Ziegahn K. Non-contacting measurement technology for component safety assessment. In: Proceedings of the 6th International Symposium and Exhibition on Sophisticated Car Occupant Safety Systems (AIRBAG’02); Karlsruhe, Germany, 6–8 December 2002.
26. Sutton M.A., Mingqi C., Peters W.H., Chao Y.J., McNeill S.R. Application of an optimized digital correlation method to planar deformation analysis. Image Vis. Comput. 1986;4:143–150. doi: 10.1016/0262-8856(86)90057-0.
27. Chu T., Ranson W., Sutton M.A. Applications of digital-image-correlation techniques to experimental mechanics. Exp. Mech. 1985;25:232–244. doi: 10.1007/BF02325092.
28. De Domenico D., Quattrocchi A., Alizzio D., Montanini R., Urso S., Ricciardi G., Recupero A. Experimental characterization of the FRCM-concrete interface bond behavior assisted by digital image correlation. Sensors. 2021;21:1154. doi: 10.3390/s21041154.
29. Bardakov V.V., Marchenkov A.Y., Poroykov A.Y., Machikhin A.S., Sharikova M.O., Meleshko N.V. Feasibility of digital image correlation for fatigue cracks detection under dynamic loading. Sensors. 2021;21:6457. doi: 10.3390/s21196457.
30. Chou J.Y., Chang C.M. Image motion extraction of structures using computer vision techniques: A comparative study. Sensors. 2021;21:6248. doi: 10.3390/s21186248.
31. Xiong B., Zhang Q., Baltazart V. On quadratic interpolation of image cross-correlation for subpixel motion extraction. Sensors. 2022;22:1274. doi: 10.3390/s22031274.
32. Schmidt T.E., Tyson J., Galanulis K., Revilock D.M., Melis M.E. Full-field dynamic deformation and strain measurements using high-speed digital cameras. In: Proceedings of the 26th International Congress on High-Speed Photography and Photonics; Alexandria, VA, USA, 20–24 September 2004; pp. 174–185.
33. Iliopoulos A.P., Michopoulos J., Andrianopoulos N.P. Performance sensitivity analysis of the mesh-free random grid method for whole field strain measurements. In: Proceedings of the International Design Engineering Technical Conferences and Computers and Information in Engineering Conference; Brooklyn, NY, USA, 3–6 August 2008; pp. 545–555.
34. Furukawa T., Pan J.W. Stochastic identification of elastic constants for anisotropic materials. Int. J. Numer. Methods Eng. 2010;81:429–452. doi: 10.1002/nme.2700.
35. Schweppe F. Recursive state estimation: Unknown but bounded errors and system inputs. IEEE Trans. Autom. Control. 1968;13:22–28. doi: 10.1109/TAC.1968.1098790.
36. Vargas-Melendez L., Boada B.L., Boada M.J.L., Gauchia A., Diaz V. Sensor fusion based on an integrated neural network and probability density function (PDF) dual Kalman filter for on-line estimation of vehicle parameters and states. Sensors. 2017;17:987. doi: 10.3390/s17050987.
37. Qu L., Dailey M.N. Vehicle trajectory estimation based on fusion of visual motion features and deep learning. Sensors. 2021;21:7969. doi: 10.3390/s21237969.
38. Zhang Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000;22:1330–1334. doi: 10.1109/34.888718.
39. Heikkila J., Silvén O. A four-step camera calibration procedure with implicit image correction. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition; San Juan, PR, USA, 17–19 June 1997; pp. 1106–1112.
40. Bouguet J.Y. Camera Calibration Toolbox for Matlab. Available online: http://robots.stanford.edu/cs223b04/JeanYvesCalib/htmls/links.html (accessed on 7 April 2022).
41. Diebel J. Representing attitude: Euler angles, unit quaternions, and rotation vectors. Matrix. 2006;58:1–35.
42. Willner D., Chang C.B., Dunn K.P. Kalman Filter Configurations for Multiple Radar Systems. Technical Report. MIT Lexington Lincoln Laboratory; Lexington, MA, USA: 1976.


