Sensors (Basel, Switzerland). 2020 Nov 29;20(23):6829. doi: 10.3390/s20236829

Estimating Lower Limb Kinematics Using a Lie Group Constrained Extended Kalman Filter with a Reduced Wearable IMU Count and Distance Measurements

Luke Wicent F Sy 1,*, Nigel H Lovell 1, Stephen J Redmond 2
PMCID: PMC7730686  PMID: 33260386

Abstract

Tracking the kinematics of human movement usually requires the use of equipment that constrains the user within a room (e.g., optical motion capture systems), or requires the use of a conspicuous body-worn measurement system (e.g., inertial measurement units (IMUs) attached to each body segment). This paper presents a novel Lie group constrained extended Kalman filter to estimate lower limb kinematics using IMU and inter-IMU distance measurements in a reduced sensor count configuration. The algorithm iterates through the prediction (kinematic equations), measurement (pelvis height assumption/inter-IMU distance measurements, zero velocity update for feet/ankles, flat-floor assumption for feet/ankles, and covariance limiter), and constraint update (formulation of hinged knee joints and ball-and-socket hip joints). The knee and hip joint angle root-mean-square errors in the sagittal plane for straight walking were 7.6±2.6° and 6.6±2.7°, respectively, while the correlation coefficients were 0.95±0.03 and 0.87±0.16, respectively. Furthermore, experiments using simulated inter-IMU distance measurements show that performance improved substantially for dynamic movements, even at large noise levels (σ=0.2 m). However, further validation is recommended with actual distance measurement sensors, such as ultra-wideband ranging sensors.

Keywords: Lie group, constrained extended Kalman filter, gait analysis, motion capture, pose estimation, wearable devices, IMU, distance measurement

1. Introduction

Human pose estimation is the tracking of position and orientation (i.e., pose) of body segments. Joint angles can then be calculated from the relative pose of linked body segments. Applications exist in robotics, virtual reality, animation, and healthcare (e.g., gait analysis). Traditionally, human pose is captured within a laboratory setting using optical motion capture (OMC) systems with up to millimeter level position accuracy [1] when properly configured and calibrated. However, the setup for OMC systems is time consuming and inconvenient (e.g., multiple markers are taped to the body) and requires considerable expertise. Recent miniaturization of inertial measurement units (IMUs) has paved the path toward inertial motion capture (IMC) systems suitable for prolonged use outside of the laboratory. Some examples of clinical applications in the current literature include determining level of spinal damage [2] and Parkinson’s disease diagnosis [3]. Furthermore, developing a comfortable IMC for routine daily use may facilitate interactive rehabilitation [4,5], and allow the study of movement disorder progression to enable predictive diagnostics.

Commercial IMCs attach one sensor per body segment (OSPS) [6], which may be considered too cumbersome and expensive for routine daily use by a consumer due to the number of IMUs required. Similar works also exist in the literature using quaternions [7], Kalman filters (KF) [8], and particle filters [9]. Each IMU typically tracks the orientation of the attached body segment using an orientation estimation algorithm (e.g., [10,11]), and the segments are then connected via a linked kinematic chain, usually rooted at the pelvis. A reduced-sensor-count (RSC) configuration, where IMUs are placed on a subset of body segments, can improve user comfort and reduce setup time and system cost. However, using fewer wearable sensor units necessarily reduces the kinematic information available, which must otherwise be inferred from (i) our knowledge of human movement (e.g., enforcing mechanical joint constraints or making dynamic balance assumptions), or (ii) by using additional sensing modalities within each wearable sensor unit. Each approach will be described in the next subsections.

1.1. Existing Algorithms for Human Motion Tracking Using Fewer IMUs

The performance of algorithms with RSC configuration depends (i) on how the kinematic information of uninstrumented body segments is inferred and (ii) on how body pose is represented.

The kinematic information of body segments which do not have sensors attached to them may be inferred by the algorithm, either through data obtained in the past (i.e., observed correlations between co-movement of different body segments or data-driven approaches) or by using a simplified kinematic model of the human body (i.e., model-based approaches). Data-driven approaches (e.g., nearest-neighbor search [12] and bi-directional recurrent neural network [13]) are able to recreate realistic motion suitable for animation-related applications. However, these approaches are normally biased toward motions already contained in the database, which may limit their use in monitoring pathological gait. Model-based approaches reconstruct human movement using kinematic and biomechanical models (e.g., linear regression [14], constrained KF [15], extended KF (EKF) [16], particle filter [9], and window-based optimization [17]). The use of optimization-based estimators is sometimes favoured due to their relative ease of setup and interpretation. However, such estimators can be very inefficient when tracking a large number of dimensions (e.g., when tracking body pose over a long duration). To significantly increase the efficiency of the algorithm when estimating body pose across time, a recursive estimator can take advantage of the state substructure and reduce the state dimension being tracked [18].

Body poses are usually represented using Euler angles or quaternions [9,16]. However, recent work on pose estimation has shown that using a Lie group to represent the states of a recursive estimator is a promising approach. Such algorithms typically represent the body pose as a chain of linked segments using matrix Lie groups to represent the orientation or pose of each body segment; specifically the special orthogonal group, SO(n), and the special Euclidean group, SE(n), where n = 2, 3 is the spatial dimension for human body kinematics problems. The early works of Wang et al. [19] and Barfoot et al. [20] investigated representations and propagation of pose uncertainty, the former in the context of manipulator kinematics and the latter focused on SE(3), followed by the formulation of recursive estimators using Lie group representation (e.g., EKF [21] and unscented KF (UKF) [22]). Recent literature has reported the use of Lie group based recursive estimators to estimate human movement. Cesic et al. estimated the full body pose from OMC marker measurements and achieved significant improvements compared to an Euler angle representation [23], and even supplemented the approach with an observability analysis [24]. Joukov et al. represented the human body using SO(n), tracking the pose using measurements from IMUs under an OSPS configuration [25]. Joukov et al.’s algorithm was tested using an arm tracking experiment, where the results improved especially during arm poses that cause a singularity when using an Euler angle representation (i.e., the Lie group representation is singularity free).

1.2. Improving Human Pose Estimation Using Additional Sensor Measurements

Another approach is to supplement kinematic information from the IMU with another kind of sensor, which inherently increases cost and reduces battery life. Note that we will focus on systems that supplement pose estimation, not on the global position estimation of the subject (e.g., [26]). For example, IMCs can be supplemented with standard video cameras (e.g., fused using an optimization-based algorithm [27], and deep neural networks [28]) or depth cameras [29] at fixed locations in the capture environment, external to the subject. The combination of IMCs and portable cameras solves a weakness of OMCs (i.e., marker or body segment occlusion) and a weakness of IMCs (i.e., global position drift). However, the system still requires an external sensor that is carried by another person or requires some quick setup. IMCs can also be supplemented by distance measurements (using ultrasonic devices and KF in OSPS configuration [30], using constrained KF in RSC configuration [31]), removing dependence on any external non-body-worn sensor.

1.3. Novelty

This paper describes a novel human pose estimator that represents the state using Lie groups with the state propagated iteratively using a constrained EKF (CEKF) to estimate lower body kinematics for an RSC configuration of IMUs and inter-IMU distance measurements; the Lie group framework and inclusion of inter-IMU distance measurements, along with the exploration of its effect on pose estimation accuracy, are the major advancements made in this paper. It extends the work of [32] and builds on prior work of [15,31], but instead represents the state variables as elements of Lie groups, specifically SE(3), to track both position and orientation (whereas [15] only tracks position and assumes orientation measurements are noise-free). Furthermore, this paper presents in detail a novel Lie group formulation for assumptions typically used in human pose estimation (e.g., zero velocity update, constant body segment lengths, and a hinged knee joint). While not our focus here, our algorithm, with its SE(3) representation, is able to track the global position of body segments, taking into account IMU measurements during the prediction step, and allows a simpler implementation of certain measurement assumptions (e.g., zero velocity update), though at the expense of having an additional constraint step to ensure satisfaction of biomechanical constraints. The algorithm design was motivated by the need for a body pose representation that more closely models the human biomechanical system (without a dramatic increase in the dimensions of the tracked state) from which the missing kinematic information of uninstrumented body segments is inferred. The contributions of this paper advance the development of gait assessment tools for comfortable and long-term monitoring of lower body movement.

2. Mathematical Background: Lie Group and Lie Algebra

The matrix Lie group G is a group of n×n matrices (e.g., SE(3)). Mathematically speaking, it is also a smooth manifold with smooth group composition and inversion (i.e., matrix multiplication and inversion). The Lie algebra g can be represented in a vector space and is closely related to the Lie group G. It represents the tangent space of the group at the identity element [33]. The elegance of Lie theory lies in its ability to represent pose using a vector space (e.g., Lie group G is represented by g) without additional constraints (e.g., without requiring RᵀR = I, as when using a rotation matrix representation of orientation, or ‖q‖ = 1, as when using a quaternion representation of orientation) [34].

The matrix exponential exp_G : g → G (Equation (1)) and matrix logarithm log_G : G → g relate (i.e., are a local diffeomorphism between) the Lie group G and its Lie algebra g. An element of the Lie algebra g is a square matrix that can be represented compactly in an n-dimensional vector space. A linear isomorphism (i.e., one-to-one mapping) between g and R^n is given by the operators [·]^∨_G : g → R^n and [·]_G : R^n → g, which map between the matrix and compact vector representations of the Lie algebra g. Figure 1 shows an illustration of the said mappings.

Figure 1.


Overview of Lie group theory mappings. When G = SE(3), the Lie group element X = T is a 4×4 transformation matrix representing pose (i.e., 3D rotation and translation). Similarly, v = ξ, where the Lie algebra element [ξ]_{SE(3)} ∈ se(3) and the vector ξ ∈ R^n with n = 6.

Furthermore, the adjoint operator of a Lie group, Ad_G(X), the adjoint operator of a Lie algebra, ad_G(v), and the right Jacobian, Φ_G(v) (Equation (2)), where X ∈ G and [v]_G ∈ g, will be used in later sections. Multiplying an n-dimensional vector representation of a Lie algebra by Ad_G(X) ∈ R^{n×n} (i.e., the product Ad_G(X) v) transforms the vector from one coordinate frame to another, similar to how rotation matrices transform points from one frame to another. ad_G(v) is the Lie algebra counterpart of Ad_G(X). The operators for the Lie groups SO(3), SE(3), and R^n are summarized in the next subsections. They will serve as building blocks to implement the algorithm described in this paper. For a more detailed introduction to Lie groups, refer to [18,34,35].

\exp([v]_G) = \sum_{n=0}^{\infty} \frac{1}{n!}\left([v]_G\right)^n \quad (1)
\Phi_G(v) = \sum_{i=0}^{\infty} \frac{(-1)^i}{(i+1)!}\,\mathrm{ad}_G(v)^i, \quad v \in \mathbb{R}^n \quad (2)

2.1. Special Orthogonal Group SO(3)

The special orthogonal group, SO(3) := \{R \in \mathbb{R}^{3\times3} \mid RR^T = I, \det R = 1\}, represents orientation, where R is the typical 3×3 rotation matrix whose column vectors represent the x, y, and z basis vectors. The operations for SO(3) are listed below, and will serve as building blocks for SE(3), which will be described in the next subsection. Note that [a]_{SO(3)} b is equivalent to the cross product of a and b. See Chapter 7 of [18] for details.

[\boldsymbol{\phi}]_{SO(3)} = \begin{bmatrix}\phi_1\\\phi_2\\\phi_3\end{bmatrix}_{SO(3)} = \begin{bmatrix}0 & -\phi_3 & \phi_2\\ \phi_3 & 0 & -\phi_1\\ -\phi_2 & \phi_1 & 0\end{bmatrix}, \quad \begin{bmatrix}0 & -\phi_3 & \phi_2\\ \phi_3 & 0 & -\phi_1\\ -\phi_2 & \phi_1 & 0\end{bmatrix}^{\vee}_{SO(3)} = \begin{bmatrix}\phi_1\\\phi_2\\\phi_3\end{bmatrix} = \boldsymbol{\phi} \quad (3)

If ϕ/|ϕ| represents a unit vector axis we wish to rotate around, and |ϕ| is the angle by which we wish to rotate, then the rotation matrix, R, which will implement this rotation is given by Equation (4), which is also known as Rodrigues’ axis-angle rotation formula. When ϕ is very small, R ≈ I_{3×3} + [ϕ]_{SO(3)}.

R = \exp\left([\boldsymbol{\phi}]_{SO(3)}\right) = \cos|\boldsymbol{\phi}|\, I_{3\times3} + (1-\cos|\boldsymbol{\phi}|)\frac{\boldsymbol{\phi}\boldsymbol{\phi}^T}{|\boldsymbol{\phi}|^2} + \sin|\boldsymbol{\phi}|\left[\frac{\boldsymbol{\phi}}{|\boldsymbol{\phi}|}\right]_{SO(3)} \quad (4)
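As an illustration of Equations (3) and (4), the following is a minimal NumPy sketch of the SO(3) hat operator and the Rodrigues exponential map; it is an illustrative example with names of our choosing, not the authors' implementation.

```python
import numpy as np

def so3_hat(phi):
    """[phi]_SO(3): map a 3-vector to its skew-symmetric matrix (Equation (3))."""
    return np.array([[0.0, -phi[2], phi[1]],
                     [phi[2], 0.0, -phi[0]],
                     [-phi[1], phi[0], 0.0]])

def so3_exp(phi):
    """Rodrigues' axis-angle formula (Equation (4)): rotation matrix from phi."""
    angle = np.linalg.norm(phi)
    if angle < 1e-9:                       # small-angle case: R ~ I + [phi]_SO(3)
        return np.eye(3) + so3_hat(phi)
    axis = phi / angle
    K = so3_hat(axis)
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
```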

Furthermore, the Lie algebra adjoint, Lie group adjoint, and inverse operators are listed in Equation (5).

\mathrm{ad}_{SO(3)}(\boldsymbol{\phi}) = [\boldsymbol{\phi}]_{SO(3)}, \quad \mathrm{Ad}_{SO(3)}(R) = R, \quad R^{-1} = R^T \quad (5)

Lastly, to approximate compound rotations, R₁R₂, in the Lie algebra space, where R₁ = exp([ϕ₁]_{SO(3)}) and R₂ = exp([ϕ₂]_{SO(3)}), we can use Equation (6). The right Jacobian, Φ_{SO(3)}(ϕ) ∈ R^{3×3}, is obtained using Equation (7).

\left[\log(R_1 R_2)\right]^{\vee}_{SO(3)} \approx \boldsymbol{\phi}_1 + \Phi_{SO(3)}(\boldsymbol{\phi}_1)^{-1}\boldsymbol{\phi}_2 \quad (6)
\Phi_{SO(3)}(\boldsymbol{\phi}) = \frac{\sin|\boldsymbol{\phi}|}{|\boldsymbol{\phi}|} I_{3\times3} + \left(1 - \frac{\sin|\boldsymbol{\phi}|}{|\boldsymbol{\phi}|}\right)\frac{\boldsymbol{\phi}\boldsymbol{\phi}^T}{|\boldsymbol{\phi}|^2} - \frac{1-\cos|\boldsymbol{\phi}|}{|\boldsymbol{\phi}|}\left[\frac{\boldsymbol{\phi}}{|\boldsymbol{\phi}|}\right]_{SO(3)} \in \mathbb{R}^{3\times3} \quad (7)

2.2. Special Euclidean Group, SE(3)

The special Euclidean group, SE(3) := \left\{ T = \begin{bmatrix} R & t \\ 0_{1\times3} & 1 \end{bmatrix} \in \mathbb{R}^{4\times4} \,\middle|\, (R, t) \in SO(3) \times \mathbb{R}^3 \right\}, represents orientation and translation, where T is the typical 4×4 transformation matrix, R is the rotation matrix, and t represents a coordinate point in Euclidean space. The operations for SE(3) are listed below. I_{i×i} and 0_{i×j} denote the i×i identity and i×j zero matrices. See Chapter 7 of [18] for details.

[\boldsymbol{\xi}]_{SE(3)} = \begin{bmatrix}\boldsymbol{\rho}\\\boldsymbol{\phi}\end{bmatrix}_{SE(3)} = \begin{bmatrix}[\boldsymbol{\phi}]_{SO(3)} & \boldsymbol{\rho}\\ 0_{1\times3} & 0\end{bmatrix}, \quad \begin{bmatrix}[\boldsymbol{\phi}]_{SO(3)} & \boldsymbol{\rho}\\ 0_{1\times3} & 0\end{bmatrix}^{\vee}_{SE(3)} = \begin{bmatrix}\boldsymbol{\rho}\\\boldsymbol{\phi}\end{bmatrix} \quad (8)
T = \exp\left([\boldsymbol{\xi}]_{SE(3)}\right) = \begin{bmatrix}\exp([\boldsymbol{\phi}]_{SO(3)}) & \Phi_{SO(3)}(\boldsymbol{\phi})\boldsymbol{\rho}\\ 0_{1\times3} & 1\end{bmatrix} = \begin{bmatrix}R & t\\ 0_{1\times3} & 1\end{bmatrix} \quad (9)
\mathrm{ad}_{SE(3)}(\boldsymbol{\xi}) = \begin{bmatrix}[\boldsymbol{\phi}]_{SO(3)} & [\boldsymbol{\rho}]_{SO(3)}\\ 0_{3\times3} & [\boldsymbol{\phi}]_{SO(3)}\end{bmatrix}, \quad \mathrm{Ad}_{SE(3)}(T) = \begin{bmatrix}R & [t]_{SO(3)}R\\ 0_{3\times3} & R\end{bmatrix}, \quad T^{-1} = \begin{bmatrix}R^T & -R^T t\\ 0_{1\times3} & 1\end{bmatrix} \quad (10)

Lastly, we note the useful identity defined in Equation (11), where [a]_{SE(3)} ∈ se(3), the Lie algebra of the Lie group SE(3), and b ∈ R⁴ ([18], Equation (72)); this identity will be used to compute the Jacobians of our model later.

[a]_{SE(3)}\, b = [b]^{\odot} a, \quad \text{where } b = \begin{bmatrix}\boldsymbol{\epsilon}\\ \eta\end{bmatrix}, \quad [b]^{\odot} = \begin{bmatrix}\eta I_{3\times3} & -[\boldsymbol{\epsilon}]_{SO(3)}\\ 0_{1\times3} & 0_{1\times3}\end{bmatrix}, \quad \boldsymbol{\epsilon} \in \mathbb{R}^3, \; \eta \in \mathbb{R} \quad (11)
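To make the SE(3) operations above concrete, here is a minimal, self-contained NumPy sketch of the right Jacobian of Equation (7) and the SE(3) exponential map of Equation (9); it is an illustration under the conventions stated above, not the authors' implementation.

```python
import numpy as np

def _skew(v):
    """Skew-symmetric matrix [v]_SO(3) of a 3-vector (Equation (3))."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def so3_right_jacobian(phi):
    """Phi_SO(3)(phi) of Equation (7); for small angles the series (2) gives I - 0.5*[phi]."""
    angle = np.linalg.norm(phi)
    if angle < 1e-9:
        return np.eye(3) - 0.5 * _skew(phi)
    axis = phi / angle
    return (np.sin(angle) / angle) * np.eye(3) \
        + (1.0 - np.sin(angle) / angle) * np.outer(axis, axis) \
        - ((1.0 - np.cos(angle)) / angle) * _skew(axis)

def se3_exp(xi):
    """SE(3) exponential map (Equation (9)); xi = [rho, phi] with rho the translation part."""
    rho, phi = xi[:3], xi[3:]
    angle = np.linalg.norm(phi)
    if angle < 1e-9:
        R = np.eye(3) + _skew(phi)
    else:
        axis = phi / angle
        K = _skew(axis)
        R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = so3_right_jacobian(phi) @ rho
    return T
```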

2.3. Real Numbers Rn

In order to represent vector state variables (e.g., translation, velocity, and acceleration) and be consistent with how we used SE(3) to represent pose, we represent a vector s ∈ R^n as an SE(n)-like pose with position and no rotation. The operations for R^n are listed below.

[s]_{\mathbb{R}^n} = \begin{bmatrix}0_{n\times n} & s\\ 0_{1\times n} & 0\end{bmatrix}, \quad \begin{bmatrix}0_{n\times n} & s\\ 0_{1\times n} & 0\end{bmatrix}^{\vee}_{\mathbb{R}^n} = s \quad (12)
S = \exp([s]_{\mathbb{R}^n}) = \begin{bmatrix}I_{n\times n} & s\\ 0_{1\times n} & 1\end{bmatrix}, \quad \left[\log\begin{bmatrix}I_{n\times n} & s\\ 0_{1\times n} & 1\end{bmatrix}\right]^{\vee}_{\mathbb{R}^n} = s, \quad \exp([s]_{\mathbb{R}^n})^{-1} = \begin{bmatrix}I_{n\times n} & -s\\ 0_{1\times n} & 1\end{bmatrix} \quad (13)
\mathrm{ad}_{\mathbb{R}^n}(s) = 0_{n\times n}, \quad \mathrm{Ad}_{\mathbb{R}^n}(S) = I_{n\times n}, \quad \Phi_{\mathbb{R}^n}(s) = I_{n\times n} \quad (14)

Note that the multiplication of two elements of the Lie group (i.e., the exponential of s1 and s2) is equivalent to the vector addition of two elements of the Lie algebra (i.e., s1+s2).

\left[\log\left(\exp([s_1]_{\mathbb{R}^n})\exp([s_2]_{\mathbb{R}^n})\right)\right]^{\vee}_{\mathbb{R}^n} = s_1 + s_2 \quad (15)

3. Algorithm Description

The proposed algorithm, L5S-3IMU, uses a similar model and assumptions to our prior works in [15,31], denoted as CKF-3IMU and CKF-3IMU+D, albeit expressed in a Lie group representation. The algorithm uses measurements from three IMUs attached at the pelvis (sacrum) and shanks (slightly above the ankles), and the inter-IMU distance measurements, to estimate the orientation of five body segments (i.e., pelvis, thighs, and shanks) with respect to the world frame, W (Figure 2). The Lie group representation enables the tracking of both position and orientation (note that CKF-3IMU only tracked position and assumed orientation measurements were noise-free). Figure 3 shows an overview of the proposed algorithm. L5S-3IMU predicts the ankle and pelvis positions through double integration of their 3D acceleration, and predicts the shank and pelvis orientation through integration of their 3D angular velocity. The orientation estimates are further updated using a third-party orientation estimation algorithm. Drift in the position estimates due to sensor noise, which accumulates quadratically in the double integration of acceleration, was mitigated through the following assumptions: (1) the pelvis height is approximated as the standing height of the pelvis when the leg(s) are unbent, or is informed by inter-IMU distance measurements, when available; and (2) the ankle velocity and height above the floor are zeroed whenever a footstep is detected. Furthermore, a pseudo-measurement equal to the current global position estimate of the pelvis and ankles is made with a fixed covariance to contain the ever-growing error covariance of the said states. Lastly, constant body segment lengths and hinged knee joints (one degree of freedom (DOF)) with limited range of motion (ROM) were enforced as biomechanical constraints. The pre- and post-processing parts remain exactly the same as in the CKF-3IMU algorithm (e.g., acceleration due to body movement was calculated by expressing the acceleration of the instrumented body segment in the world frame using the orientation estimate and then subtracting the acceleration due to gravity (i.e., g = [0 0 9.81]ᵀ) [6]).
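The gravity-compensation pre-processing step mentioned above can be summarized as a short sketch; this is an illustrative NumPy snippet (using the g vector quoted above), not the authors' code.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, 9.81])   # acceleration due to gravity, as quoted above

def free_acceleration(R_wb, a_body):
    """Pre-processing: express the IMU acceleration in the world frame using the current
    orientation estimate R_wb, then subtract gravity (see also Figure 3)."""
    return R_wb @ a_body - GRAVITY
```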

Figure 2.


Model of the lower body used by LGKF-3IMU. The circles denote joint positions, the solid lines denote instrumented body segments, whilst the dashed lines denote segments without IMUs attached (i.e., thighs). Dotted lines denote inter-IMU measurements.

Figure 3.


Algorithm overview which consists of pre-processing, CEKF, and post-processing. Pre-processing calculates the body segment orientation, inertial body acceleration (calculated by expressing the IMU acceleration with respect to the world frame using its current orientation estimate and then subtracting gravity), and step detection from the raw acceleration, ă_k, angular velocity, ω̆_k, and magnetic north heading, η̆_k, measured by the IMU. The CEKF-based state estimation consists of a prediction (kinematic equation), measurement (orientation, pelvis height/inter-IMU distance measurement, covariance limiter, intermittent zero-velocity update, and flat-floor assumption), and constraint update (thigh length, hinge knee joint, and knee range of motion). Post-processing calculates the left and right thigh orientations, R_k^{lt} and R_k^{rt}.

3.1. System, Measurement, and Constraint Models

The system, measurement, and constraint models are presented below:

X_k = f(X_{k-1}, u_k, n_k) = X_{k-1}\exp\left(\left[\Omega(X_{k-1}, u_k) + n_k\right]_G\right) \quad (16)
Z_k = h(X_k)\exp\left([m_k]_{G_m}\right), \quad D_k = c(X_k) \quad (17)

where k is the time step. X_k ∈ G is the system state, an element of the state Lie group G. Ω(X_{k−1}, u_k) ∈ R^p is a non-linear function which describes how the model acts on the state and input, u_k, where p is the number of dimensions of the compact vector representation of the Lie algebra g. n_k is a zero-mean process noise vector with covariance matrix Q (i.e., n_k ∼ N_{R^p}(0_{p×1}, Q)). Z_k ∈ G_m is the system measurement, an element of the measurement Lie group G_m. h(X_k) : G → G_m is the measurement function. m_k is a zero-mean measurement noise vector with covariance matrix R_k (i.e., m_k ∼ N_{R^q}(0_{q×1}, R_k), where q is the number of dimensions of the available measurements). D_k ∈ G_c is the constraint state, an element of the constraint Lie group G_c. c(X_k) : G → G_c is the equality constraint function the state X_k must satisfy (i.e., c(X_k) = D_k). Similar to [23,36], the state distribution of X_k is assumed to be a concentrated Gaussian distribution on Lie groups (i.e., X_k = μ_k exp([ε]_G), where μ_k is the mean of X_k and the Lie algebra error ε ∼ N_{R^p}(0_{p×1}, P)) [19].

The Lie group state variables X_k model the position, orientation, and velocity of the three instrumented body segments (i.e., pelvis and shanks) as X_k = diag(T^p, T^{ls}, T^{rs}, exp([[(v^p)ᵀ (v^{ls})ᵀ (v^{rs})ᵀ]ᵀ]_{R^9})) ∈ G = SE(3)³ × R^9, where T^b ∈ SE(3) represents the pose (i.e., orientation and position) of body segment b relative to the world frame W, and v_A^b is the velocity of body segment b relative to frame A. If frame A is not specified, assume reference to the world frame, W. The Lie algebra error is denoted as ε = [(ε^{Tp})ᵀ (ε^{Tls})ᵀ (ε^{Trs})ᵀ (ε^{vmp})ᵀ (ε^{vla})ᵀ (ε^{vra})ᵀ]ᵀ, where the first three variables correspond to the Lie groups in SE(3) while the latter three are for R^9. [·]_G, exp([·]_G), [log(·)]^∨_G, Ad_G(X_k), and Φ_G(·) are constructed similarly to X_k (e.g., Ad_G(X_k) = diag(Ad_{SE(3)}(T^p), Ad_{SE(3)}(T^{ls}), Ad_{SE(3)}(T^{rs}), Ad_{R^9}(exp([[(v^p)ᵀ (v^{ls})ᵀ (v^{rs})ᵀ]ᵀ]_{R^9})))). Refer to Section 2.2 and Section 2.3 for the definitions of the SE(3) and R^n operators.
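For illustration, the block-diagonal state of Section 3.1 can be assembled as in the following NumPy sketch, with the R^9 velocity block embedded per Equation (13); the function name and the identity-pose example inputs are ours.

```python
import numpy as np
from scipy.linalg import block_diag

def assemble_state(T_p, T_ls, T_rs, v):
    """Build X_k = diag(T^p, T^ls, T^rs, exp([v]_R9)) in G = SE(3)^3 x R^9 (Section 3.1).
    T_* are 4x4 poses; v stacks the three 3D velocities into a 9-vector."""
    V = np.eye(10)
    V[:9, 9] = v                     # exp([v]_R9) per Equation (13)
    return block_diag(T_p, T_ls, T_rs, V)

# Example: identity poses and zero velocities give the 22x22 identity matrix.
X0 = assemble_state(np.eye(4), np.eye(4), np.eye(4), np.zeros(9))
```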

3.2. Lie Group Constrained EKF (LG-CEKF)

The a priori, a posteriori, and constrained state estimates (i.e., the estimated mean of X_k) for time step k are denoted by μ̂_k, μ̂_k^+, and μ̃_k^+, respectively. Note that the true state X_k can be expressed as μ_k exp([ε]_G), where μ_k is one of the state means just mentioned with error [ε]_G. The a priori and a posteriori error covariance matrices are denoted as P_k and P_k^+, respectively. Note that the error covariance is not updated at the constraint update step. The KF is based on the Lie group EKF, as defined in [36], where the state means (μ̂_k, μ̂_k^+, and μ̃_k^+) and state error covariance matrices (P_k and P_k^+) are propagated by the KF at each time step.

3.2.1. Prediction Step

The prediction step estimates the a priori state μ̂_k at the next time step. The mean propagation of the three instrumented body segments is governed by Equation (18), where Ω(μ̃_{k−1}^+, u_k) (Equation (19)) is the motion model for the three instrumented body segments. Note that the state propagation may not necessarily respect the biomechanical constraints, so joints may become dislocated after this step. The input u_k is defined in Equation (20), where the orientation and acceleration obtained by the IMU attached to segment b with respect to the world frame W are denoted as R̆_k^b and ă_k^b for b ∈ {p, ls, rs}, while the angular velocity obtained by the IMU attached to segment b, expressed in frame b, is denoted as ω̆_k^b.

\hat{\mu}_k = \tilde{\mu}_{k-1}^{+}\exp\left(\left[\Omega(\tilde{\mu}_{k-1}^{+}, u_k)\right]_G\right) \quad (18)
\Omega(\tilde{\mu}_{k-1}^{+}, u_k) = \Big[\big(\Delta t\,\tilde{v}_{k-1}^{mp+} + \tfrac{\Delta t^2}{2}\breve{a}_k^{p}\big)^T\breve{R}_k^{p} \;\;\; \Delta t\,(\breve{\omega}_k^{p})^T \;\;\; \big(\Delta t\,\tilde{v}_{k-1}^{la+} + \tfrac{\Delta t^2}{2}\breve{a}_k^{ls}\big)^T\breve{R}_k^{ls} \;\;\; \Delta t\,(\breve{\omega}_k^{ls})^T \;\;\; \big(\Delta t\,\tilde{v}_{k-1}^{ra+} + \tfrac{\Delta t^2}{2}\breve{a}_k^{rs}\big)^T\breve{R}_k^{rs} \;\;\; \Delta t\,(\breve{\omega}_k^{rs})^T \;\;\; \Delta t\,(\breve{a}_k^{mp})^T \;\;\; \Delta t\,(\breve{a}_k^{la})^T \;\;\; \Delta t\,(\breve{a}_k^{ra})^T\Big]^T \quad (19)
u_k = \left\{\breve{R}_k^{p},\, \breve{R}_k^{ls},\, \breve{R}_k^{rs},\, \breve{a}_k^{p},\, \breve{a}_k^{ls},\, \breve{a}_k^{rs},\, \breve{\omega}_k^{p},\, \breve{\omega}_k^{ls},\, \breve{\omega}_k^{rs}\right\} \quad (20)
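The mean propagation of Equations (18) and (19) for a single segment (the pelvis is shown) can be sketched as follows; this is an illustrative NumPy example using the matrix exponential of Equation (1), with names of our choosing, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import expm          # matrix exponential, Equation (1)

def hat_se3(xi):
    """4x4 twist matrix [xi]_SE(3) for xi = [rho, phi] (Equation (8))."""
    rho, phi = xi[:3], xi[3:]
    Xi = np.zeros((4, 4))
    Xi[:3, :3] = [[0.0, -phi[2], phi[1]],
                  [phi[2], 0.0, -phi[0]],
                  [-phi[1], phi[0], 0.0]]
    Xi[:3, 3] = rho
    return Xi

def predict_pelvis(T, v, a_world, w_body, R_meas, dt):
    """One-segment slice of Equations (18)-(19): T is the pelvis pose, v its world-frame
    velocity, a_world the gravity-free world-frame acceleration, w_body the body-frame
    angular rate, and R_meas the IMU orientation estimate."""
    drho = R_meas.T @ (dt * v + 0.5 * dt**2 * a_world)   # body-frame translation increment
    dphi = dt * w_body                                   # body-frame rotation increment
    T_new = T @ expm(hat_se3(np.concatenate([drho, dphi])))
    v_new = v + dt * a_world                             # velocity rows of Equation (19)
    return T_new, v_new
```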

The state error covariance matrix propagation is governed by Equation (21), where F_k represents the matrix Lie group equivalent of the Jacobian of f(X_{k−1}, u_k, n_k), Q is the covariance matrix of the process noise, and C_k = ∂/∂ε Ω(μ̃_{k−1}^+ exp([ε]_G), u_k)|_{ε=0} represents the linearization of the motion model with respect to an infinitesimal perturbation ε. The process noise covariance matrix, Q, is calculated from the input-to-state matrix G and the noise variances of the measured acceleration and angular velocity, σ_a² and σ_ω², respectively.

P_k = F_k P_{k-1}^{+} F_k^T + \Phi_G\!\left(\Omega(\tilde{\mu}_{k-1}^{+}, u_k)\right) Q\, \Phi_G\!\left(\Omega(\tilde{\mu}_{k-1}^{+}, u_k)\right)^T \quad (21)
F_k = \mathrm{Ad}_G\!\left(\exp_G\!\left(\left[-\Omega(\tilde{\mu}_{k-1}^{+}, u_k)\right]_G\right)\right) + \Phi_G\!\left(\Omega(\tilde{\mu}_{k-1}^{+}, u_k)\right) C_k \quad (22)
Q = G\,\mathrm{diag}(\sigma_a^2, \sigma_\omega^2)\,G^T \quad (23)
[Equation (24), the explicit expression for C_k, appears as an image in the source and is not reproduced here.]
G = \begin{bmatrix}
\tfrac{\Delta t^2}{2} I_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times3} & \Delta t\, I_{3\times3} & 0_{3\times3} & 0_{3\times3} \\
0_{3\times3} & \tfrac{\Delta t^2}{2} I_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & \Delta t\, I_{3\times3} & 0_{3\times3} \\
0_{3\times3} & 0_{3\times3} & \tfrac{\Delta t^2}{2} I_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & \Delta t\, I_{3\times3} \\
\multicolumn{3}{c}{\Delta t\, I_{9\times9}} & \multicolumn{3}{c}{0_{9\times9}}
\end{bmatrix} \quad (25)

3.2.2. Measurement Update

The measurement update estimates the state at the next time step by: (i) updating the orientation state using new orientation measurements of the body segments from the IMUs; (ii) encouraging the pelvis position to be above the feet, as informed by either a pseudo-measurement or inter-IMU distance measurements; and (iii) enforcing the ankle velocity to reach zero, and the ankle z position to be near the floor level, z^f, when a step is detected. When only IMU measurements are available, (iia) the pelvis z position is encouraged to be close to the initial standing height, z^p. When inter-IMU distance measurements are available, (iia) is not used. Instead, (iib) the ankle distance is directly incorporated while the pelvis position is inferred from inter-IMU distance measurements assuming hinged knee joints and constant body segment lengths. The a posteriori state mean μ̂_k^+ is calculated following the Lie EKF equations below. Note that [log(h(μ̂_k)^{-1} Z_k)]^∨_{G_m} in Equation (27) is akin to the KF innovation/residual, where h(μ̂_k)^{-1} Z_k (derived from Equation (17) assuming m_k = 0 and X_k = μ̂_k, i.e., Z_k = h(μ̂_k)) is the innovation/residual in the Lie group G_m brought to the vector representation of the Lie algebra space using the inverse exponential (i.e., logarithm) mapping.

\hat{\mu}_k^{+} = \hat{\mu}_k \exp_G\!\left([\nu_k]_G\right) \quad (26)
\nu_k = K_k\left[\log\!\left(h(\hat{\mu}_k)^{-1} Z_k\right)\right]^{\vee}_{G_m} \quad (27)
K_k = P_k H_k^T\left(H_k P_k H_k^T + R_k\right)^{-1} \quad (28)

H_k can be seen as the matrix Lie group equivalent of the Jacobian of h(X_k), and is defined as the concatenation of H_ori and H_{mp,k} when inter-IMU distance measurements are not available. When inter-IMU distance measurements are available, H_{mp,k} is replaced by H_{dist,k} = [H_{pla,k}ᵀ H_{pra,k}ᵀ H_{lra,k}ᵀ]ᵀ. H_{ls,k} and/or H_{rs,k} are also concatenated to H_k when left and/or right foot contact (FC) is detected (see Equation (9) of [15]). Each component matrix will be described later. The measurement matrix Z_k ∈ G_m, the measurement function h(X_k) ∈ G_m, and the measurement noise covariance R_k are constructed similarly to H_k, but combined using diag instead of concatenation (e.g., R_k = diag(σ_ori², σ_mp²)).

H_k = \frac{\partial}{\partial\epsilon}\left[\log\!\left(h(\hat{\mu}_k)^{-1} h\!\left(\hat{\mu}_k\exp([\epsilon]_G)\right)\right)\right]^{\vee}_{G_m}\bigg|_{\epsilon=0} = \begin{cases}\left[H_{ori}^T \;\; H_{mp/dist}^T\right]^T & \text{no FC}\\ \left[H_{ori}^T \;\; H_{mp/dist}^T \;\; H_{ls,k}^T\right]^T & \text{left FC}\\ \left[H_{ori}^T \;\; H_{mp/dist}^T \;\; H_{rs,k}^T\right]^T & \text{right FC}\\ \left[H_{ori}^T \;\; H_{mp/dist}^T \;\; H_{ls,k}^T \;\; H_{rs,k}^T\right]^T & \text{both FC}\end{cases} \quad (29)
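A sketch of how the active blocks of Equation (29) can be stacked depending on detected foot contact is shown below; the block matrices passed in are placeholders and the helper is illustrative, not the authors' code.

```python
import numpy as np
from scipy.linalg import block_diag

def assemble_measurement(H_ori, H_mp_or_dist, R_ori, R_mp_or_dist,
                         H_ls=None, R_ls=None, H_rs=None, R_rs=None):
    """Stack the active measurement Jacobian blocks into H_k (Equation (29)) and combine
    the corresponding noise covariances into R_k; H_ls/H_rs are included only when the
    matching foot contact is detected."""
    H_blocks, R_blocks = [H_ori, H_mp_or_dist], [R_ori, R_mp_or_dist]
    if H_ls is not None:               # left foot contact detected
        H_blocks.append(H_ls)
        R_blocks.append(R_ls)
    if H_rs is not None:               # right foot contact detected
        H_blocks.append(H_rs)
        R_blocks.append(R_rs)
    return np.vstack(H_blocks), block_diag(*R_blocks)
```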
Orientation Update

The orientation update utilizes the orientation measurement to update the state estimate as defined by Equation (30), with measurement noise variance σori2 (9×1 vector).

h_{ori}(X_k) = \mathrm{diag}\!\left(R_k^{p}, R_k^{ls}, R_k^{rs}\right) \in SO(3)^3, \quad Z_{ori} = \mathrm{diag}\!\left(\breve{R}_k^{p}, \breve{R}_k^{ls}, \breve{R}_k^{rs}\right) \quad (30)

H_ori, along with the other components of H_k, is calculated by applying Equation (29) to the corresponding measurement functions, followed by tedious algebraic manipulation and first-order linearization (i.e., exp([ε]) ≈ I + [ε]). The derivation for H_ori (Equation (31)) can be solved trivially as [log(h_ori(μ̂_k)^{-1} h_ori(μ̂_k exp([ε]_G)))]^∨ = [(ε^{ϕp})ᵀ (ε^{ϕls})ᵀ (ε^{ϕrs})ᵀ]ᵀ, where ε^{Tb} = [(ε^{ρb})ᵀ (ε^{ϕb})ᵀ]ᵀ for body segment b ∈ {p, ls, rs}.

H_{ori} = \left[\,\mathrm{diag}\!\left(\left[\,0_{3\times3} \;\; I_{3\times3}\,\right],\, \left[\,0_{3\times3} \;\; I_{3\times3}\,\right],\, \left[\,0_{3\times3} \;\; I_{3\times3}\,\right]\right) \;\;\; 0_{9\times9}\,\right] \quad (31)
Pelvis Height Assumption

The pelvis height assumption softly constrains the pelvis z position to be close to the initial standing height, z^p, as defined by Equation (32) (represented in the vector space of its Lie algebra) and Equation (33), with measurement noise variance σ_mp² (1×1 vector). This assumption is used only when inter-IMU distance measurements are not available. i_x, i_y, i_z, and i_0 denote 4×1 vectors whose 1st to 4th rows, respectively, are 1, while the rest are 0; they are used below to select rows, columns, or elements from matrices.

\left[\log\!\left(h_{mp}(X_k)\right)\right]^{\vee} = i_z^T T_k^{p} i_0 = \begin{bmatrix}0&0&1&0\end{bmatrix}\begin{bmatrix}R_k^{p} & p_k^{p}\\ 0_{1\times3} & 1\end{bmatrix}\begin{bmatrix}0\\0\\0\\1\end{bmatrix} = \begin{bmatrix}0&0&1&0\end{bmatrix}\begin{bmatrix}p_k^{p}\\1\end{bmatrix} = p_{z,k}^{p} \in \mathbb{R} \quad (32)
\left[\log\!\left(Z_{mp}\right)\right]^{\vee} = z^{p} \in \mathbb{R} \quad (33)

The derivation of H_{mp,k} = ∂/∂ε [log(h_mp(μ̂_k)^{-1} h_mp(μ̂_k exp([ε]_G)))]^∨|_{ε=0} is shown in Equations (34)–(36). Taking the best estimate X_k = μ̂_k gives us Equation (34).

\left[\log\!\left(h_{mp}(\hat{\mu}_k)\right)\right]^{\vee} = i_z^T \hat{T}_k^{p} i_0 \quad (34)
\left[\log\!\left(h_{mp}(\hat{\mu}_k\exp([\epsilon]_G))\right)\right]^{\vee} = i_z^T \hat{T}_k^{p}\exp([\epsilon^{Tp}])\, i_0 \approx i_z^T \hat{T}_k^{p} i_0 + i_z^T \hat{T}_k^{p}[\epsilon^{Tp}]\, i_0 = \left[\log\!\left(h_{mp}(\hat{\mu}_k)\right)\right]^{\vee} + i_z^T \hat{T}_k^{p}[i_0]^{\odot}\epsilon^{Tp} \quad (35) \; \text{(first-order linearization; Equation (11), } [a]b = [b]^{\odot}a\text{, is used to bring } \epsilon^{Tp} \text{ to the right of } i_0\text{)}

Remember that ε^{Tp} is a subvector of ε as defined in Section 3.1 and is the Lie algebra error of the state in its compact vector representation. Note that if a measurement function h_a(X_k) is in the Lie group R^b, then [log(h_a(μ̂_k)^{-1} h_a(X_k))]^∨ = [log(h_a(X_k))]^∨ − [log(h_a(μ̂_k))]^∨ = [log(h_a(μ̂_k exp([ε]_G)))]^∨ − [log(h_a(μ̂_k))]^∨, by applying Equations (15) and (13) (inverse of the Lie group R^n). Finally, H_{mp,k} is calculated as shown in Equation (36).

H_{mp,k} = \left[\, i_z^T \hat{T}_k^{p}[i_0]^{\odot} \;\;\; 0_{1\times6} \;\;\; 0_{1\times6} \;\;\; 0_{1\times9}\,\right] \quad (36)
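To illustrate Equations (26)-(28) and (32)-(36) together, the sketch below applies the scalar pelvis-height update to a toy state consisting of a single SE(3) pose (6-dimensional error). It is a simplified, illustrative NumPy example (the right-Jacobian factor of Equation (53) is approximated by the identity), not the full filter.

```python
import numpy as np
from scipy.linalg import expm

def hat_se3(xi):
    """[xi]_SE(3) for xi = [rho, phi] (Equation (8))."""
    rho, phi = xi[:3], xi[3:]
    Xi = np.zeros((4, 4))
    Xi[:3, :3] = [[0.0, -phi[2], phi[1]], [phi[2], 0.0, -phi[0]], [-phi[1], phi[0], 0.0]]
    Xi[:3, 3] = rho
    return Xi

def odot(b):
    """[b]^odot of Equation (11) for a homogeneous 4-vector b = [eps, eta]."""
    eps, eta = b[:3], b[3]
    M = np.zeros((4, 6))
    M[:3, :3] = eta * np.eye(3)
    M[:3, 3:] = -np.array([[0.0, -eps[2], eps[1]],
                           [eps[2], 0.0, -eps[0]],
                           [-eps[1], eps[0], 0.0]])
    return M

i_z = np.array([0.0, 0.0, 1.0, 0.0])
i_0 = np.array([0.0, 0.0, 0.0, 1.0])

def pelvis_height_update(T, P, z_p, var_mp):
    """Toy single-pose pelvis height update.
    T: 4x4 pelvis pose, P: 6x6 error covariance, z_p: standing height, var_mp: variance."""
    h = i_z @ T @ i_0                          # predicted pelvis height (Equation (32))
    H = (i_z @ T @ odot(i_0)).reshape(1, 6)    # Jacobian row, cf. Equations (35)-(36)
    S = H @ P @ H.T + var_mp                   # innovation covariance
    K = P @ H.T / S                            # Kalman gain (Equation (28))
    nu = (K * (z_p - h)).ravel()               # Lie-algebra correction (Equation (27))
    T_new = T @ expm(hat_se3(nu))              # apply correction on the group (Equation (26))
    P_new = (np.eye(6) - K @ H) @ P            # covariance update, with Phi_G(nu) ~ I
    return T_new, P_new
```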
Zero Velocity Update and Flat Floor Assumption

When a step is detected, the ankle velocity is enforced to be zero and the ankle z position is brought to near the floor level, z^f (i.e., the flat floor assumption). The corresponding measurement function is defined by Equation (37), with measurement noise variance σ_ls² (4×1 vector).

\left[\log\!\left(h_{ls}(X_k)\right)\right]^{\vee} = \begin{bmatrix}v^{ls}\\ i_z^T T_k^{ls} i_0\end{bmatrix} = \begin{bmatrix}v^{ls}\\ p_{z,k}^{ls}\end{bmatrix} \in \mathbb{R}^4, \quad \left[\log\!\left(Z_{ls}\right)\right]^{\vee} = \begin{bmatrix}0_{3\times1}\\ z^{f}\end{bmatrix} \quad (37)

The zero velocity part of H_{ls,k} (Equation (38)) and H_{rs,k} can be calculated trivially, while the flat floor part can be calculated similarly to H_{mp,k}, but with the z position set to the floor height, z^f, instead of the pelvis standing height, z^p.

H_{ls,k} = \begin{bmatrix}0_{3\times6} & 0_{3\times6} & 0_{3\times6} & 0_{3\times3} & I_{3\times3} & 0_{3\times3}\\ 0_{1\times6} & i_z^T \hat{T}_k^{ls}[i_0]^{\odot} & 0_{1\times6} & 0_{1\times3} & 0_{1\times3} & 0_{1\times3}\end{bmatrix} \quad (38)
Left and Right Ankle Distance Measurement

When the inter-IMU distance between the ankles, d̆_k^{lra}, is available, the ankle distance measurement is incorporated as a soft distance constraint. The measurement function is defined by Equation (40), with measurement noise variance σ_lra² (1×1 vector). τ^{lra}(X_k) (Equation (39)) is the vector that points from the right ankle to the left ankle, where p_{ls}^{la} is the position of the left ankle expressed in the left shank frame, and p_{rs}^{ra} is the position of the right ankle expressed in the right shank frame. We have chosen the ankles to be at the origin of their respective shank frames. Note that the matrix E converts homogeneous 4×1 coordinates to standard 3×1 coordinates (i.e., drops the 1 from the end of the 4×1 vector).

\tau^{lra}(X_k) = \underbrace{\begin{bmatrix}I_{3\times3} & 0_{3\times1}\end{bmatrix}}_{E}\Big(\underbrace{T_k^{ls} p_{ls}^{la}}_{\text{left ankle in } W} - \underbrace{T_k^{rs} p_{rs}^{ra}}_{\text{right ankle in } W}\Big), \quad p_{ls}^{la} = p_{rs}^{ra} = \begin{bmatrix}0&0&0&1\end{bmatrix}^T \;\text{(origin of frame)} \quad (39)

By taking the squared Euclidean distance of τlra(Xk) (i.e., ||τlra(Xk)||2), we can get the ankle distance measurement model.

\left[\log\!\left(h_{lra}(X_k)\right)\right]^{\vee} = \left(\tau^{lra}(X_k)\right)^T\tau^{lra}(X_k) \in \mathbb{R}, \quad \left[\log\!\left(Z_{lra}\right)\right]^{\vee} = \left(\breve{d}_k^{lra}\right)^2 \quad (40)

To solve for H_{lra,k} (Equation (44)), we first solve for [log(h_lra(X_k))]^∨ at X_k = μ̂_k (Equation (41)).

\tau^{lra}(\hat{\mu}_k) = E\!\left(\hat{T}_k^{ls} p_{ls}^{la} - \hat{T}_k^{rs} p_{rs}^{ra}\right), \quad \left[\log\!\left(h_{lra}(\hat{\mu}_k)\right)\right]^{\vee} = \left(\tau^{lra}(\hat{\mu}_k)\right)^T\tau^{lra}(\hat{\mu}_k) \quad (41)

We then solve for τ^{lra}(μ̂_k exp([ε]_G)) and [log(h_lra(μ̂_k exp([ε]_G)))]^∨ as shown in Equations (42) and (43).

\tau^{lra}(\hat{\mu}_k\exp([\epsilon]_G)) = E\!\left(\hat{T}_k^{ls}\exp([\epsilon^{Tls}])\,p_{ls}^{la} - \hat{T}_k^{rs}\exp([\epsilon^{Trs}])\,p_{rs}^{ra}\right) \approx E\!\left(\hat{T}_k^{ls}p_{ls}^{la} - \hat{T}_k^{rs}p_{rs}^{ra} + \hat{T}_k^{ls}[\epsilon^{Tls}]p_{ls}^{la} - \hat{T}_k^{rs}[\epsilon^{Trs}]p_{rs}^{ra}\right) = \tau^{lra}(\hat{\mu}_k) + \underbrace{E\!\left(\hat{T}_k^{ls}[p_{ls}^{la}]^{\odot}\epsilon^{Tls} - \hat{T}_k^{rs}[p_{rs}^{ra}]^{\odot}\epsilon^{Trs}\right)}_{\Gamma^{lra}} \quad (42) \; \text{(first-order approximation; the last step uses Equation (11))}
[Equation (43), the first-order expansion of [log(h_lra(μ̂_k exp([ε]_G)))]^∨, appears as an image in the source and is not reproduced here.]
[Equation (44), the explicit expression for H_{lra,k}, appears as an image in the source and is not reproduced here.]
Pelvis-to-Ankle Distance Measurement

In addition to the soft ankle distance constraint, the pelvis-to-ankle vector is inferred from the pelvis-to-ankle distance measurements while assuming hinged knee joints and constant body segment lengths. The measurement function is defined by Equation (45), with measurement noise variance σ_pla² (3×1 vector), where p_p^{mp} is the position of the mid-pelvis expressed in the pelvis frame, and p_{ls}^{la} is the position of the left ankle expressed in the left shank frame. We have chosen the mid-pelvis and ankle to be at the origin of their corresponding reference frames.

\left[\log\!\left(h_{pla}(X_k)\right)\right]^{\vee} = E\Big(\underbrace{T_k^{p} p_{p}^{mp}}_{\text{mid-pelvis in } W} - \underbrace{T_k^{ls} p_{ls}^{la}}_{\text{left ankle in } W}\Big) \in \mathbb{R}^3, \quad p_{p}^{mp} = p_{ls}^{la} = \begin{bmatrix}0&0&0&1\end{bmatrix}^T \quad (45)

The measured pelvis-to-left-ankle vector can be calculated from the measured pelvis-to-left-ankle distance, d̆_k^{pla}, as shown in Equation (46), which is the Lie group reformulation of [31] (Equation (4)). In essence, Equation (47) calculates the most probable knee angle assuming a hinged knee joint and constant body segment lengths, and Equation (46) then adds the thigh (expressed in the shank coordinate system with knee angle θ̂_k^{lk}) and shank long axes to the hips to obtain the pelvis-to-ankle vector. See Appendix A for the derivation. There are two solutions for θ̂_k^{lk} due to the inverse cosine in Equation (47). We chose the θ̂_k^{lk} value closer to the current left knee angle estimate from the prediction step. Note that this measurement function could also be formulated as a linearized Euclidean distance between the pelvis and ankle (i.e., similar to Equation (44)); however, a preliminary exploration of this approach showed poorer performance.

[log(Zpla,k)]=dp2T^kpiydlsT^klsizψpla=halfpelvisy-axis+shankz-axis+dltT^kls(ixsin(θ^klk)izcos(θ^klk))thighz-axisinshankframeR3 (46)
\hat{\theta}_k^{lk} = \cos^{-1}\!\left(\frac{\alpha\gamma \pm \beta\sqrt{\alpha^2+\beta^2-\gamma^2}}{\alpha^2+\beta^2}\right), \quad \text{where } \alpha = 2d^{lt}(\psi^{pla})^T\hat{T}_k^{ls}i_z, \;\; \beta = 2d^{lt}(\psi^{pla})^T\hat{T}_k^{ls}i_x, \;\; \gamma = (\breve{d}_k^{pla})^2 - (\psi^{pla})^T\psi^{pla} - (d^{lt})^2 \quad (47)
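The root selection in Equation (47) can be written compactly as below; this is an illustrative NumPy sketch that takes the scalars α, β, γ (computed as defined above) and the predicted knee angle, and is not the authors' implementation.

```python
import numpy as np

def knee_angle_from_distance(alpha, beta, gamma, theta_prior):
    """Solve Equation (47) for the knee angle; the +/- ambiguity of the inverse cosine is
    resolved by choosing the root closest to the current (predicted) knee angle estimate."""
    disc = np.sqrt(max(alpha**2 + beta**2 - gamma**2, 0.0))
    roots = [np.arccos(np.clip((alpha * gamma + s * beta * disc) / (alpha**2 + beta**2),
                               -1.0, 1.0))
             for s in (1.0, -1.0)]
    return min(roots, key=lambda th: abs(th - theta_prior))
```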

To calculate H_{pla,k}, we first solve for [log(h_pla(X_k))]^∨ at X_k = μ̂_k, similarly to Equation (41).

\left[\log\!\left(h_{pla}(\hat{\mu}_k)\right)\right]^{\vee} = \tau^{pla}(\hat{\mu}_k) = E\!\left(\hat{T}_k^{p} p_{p}^{mp} - \hat{T}_k^{ls} p_{ls}^{la}\right) \quad (48)

We then solve for [log(h_pla(μ̂_k exp([ε]_G)))]^∨ similarly to Equation (42) (i.e., the distance between the mid-pelvis and left ankle), giving us [log(h_pla(μ̂_k exp([ε]_G)))]^∨ ≈ τ^{pla}(μ̂_k) + Γ^{pla}. H_{pla,k} is then calculated as shown in Equation (49). The right side of the pelvis-to-ankle distance measurement (i.e., h_pra(μ̂_k), Z_pra, H_{pra,k}) can be solved similarly to the left side.

[Equation (49), the explicit expression for H_{pla,k}, appears as an image in the source and is not reproduced here.]
Covariance Limiter

Lastly, the error covariance of the position estimates of the three instrumented body segments must be prevented from growing unbounded and/or becoming badly conditioned, as will occur naturally when tracking the global position of objects without any global position reference. At this step, a pseudo-measurement equal to the current state μ̂_k^+ is used (implemented by Equation (50)) with measurement noise variance σ_lim² (9×1 vector). The covariance P_k^+ is then calculated through Equations (51)–(53).

[Equation (50), the covariance-limiter pseudo-measurement function h_lim and the corresponding H_lim, appears as an image in the source and is not reproduced here.]
H_k = \left[H_k^T \;\; H_{lim}^T\right]^T, \quad R_k = \mathrm{diag}\!\left(\sigma_k^2, \sigma_{lim}^2\right) \quad (51)
K_k = P_k H_k^T\left(H_k P_k H_k^T + R_k\right)^{-1} \quad (52)
P_k^{+} = \Phi_G(\nu_k)\left(I - K_k H_k\right)P_k\,\Phi_G(\nu_k)^T \quad (53)

3.2.3. Satisfying Biomechanical Constraints

After the preceding updates, the joint positions or angles may be beyond their allowed range (e.g., knee hyperflexion). The constraint update corrects the kinematic state estimates to satisfy the biomechanical constraints of the human body by projecting the current a posteriori state estimate μ̂_k^+ onto the constraint surface, guided by our uncertainty in each state variable, which is encoded by P_k^+. The following biomechanical constraint equations are enforced: (i) the estimated thigh long axis vector lengths equal the thigh lengths; (ii) both knees act as hinge joints (formulation similar to Section 2.3, Equation (4) of [9]); and (iii) the knee joint angle is within a realistic range. The constraint functions are similar to Section II-E.3 of [15] but expressed using SE(3) state variables. The constrained state μ̃_k^+ can be calculated using the equations below, similar to the measurement update of [36] with zero noise, where C_k = [C_{L,k}ᵀ C_{R,k}ᵀ]ᵀ. C_{L,k} is the concatenation of C_{ltl,k}, C_{lkh,k}, and C_{lkr,k}; the last matrix is not concatenated when the knee angle, α^{lk}, is within its allowed range (i.e., α^{lk,min} ≤ α^{lk} ≤ α^{lk,max}). C_{ltl,k}, C_{lkh,k}, and C_{lkr,k} correspond to the biomechanical constraints for the left thigh length (ltl), left knee hinge joint (lkh), and left knee angle ROM (lkr), respectively, which will be described later. C_{R,k} can be derived similarly, while D_k and c(μ̂_k^+) are constructed similarly to Z_k.

\tilde{\mu}_k^{+} = \hat{\mu}_k^{+}\exp\!\left([\nu_k]_G\right) \quad (54)
\nu_k = K_k\left[\log\!\left(c(\hat{\mu}_k^{+})^{-1} D_k\right)\right]^{\vee}_{G_c} \quad (55)
K_k = P_k^{+} C_k^T\left(C_k P_k^{+} C_k^T\right)^{-1} \quad (56)
C_k = \frac{\partial}{\partial\epsilon}\left[\log\!\left(c(\hat{\mu}_k^{+})^{-1} c\!\left(\hat{\mu}_k^{+}\exp([\epsilon]_G)\right)\right)\right]^{\vee}_{G_c}\bigg|_{\epsilon=0} \quad (57)
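The projection of Equations (54)-(56) reduces to the gain-and-correction computation sketched below (illustrative NumPy with generic matrices); the resulting ν_k is then applied on the group as μ̃_k^+ = μ̂_k^+ exp([ν_k]_G), and the covariance is left unchanged.

```python
import numpy as np

def constraint_correction(P, C, residual):
    """Lie-algebra correction nu_k of Equations (55)-(56).
    P: a posteriori error covariance, C: constraint Jacobian (Equation (57)),
    residual: the vector [log(c(mu_hat)^-1 D_k)]^vee."""
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T)   # gain with zero constraint noise (Equation (56))
    return K @ residual                        # nu_k (Equation (55))
```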
Thigh Length Constraint

Firstly, the thigh length constraint is shown in Equation (59), where τ_z^{lt}(X_k) (Equation (58)) denotes the thigh long axis vector and d^{lt} denotes the thigh length measured during calibration. p_p^{lh} is the position of the left hip expressed in the pelvis frame, and p_{ls}^{lk} is the position of the left knee expressed in the left shank frame. We have chosen the left hip to be d^p/2 to the left of the mid-pelvis origin, and the left knee to be d^{ls} from the left shank origin (i.e., from the left ankle).

\tau_z^{lt}(X_k) = E\Big(\underbrace{T^{p} p_{p}^{lh}}_{\text{hip jt. pos. in } W} - \underbrace{T^{ls} p_{ls}^{lk}}_{\text{knee jt. pos. in } W}\Big), \quad p_{p}^{lh} = \begin{bmatrix}0 & \tfrac{d^{p}}{2} & 0 & 1\end{bmatrix}^T, \quad p_{ls}^{lk} = \begin{bmatrix}0 & 0 & d^{ls} & 1\end{bmatrix}^T \quad (58)
\left[\log\!\left(c_{ltl}(X_k)\right)\right]^{\vee} = \left(\tau_z^{lt}(X_k)\right)^T\tau_z^{lt}(X_k) \in \mathbb{R}, \quad \left[\log\!\left(D_{ltl}\right)\right]^{\vee} = \left(d^{lt}\right)^2 \quad (59)

Cltl,k is calculated using Equation (60).

C_{ltl,k} = \frac{\partial}{\partial\epsilon}\left[\log\!\left(c_{ltl}(\hat{\mu}_k^{+})^{-1} c_{ltl}\!\left(\hat{\mu}_k^{+}\exp([\epsilon]_G)\right)\right)\right]^{\vee}\bigg|_{\epsilon=0} = \frac{\partial}{\partial\epsilon}\left(\left[\log\!\left(c_{ltl}\!\left(\hat{\mu}_k^{+}\exp([\epsilon]_G)\right)\right)\right]^{\vee} - \left[\log\!\left(c_{ltl}(\hat{\mu}_k^{+})\right)\right]^{\vee}\right)\bigg|_{\epsilon=0} \quad (60)

Following a similar procedure to H_{lra,k}, we obtain τ_z^{lt}(μ̂_k^+ exp([ε]_G)) ≈ τ_z^{lt}(μ̂_k^+) + Γ^{ltz} (similar to Equation (42)), and [log(c_ltl(μ̂_k^+ exp([ε]_G)))]^∨ ≈ [log(c_ltl(μ̂_k^+))]^∨ + 2(τ_z^{lt}(μ̂_k^+))ᵀ E(T̂_k^{p+}[p_p^{lh}]^⊙ ε^{Tp} − T̂_k^{ls+}[p_{ls}^{lk}]^⊙ ε^{Tls}) (similar to Equation (43)), which, when substituted into Equation (60), gives us Equation (61).

[Equation (61), the explicit expression for C_{ltl,k}, appears as an image in the source and is not reproduced here.]
Hinge Knee Joint Constraint

Secondly, the hinge knee joint constraint, as defined by Equation (62), is enforced by requiring the long (z) axis of the thigh to be perpendicular to the mediolateral (y) axis of the shank. For example, on the left leg, we would want r_y^{ls} to be perpendicular to the thigh long axis vector τ_z^{lt}(μ̂_k^+) (i.e., the dot product of r_y^{ls} and τ_z^{lt}(μ̂_k^+) should be 0). Refer to Figure 2 for visualization. This formulation is similar to Section 2.3, Equation (4) of [9].

\left[\log\!\left(c_{lkh}(X_k)\right)\right]^{\vee} = \left(E\,T^{ls} i_y\right)^T\tau_z^{lt}(X_k) = \left(r_y^{ls}\right)^T\tau_z^{lt}(X_k) \in \mathbb{R}, \quad \left[\log\!\left(D_{lkh}\right)\right]^{\vee} = 0 \quad (62)

Following a similar procedure to C_{ltl,k} and taking X_k = μ̂_k^+, [log(c_lkh(μ̂_k^+))]^∨ and [log(c_lkh(μ̂_k^+ exp([ε]_G)))]^∨ can be calculated as shown in Equations (63) and (64), respectively.

\left[\log\!\left(c_{lkh}(\hat{\mu}_k^{+})\right)\right]^{\vee} = \left(E\,\hat{T}_k^{ls+} i_y\right)^T\tau_z^{lt}(\hat{\mu}_k^{+}) \quad (63)
[Equation (64), the first-order expansion of [log(c_lkh(μ̂_k^+ exp([ε]_G)))]^∨, appears as an image in the source and is not reproduced here.]

Clkh,k can be calculated using Equation (65).

C_{lkh,k} = \frac{\partial}{\partial\epsilon}\left[\log\!\left(c_{lkh}(\hat{\mu}_k^{+})^{-1} c_{lkh}\!\left(\hat{\mu}_k^{+}\exp([\epsilon]_G)\right)\right)\right]^{\vee}\bigg|_{\epsilon=0} = \frac{\partial}{\partial\epsilon}\left(\left[\log\!\left(c_{lkh}\!\left(\hat{\mu}_k^{+}\exp([\epsilon]_G)\right)\right)\right]^{\vee} - \left[\log\!\left(c_{lkh}(\hat{\mu}_k^{+})\right)\right]^{\vee}\right)\bigg|_{\epsilon=0} \quad (65)

Substituting Equations (63) and (64) into Equation (65) gives us Equation (66).

[Equation (66), the explicit expression for C_{lkh,k}, appears as an image in the source and is not reproduced here.]
Knee Range of Motion Constraint

Thirdly, the knee ROM constraint is defined by Equation (69) and is only enforced if the knee angle, αlk, is outside the allowed ROM. The bounded knee angle, αlk, is calculated by Equation (67). Equation (69) is obtained by expanding Equation (67) to Equation (68) which when rearranged gives us [log(clkr(Xk))] (i.e., Lie group representation of Equation (26) in [15]). Note that rlszlt is the normalized thigh long axis expressed in the left shank frame.

\alpha^{lk} = \tan^{-1}\!\left(\frac{(r_z^{ls})^T r_z^{lt}}{(r_x^{ls})^T r_z^{lt}}\right) + \frac{\pi}{2}, \quad \alpha^{lk} = \min\!\left(\alpha^{lk,max}, \max\!\left(\alpha^{lk,min}, \alpha^{lk}\right)\right) \quad (67)
\frac{r_z^{lt}\cdot r_z^{ls}}{r_z^{lt}\cdot r_x^{ls}} = \frac{\sin\!\left(\alpha^{lk}-\frac{\pi}{2}\right)}{\cos\!\left(\alpha^{lk}-\frac{\pi}{2}\right)} \quad (68)
\left[\log\!\left(c_{lkr}(X_k)\right)\right]^{\vee} = \Big(E\,T^{ls}\underbrace{\left(i_z\cos\!\left(\alpha^{lk}-\tfrac{\pi}{2}\right) - i_x\sin\!\left(\alpha^{lk}-\tfrac{\pi}{2}\right)\right)}_{r_{ls}^{z^{lt}}\;=\;\text{long axis of left thigh in shank frame}}\Big)^T\tau_z^{lt}(X_k) \in \mathbb{R}, \quad \left[\log\!\left(D_{lkr}\right)\right]^{\vee} = 0 \quad (69)

Following a similar procedure to C_{lkh,k} (i.e., replacing i_y in Equation (64) with r_{ls}^{z^{lt}}) and taking X_k = μ̂_k^+, C_{lkr,k} can be calculated from [log(c_lkr(μ̂_k^+ exp([ε]_G)))]^∨ ≈ [log(c_lkr(μ̂_k^+))]^∨ + (E T̂_k^{ls+} r_{ls}^{z^{lt}})ᵀ E(T̂_k^{p+}[p_p^{lh}]^⊙ ε^{Tp} − T̂_k^{ls+}[p_{ls}^{lk}]^⊙ ε^{Tls}) + (τ_z^{lt}(μ̂_k^+))ᵀ E T̂_k^{ls+}[r_{ls}^{z^{lt}}]^⊙ ε^{Tls}, as shown in Equation (70).

[Equation (70), the explicit expression for C_{lkr,k}, appears as an image in the source and is not reproduced here.]

3.3. Post-Processing

The orientations of the pelvis and shanks are obtained from the state μ̃_k^+. The orientation of the left thigh, R̃^{lt+}, can be calculated from the normalized thigh long axis, r̃_z^{lt+} = τ_z^{lt}(μ̃_k^+)/‖τ_z^{lt}(μ̃_k^+)‖ (the explicit matrix construction appears as images in the source). The orientation of the right thigh, R̃^{rt+}, is calculated similarly.

4. Experiment

An extension of the dataset from [15] was used to evaluate our L5S based algorithms. The movements involved are listed in Table 1 (note the addition of dynamic movements), and were collected from nine healthy subjects (7 men and 2 women, weight 63.0±6.8 kg, height 1.70±0.06 m, age 24.6±3.9 years old), with no known gait abnormalities. Raw data were captured using a commercial IMC (i.e., Xsens Awinda) at 100 Hz sampling rate with IMUs attached to the pelvis and ankles, compared against a benchmark OMC (i.e., the setup followed the Vicon Plug-in Gait protocol in a ∼4×4 m² capture area). The experiment was approved by the Human Research Ethics Board of the University of New South Wales (UNSW) with approval number HC180413.

Table 1.

Types of movements done in the validation experiment.

Movement Description Duration Group
Walk Walk straight and return ∼30 s F
Figure-of-eight Walk along figure-of-eight path ∼60 s F
Zig-zag Walk along zig-zag path ∼60 s F
5-minute walk Unscripted walk and stand ∼300 s F
Speedskater Speedskater on the spot ∼30 s D
TUG Timed up-and-go test ∼30 s D
Jog Jog straight and return ∼30 s D
Jumping jacks Jumping jacks on the spot ∼30 s D
High-knee jog High-knee jog on the spot ∼30 s D

F denotes free walk, D denotes dynamic movements.

Frame alignment and yaw offset calibrations are similar to Section III-B of [15]. The experiments and algorithm were implemented using Matlab 2020a, with the initial state μ̃_0^+ (i.e., position, orientation, and velocity) obtained from the OMC system (i.e., Vicon) and the initial error covariance P_0^+ set to 0.5 I_{27×27}. The variance parameters for the process and measurement error covariance matrices Q and R are shown in Table 2.

Table 2.

Parameters for error covariance matrices, Q and R.

Q Parameters R Parameters
σa2 σω2 σori2 σmp2 σls2 and σrs2 σdl2 and σdr2 σda2 σlim2
(m2·s4) (rad2·s2) (rad2) (m2) (m2·s2 and m2) (m2) (m2) (m2)
10219 10319 119 0.1 [0.0113104] 10 1 10118

where 1_n is a 1×n row vector with all elements equal to 1.

The inter-IMU distance measurements, d̆^{pla}, d̆^{pra}, and d̆^{lra}, were simulated by calculating the distance from the mid-pelvis to the left and right ankles and adding normally distributed positional noise with different standard deviations (i.e., σ_dist ∈ {0, 0.01, …, 0.1, 0.15, 0.2} m). Each trial was simulated five times.
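A minimal sketch of this simulation step is shown below (illustrative NumPy; here the noise is added directly to the scalar distance).

```python
import numpy as np

def simulate_distance(p_a, p_b, sigma_dist, rng=None):
    """Simulated inter-IMU distance: the true distance between two OMC-derived positions
    plus zero-mean Gaussian noise with standard deviation sigma_dist (Section 4)."""
    rng = rng if rng is not None else np.random.default_rng()
    return np.linalg.norm(np.asarray(p_a) - np.asarray(p_b)) + rng.normal(0.0, sigma_dist)
```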

Lastly, the evaluation was done using the following metrics: (1) Mean position and orientation root-mean-square error (RMSE) (e.g., similar to [15,17] as shown in Equations (71) and (72)), where pkb and Rkb are obtained from the benchmark OMC system, p˜kb+ and R˜kb+ are obtained from the algorithm. Note that as the global position of the estimate is still prone to drift due to the absence of an external global position reference, the root position of our system was set equal to that of the benchmark system (i.e., the mid-pelvis is placed at the origin in the world frame for all RMSE calculations). (2) Joint angles RMSE with bias removed (i.e., the mean difference between the angles over each entire trial was subtracted) and correlation coefficient (CC) of the hip in the sagittal (Y), frontal (X), and transverse (Z) planes and of the knee in the sagittal (Y) plane. Note that these joint angles are commonly used parameters in gait analysis. (3) Spatiotemporal gait parameters (e.g., total travelled distance (TTD) deviation, average stride length, and gait speed of the foot). Refer to Section III of [15] for more details.

e_{pos,k} = \frac{1}{N_{pos}}\sum_{b\in D_P}\left\|p_k^{b} - \tilde{p}_k^{b+}\right\|, \quad N_{pos} = 6, \quad D_P = \{lh, rh, lk, rk, la, ra\} \quad (71)
e_{ori,k} = \frac{1}{N_{ori}}\sum_{b\in D_O}\left\|\left[\log\!\left(R_k^{b}\left(\tilde{R}_k^{b+}\right)^T\right)\right]^{\vee}\right\|, \quad N_{ori} = 2, \quad D_O = \{lt, rt\} \quad (72)
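The per-time-step error metrics of Equations (71) and (72) can be computed as in the following illustrative sketch (NumPy/SciPy; the dictionaries of joint positions and segment rotation matrices are assumed inputs, not part of the original article).

```python
import numpy as np
from scipy.spatial.transform import Rotation

def e_pos_k(p_true, p_est):
    """Equation (71): mean position error over the joint set D_P at one time step.
    p_true/p_est map joint name -> 3D position."""
    return np.mean([np.linalg.norm(p_true[b] - p_est[b]) for b in p_true])

def e_ori_k(R_true, R_est):
    """Equation (72): mean orientation error over D_O at one time step, i.e., the norm of
    the SO(3) logarithm of R^b (R~^b+)^T (computed here via a rotation-vector conversion)."""
    return np.mean([np.linalg.norm(Rotation.from_matrix(R_true[b] @ R_est[b].T).as_rotvec())
                    for b in R_true])
```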

5. Results

5.1. Mean Position and Orientation RMSE, Joint Angle RMSE and CC

In this experiment, multiple variations of the algorithm were tested, as shown in Table 3. Firstly, L5S-3IMU is the algorithm described in this paper (Section 3) with the parameters listed in Table 2. The parameters for L5S-3IMU were selected by taking the best joint angle CC (i.e., mean of free walk and dynamic movements) from a grid search over σ_ω² ∈ {1, 10, 10², 10³} rad²/s² and σ_ori² ∈ {10⁻², 10⁻¹, 1, 10} rad². Secondly, CKF-3IMU and CKF-3IMU+D are the algorithms described in [15,31], respectively. Thirdly, CKF-3I-KB is a modified CKF-3IMU using similar parameters, measurement, and constraint functions as L5S-3IMU. The key difference between CKF-3IMU and CKF-3I-KB is that CKF-3I-KB allows knee bending, denoted by the suffix KB, during the constraint update. Fourthly, L5S-3I-NO is a variation of L5S-3IMU with σ_ω² = 10⁷ rad²/s², σ_ori² = 10⁻¹ rad², and ω̆_k^b = 0 rad/s. The parameters were chosen to have high uncertainty on the tracked orientation (i.e., effectively not using the angular velocity measurements at all), leading to a variation of L5S-3IMU that is similar to our prior work CKF-3IMU, which assumed orientation measurements were noise-free. Lastly, the black-box output (i.e., pelvis, thigh, and shank orientations) from the MVN Studio software (denoted as OSPS) illustrates the performance of a widely-accepted commercial wearable IMC system with an OSPS configuration. For the first to fourth variations, the +D suffix means simulated inter-IMU distance measurements (σ_dist = 0.1 m) were used instead of the pelvis height assumption.

Table 3.

The experiment was tested on the following algorithm variations.

Algorithm Inter-IMU Distance Summary Description
L5S-3IMU N Tracks position and orientation as described in Section 3 with parameters listed in Table 2.
L5S-3IMU+D Y
CKF-3IMU [15] N Only tracks position using a constrained KF.
CKF-3IMU+D [31] Y
CKF-3I-KB N Modified CKF-3IMU using similar parameters as L5S-3IMU (Table 2).
This also allows knee bending during the constraint update.
CKF-3I-KB+D Y
L5S-3I-NO N L5S-3IMU with parameters that assume noise-free orientation (NO) measurements like CKF-3IMU.
L5S-3I-NO+D Y
OSPS N Output from a commercial OSPS wearable IMC system.

Figure 4 shows the mean position and orientation RMSE, mean knee Y and hip joint angle RMSE (bias removed), and CC of different variations of CKF-3IMU and L5S-3IMU for both free walking and dynamic motions. Y, X, and Z refer to the sagittal, frontal, and transverse planes, respectively. CKF-3IMU performed well for free walking (e_pos = 4.27 cm, e_ori = 15.85°, CC = 0.66) [15]. However, a more extensive evaluation showed that it performed poorly for certain dynamic movements (e.g., high-knee jog with e_pos = 18.15 cm, e_ori = 24.87°, CC = 0.02). Removing the no-knee-bending assumption during the constraint update fixed this issue, as shown by the performance of CKF-3I-KB (e.g., high-knee jog improved by ∼9 cm e_pos, ∼9° e_ori, ∼0.4 CC). L5S-3I-NO, which is the L5S version of CKF-3IMU, expectedly has similar performance to CKF-3I-KB (i.e., Δe_pos < 0.5 cm, Δe_ori < 1°, and ΔCC ≤ 0.02 differences). L5S-3IMU, which tracked both position and orientation while assuming there is noise in the orientation measurements, had a slightly better performance (e.g., it improved jumping jacks and high-knee jog by ∼0.1 CC, with <0.03 CC difference for the other movement types). The use of simulated distance measurements with σ_dist = 0.1 m in CKF-3I-KB, L5S-3I-NO, and L5S-3IMU had slight effects for free walking, and a significant improvement for dynamic movements. For free walking, the joint angle RMSE and CC of L5S-3IMU+D compared to L5S-3IMU improved by ∼1° and <0.01 CC, while e_pos and e_ori slightly worsened (<0.5 cm and <1°). These similar results suggest that inferring the pelvis position from simulated distance measurements (σ_dist = 0.1 m) is comparable to our pelvis height assumption, at least for free walking. For dynamic movements, the e_pos, e_ori, joint angle RMSE, and CC of L5S-3IMU+D improved by 2–16 cm, 0–40°, 1–9°, and <0.42, respectively, most significantly for the TUG and high-knee jog movements.

Figure 4.


The performance of CKF, L5S, and OSPS with and without using inter-IMU distance measurements at each motion type.

To give insight into how the accuracy of the simulated inter-IMU distance measurements affects pose estimation performance, Figure 5 shows the mean knee Y and hip joint angle RMSE and CC at different σ_dist values. At σ_dist = 0.1 m, the simulation showed comparable performance between L5S-3IMU, which implements the pelvis height assumption, and L5S-3IMU+D, which uses inter-IMU distance measurements to supplement the pelvis position estimate, for free walking. Significant improvement for dynamic movements can be seen even at σ_dist = 0.2 m. These results suggest that an actual distance measurement sensor should have a noise standard deviation of σ_dist ≤ 0.1 m to improve pose estimation performance. Note that the +D variations in Figure 4 and in the experiments that follow were evaluated at σ_dist = 0.1 m.

Figure 5.


Joint angle RMSE (top) and CC (bottom) of free walk and dynamic movements at different noise level σdist. The broken lines represent L5S-3IMU results (denoted as nD) where inter-IMU distance measurements were not used. The solid lines represent L5S-3IMU+D results (denoted as +D) where we can observe slight and great improvements for free walk and dynamic movements, respectively.

5.2. Hip and Knee Joint Angle RMSE and CC

Figure 6 shows the knee and hip joint angle RMSE (bias removed) and CC of L5S-3IMU and L5S-3IMU+D compared against the OMC output. Y, X, and Z refer to the sagittal, frontal, and transverse planes, respectively. Turning movements and half steps were manually removed from the per-step results of the Walk movement, which is then denoted as Straight Walk. Note that sensor-to-body calibration was only done at the beginning of each trial, not for each step. Between L5S-3IMU and L5S-3IMU+D, there was minimal hip and knee joint angle RMSE and CC improvement for free walking (∼1° RMSE and ∼0.03 CC difference). However, there was significant improvement for most dynamic movements, specifically speed-skater, jog, high-knee jog, and TUG (e.g., 4–17° knee Y and hip Y joint angle RMSE improvements). Furthermore, the CC for dynamic movements started to reach similar performance to the free walk movement, indicating that the inter-IMU distance measurements have indeed made the pose estimator capable of tracking more activities of daily living (ADLs), not just walking.

Figure 6.


The CC of knee (Y) and hip (Y, X, Z) joint angles for L5S-3IMU (denoted as nD) and L5S-3IMU+D (denoted as +D) at each motion type.

Figure 7 shows a sample walk trial. At the peaks of the knee Y angle, the distance between the pelvis and ankle positions of L5S-3IMU+D was a few cm shorter than the actual distance (i.e., the pelvis position was lower than actual while the ankle position was higher), resulting in higher knee Y angle peaks. Violations of our biomechanical constraints are also apparent at t = 4 to 5.5 s, where the subject makes a 180° turn. After the turn, L5S-3IMU and L5S-3IMU+D were able to recover during the straight walking (t = 5.5 to 9.74 s of Figure 7). Notice that the bias between OSPS and OMC can be observed at t = 0 of the hip Y joint angle.

Figure 7.


Knee (Y) and hip (Y, X, Z) joint angle output of L5S-3IMU in comparison with the benchmark system (Vicon) for a Walk trial. The subject walked straight from t = 0 to 3 s, turned 180° around from t = 3 to 5.5 s, and walked straight back to the original starting point from 5.5 s until the end.

5.3. Spatiotemporal Gait Parameters

Table 4 shows the TTD, stride length, and gait speed accuracy computed from the global ankle position estimate of L5S-3IMU, L5S-3IMU+D, and the OMC system for free walk, jogging, and TUG. The use of inter-IMU distance measurements (σdist=0.1 m) helped improve the TTD, stride length, and gait speed accuracy of free walk and TUG (e.g., TTD improved from ∼9% to ∼5%). Refer to the code repository for links to videos of sample trials.

Table 4.

Total travelled distance (TTD) deviation from optical motion capture (OMC) system at the ankles.

Algo. Side TTD Stride Length (cm) Gait Speed (cm·s⁻¹)
Error Actual Error Actual Error
% dev μ med μ ± σ RMS μ med μ ± σ RMS
Freewalk L 8.97% 91 99 8.1±6 9.9 70 74 6.0±5 7.7
L5S-3IMU R 9.00% 93 99 8.3±6 10.3 71 75 6.2±5 8.2
Freewalk L 5.23% 91 99 4.7±7 8.3 70 74 3.6±6 6.6
L5S-3IMU+D R 5.85% 93 99 5.4±8 9.4 71 75 4.1±6 7.4
Jog L 21.35% 81 86 17.4±23 28.5 107 118 19.2±33 38.0
L5S-3IMU R 26.79% 85 97 22.9±25 33.8 111 124 26.4±34 43.1
Jog L 22.40% 81 86 18.2±22 28.4 107 118 21.6±30 37.0
L5S-3IMU+D R 26.70% 85 97 22.8±24 32.8 111 124 27.5±31 41.4
TUG L 18.20% 74 76 13.5±18 22.1 58 60 10.0±15 18.0
L5S-3IMU R 20.98% 79 90 16.6±15 22.5 63 67 13.1±13 18.4
TUG L 3.80% 74 76 2.8±6 6.7 58 60 2.3±5 5.9
L5S-3IMU+D R 4.22% 79 90 3.3±6 6.8 63 67 2.7±5 5.6

where μ and σ denote mean and standard deviation. Error denotes the estimate minus the actual value, while TTD % dev denotes abs(error)/actual TTD.

6. Discussion

In this paper, a Lie group EKF algorithm for lower body pose estimation using only three IMUs, ergonomically placed on the ankles and sacrum to facilitate continuous recording outside the laboratory, was described and evaluated. The algorithm utilizes fewer sensors than other approaches reported in the literature, at the cost of reduced accuracy.

6.1. Mean Position and Orientation RMSE

The mean position and orientation RMSE of L5S-3IMU, L5S-3IMU+D, and related literature (sparse orientation poser (SOP) and sparse inertial poser (SIP) [17]) are listed in Table 5. SOP used orientations measured by IMUs and biomechanical constraints, while SIP used similar information but with the addition of acceleration. Both SOP and SIP were benchmarked against an OSPS system tracking the full body, while our algorithm was benchmarked against an OMC system tracking only the lower body. The e_pos and e_ori (no bias) performance of L5S-3IMU compared to SOP for free walking and jogging was comparable (Δe_pos 0.1–0.5 cm and Δe_ori 2.5–3° differences). The e_pos and e_ori (no bias) of SIP were better than those of L5S-3IMU and L5S-3IMU+D for free walking (∼2.1–2.5 cm and 6.5–7° difference). This improvement was expected, as SIP optimizes the pose over multiple frames whereas our algorithm, like CKF-3IMU, optimizes the pose for each individual frame. For jumping jacks, the e_ori of L5S-3IMU and L5S-3IMU+D was significantly (∼4–8°) better than SOP’s and SIP’s. However, this difference is probably because both SOP and SIP were evaluated on the full body (our algorithm was only evaluated on the lower body) and errors in arm pose estimation may have increased e_ori for the SOP and SIP algorithms.

Table 5.

Mean position and orientation RMSE of L5S-3IMU, L5S-3IMU+D, OSPS, sparse orientation poser (SOP), and sparse inertial poser (SIP) [17].

epos (cm) eori, No Bias (°)
Free Walk Jog Jumping Jacks Free Walk Jog Jumping Jacks
L5S-3I 5.1±1.2 7.3±1.4 8.7±1.6 17.5±2.7 20.2±3.8 12.8±4.0
L5S-3I+D 5.5±1.0 6.2±1.1 4.9±0.9 18.0±2.5 17.4±3.2 12.6±3.2
OSPS 5.4±1.5 5.6±1.2 5.5±1.6 12.9±4.0 10.3±1.8 7.6±3.3
SOP [17] 5.0 8.0 8.0 15.0 22.0 20.0
SIP [17] 3.0 5.0 4.0 11.0 16.0 16.0

Comparing processing times, L5S-3IMU and L5S-3IMU+D were slower than CKF-3IMU, but can still be used in real-time; specifically, CKF-3IMU, L5S-3IMU, and L5S-3IMU+D processed a 1000-frame sequence (i.e., 10 s long) in ∼0.7, ∼2, ∼3.5 s, respectively, on an Intel Core i5-6500 3.2 GHz CPU [15], while SIP [17] took 7.5 min on a quad-core Intel Core i7 3.5 GHz CPU. All set-ups used single-core non-optimized Matlab code. Albeit slower than CKF-3IMU, L5S-3IMU and L5S-3IMU+D could also be used to provide real-time gait parameter measurement to inform actuation of assistive or rehabilitation robotic devices.

6.2. Hip and Knee Joint Angle RMSE and CC

The knee and hip joint angle RMSEs (no bias) and CCs of L5S-3IMU, L5S-3IMU+D, OSPS, and related literature for straight walking (i.e., per-step evaluation) are shown in Table 6 [7,15,37,38]. Similar to other IMC-based systems, L5S-3IMU and L5S-3IMU+D also follow the trend of having sagittal (Y axis) joint angles similar to those captured by OMC systems (0.95 knee Y and >0.83 hip Y CCs), but with significant differences in the frontal and transverse (X and Z axis) joint angles [15,37]. CKF-3IMU performed slightly better (e.g., by 0.03 knee Y and 0.09 hip Y CC), which is expected as the biomechanical constraint (i.e., no-knee-bending) assumption of CKF-3IMU was designed specifically for walking, at the cost of being less accurate for other dynamic movements. Both L5S-3IMU and L5S-3IMU+D were comparable to, and at times even better than (within 2.5° RMSE, 0.1 CC difference), the results of Hu et al. and Tadano et al., indicating excellent per-step reconstruction in the sagittal plane [7,38]. Hu et al. used 4 IMUs (two at the pelvis and one on each foot) and Tadano et al. used an OSPS configuration. Both systems can only estimate the pose in the sagittal plane.

Table 6.

Knee and hip angle RMSE no bias (top) and CC (bottom) of CKF-3IMU, OSPS, and related literature for free walk.

Joint Angle RMSE (°) Knee Sagittal Hip Sagittal Hip Frontal Hip Transverse
L5S-3IMU 7.6±2.6 6.6±2.7 5.0±2.6 8.6±3.6
L5S-3IMU+D 7.1±2.1 7.5±2.1 5.1±2.3 8.9±3.7
OSPS 5.0±1.8 3.6±1.7 4.1±2.2 11.9±4.3
CKF-3IMU [15] 5.7±2.2 4.4±1.9 5.5±2.6 9.0±3.8
Cloete et al. [37] 8.5±5.0 5.8±3.8 7.3±5.2 7.9±4.9
Hu et al. [38] 4.9±3.5 6.8±3.0 - -
Tadano et al. [7] 10.1±1.0 7.9±1.0 - -
Joint Angle CC Knee Sagittal Hip Sagittal Hip Frontal Hip Transverse
L5S-3IMU 0.95±0.03 0.87±0.16 0.76±0.18 0.36±0.36
L5S-3IMU+D 0.95±0.03 0.83±0.14 0.72±0.19 0.29±0.37
OSPS 0.97±0.04 0.95±0.06 0.72±0.19 0.26±0.20
CKF-3IMU [15] 0.98±0.03 0.96±0.08 0.73±0.17 0.26±0.39
Cloete et al. [37] 0.89±0.15 0.94±0.08 0.55±0.40 0.54±0.20
Hu et al. [38] 0.95±0.04 0.97±0.04 - -
Tadano et al. [7] 0.97±0.02 0.98±0.01 - -

Despite the promising performance when using inter-IMU distance measurements, further validation with actual hardware implementation is needed, as the sensor noise in the real world may not necessarily follow a normal distribution and may be non-stationary.

For reference, portable ultrasound-based distance measurement can achieve millimetre accuracy with a sampling rate of 125 Hz [30], while commercial ultra-wideband (UWB)-based distance measurement devices can achieve ∼10 cm accuracy with a sampling rate of 200 Hz [39,40].

Lastly, despite L5S-3IMU and L5S-3IMU+D achieving 0.95 joint angle CCs in the sagittal plane, the unbiased joint angle RMSE (>5°) makes their utility in clinical applications uncertain [41]. Although the algorithm is expected to work on pathological gait where our biomechanical assumptions are satisfied, the overall performance still needs improvement. To achieve clinical utility, one may either use more accurate sensors or average out cycle-to-cycle variation in estimation errors over many gait cycles; for example, use a more accurate distance measurement sensor (σ_dist < 0.1 m). Furthermore, the accuracy must also be validated on a larger and more diverse cohort to quantify its ultimate clinical utility. The evaluation of how these solutions can bridge the gap to clinical application for the proposed system will be part of future work.

6.3. Spatiotemporal Gait Parameters

The focus of the proposed algorithms, L5S-3IMU and L5S-3IMU+D, is to estimate joint kinematics. However, as both algorithms track the global position of the ankles, they are also capable of calculating spatiotemporal gait parameters (performance listed in Table 4). The TTD deviation of our algorithms compared against the gold standard OMC was not as good as that of CKF-3IMU [15] (3.6–3.81% TTD deviation) or other state-of-the-art dead reckoning algorithms [42,43] (0.2–1.5% TTD deviation). Two possible sources of inaccuracy lie (1) in the dead reckoning approximation made in the prediction step, and (2) in the assumption that the velocity of the shank IMU is zero when the associated foot touches the floor; of course, this IMU continues to move with some small velocity on the lower shank during the stance phase. To illustrate the dead reckoning approximation, consider the predicted pelvis pose in Equation (73). In our algorithm, we assumed $\boldsymbol{\psi}^p \approx \mathbf{I}_{3\times 3}$ (note that $\boldsymbol{\Phi}(\Delta t\,\breve{\boldsymbol{\omega}}_k^p) \approx \mathbf{I}_{3\times 3}$ and $\tilde{\mathbf{R}}_{k-1}^{p+}(\breve{\mathbf{R}}_k^p)^T \approx \mathbf{I}_{3\times 3}$ since $\Delta t\,\breve{\boldsymbol{\omega}}_k^p$ is small), which did not significantly affect the joint kinematic estimates but slightly affected the global position estimate. Nevertheless, body drift has been reduced substantially compared to von Marcard et al.'s SIP [17].

$$\hat{\mathbf{T}}_k^p = \tilde{\mathbf{T}}_{k-1}^{p+} \exp\!\left(\begin{bmatrix} (\breve{\mathbf{R}}_k^p)^T\left(\Delta t\,\tilde{\mathbf{v}}_{k-1}^{mp+} + \frac{\Delta t^2}{2}\breve{\mathbf{a}}_k^p\right) \\ \Delta t\,\breve{\boldsymbol{\omega}}_k^p \end{bmatrix}^{\wedge}\right) = \begin{bmatrix} \tilde{\mathbf{R}}_{k-1}^{p+}\exp\!\left([\Delta t\,\breve{\boldsymbol{\omega}}_k^p]_{\times}\right) & \tilde{\mathbf{p}}_{k-1}^{mp+} + \underbrace{\tilde{\mathbf{R}}_{k-1}^{p+}\boldsymbol{\Phi}(\Delta t\,\breve{\boldsymbol{\omega}}_k^p)(\breve{\mathbf{R}}_k^p)^T}_{\boldsymbol{\psi}^p \,\approx\, \mathbf{I}_{3\times 3}}\left(\Delta t\,\tilde{\mathbf{v}}_{k-1}^{mp+} + \frac{\Delta t^2}{2}\breve{\mathbf{a}}_k^p\right) \\ \mathbf{0}_{1\times 3} & 1 \end{bmatrix}$$ (73)
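
The following Matlab sketch numerically illustrates Equation (73) using the SE(3) exponential map, and shows that the factor $\boldsymbol{\psi}^p = \tilde{\mathbf{R}}_{k-1}^{p+}\boldsymbol{\Phi}(\Delta t\,\breve{\boldsymbol{\omega}}_k^p)(\breve{\mathbf{R}}_k^p)^T$ is indeed close to identity for small $\Delta t\,\breve{\boldsymbol{\omega}}_k^p$. The inputs and helper functions are generic placeholders of our own, not code from the released L5S implementation.

```matlab
% Minimal numerical sketch of the pelvis pose prediction in Equation (73),
% using generic SE(3)/SO(3) helper functions and placeholder inputs of our
% own (this is not the released L5S code). It shows that, for small
% Delta_t * omega, psi_p = R_prev * Phi(Delta_t*omega) * R_meas' is close to
% the identity, which is the approximation discussed above.
dt    = 0.01;                                % sample period (s)
Rprev = expm(skew([0.01; -0.02; 0.03]));     % previous pelvis orientation estimate
Rmeas = expm(skew([0.012; -0.019; 0.031]));  % pelvis orientation from the orientation filter
w     = [0.5; 0.2; -0.1];                    % measured angular velocity (rad/s)
v     = [1.2; 0.0; 0.1];                     % previous mid-pelvis velocity estimate (m/s)
a     = [0.1; 0.0; -0.2];                    % free acceleration (m/s^2)
pPrev = [0; 0; 0.9];                         % previous mid-pelvis position (m)

rho   = Rmeas' * (dt*v + 0.5*dt^2*a);        % translational part of the twist
Tprev = [Rprev, pPrev; zeros(1,3), 1];
Tpred = Tprev * expm([skew(dt*w), rho; zeros(1,4)]);   % SE(3) exponential update

psi_p = Rprev * leftJacobianSO3(dt*w) * Rmeas';        % factor approximated as I_3x3
fprintf('||psi_p - I|| = %.2e\n', norm(psi_p - eye(3)));
disp(Tpred);

function S = skew(x)
    % Skew-symmetric (cross-product) matrix of a 3-vector
    S = [0 -x(3) x(2); x(3) 0 -x(1); -x(2) x(1) 0];
end

function J = leftJacobianSO3(phi)
    % Left Jacobian Phi(phi) of SO(3)
    th = norm(phi);
    S  = [0 -phi(3) phi(2); phi(3) 0 -phi(1); -phi(2) phi(1) 0];
    if th < 1e-9
        J = eye(3) + 0.5*S;
    else
        J = eye(3) + (1-cos(th))/th^2*S + (th-sin(th))/th^3*(S*S);
    end
end
```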

6.4. Limitations and Future Work

L5S has pelvis drift, covariance matrix conditioning, and flat-floor limitations similar to those of CKF-3IMU, which is expected, as L5S implements the same measurement and constraint updates as CKF-3IMU, albeit formulated using a Lie group representation instead of vectors and quaternions [15]. The pelvis height and flat-floor assumptions help prevent the pelvis and the ankles from drifting towards each other (i.e., the pelvis drifting downward while the ankles drift upward). However, they also prevent accurate pose estimation of motions such as sitting, lying down, or standing on one leg, where the pose is maintained for a duration much longer than that of a typical gait cycle. The covariance limiter (Section 3.2.2) helps prevent the covariance from becoming badly conditioned (i.e., near-singular), especially for longer trials (e.g., a 5-minute walk), where the position uncertainty grows at a faster rate for the pelvis than for the ankles. As can be observed from Figure 6, substituting the pelvis height assumption with inter-IMU distance measurements can increase the algorithm's accuracy, especially when tracking dynamic movements. If the distance measurement is accurate enough (i.e., smaller $\sigma_{dist}^2$), the inter-IMU distance measurement update may be sufficient to limit the growth of the pelvis position uncertainty, possibly rendering the covariance limiter unnecessary.

Figure 6 shows that the optimized performance of L5S-3IMU, even though it allows the tracked orientation to be corrected by the inter-IMU distance measurements and the tracked position estimate, was only slightly better than that of CKF-3IMU/L5S-3I-NO, which effectively assumed the orientation input from the orientation estimation algorithm to be perfect (i.e., trusted its own tracked orientation state less than the orientation input). As L5S-3IMU requires more computing resources, this result suggests that CKF-3IMU may be more suitable when computing power is limited. To fully leverage the advantages brought by the Lie group representation, additional sensor measurements that can help correct the tracked orientation will be needed (e.g., estimating the angle of arrival between two sensors [44] or using fisheye cameras to improve the pose estimate [45]).

Additional sensor measurements provide new opportunities for automatic calibration, even under an RSC configuration. IMC systems typically require anthropometric measurements (i.e., body segment lengths such as $d^{ls}$) beforehand. By taking an initial distance measurement at some predetermined posture, these anthropometric measurements could be inferred automatically. The formulation for a hinge joint with an IMU on each side has been leveraged to enable automatic sensor-to-segment calibration (i.e., aligning the sensor frame to the body frame) and even completely magnetometer-free orientation estimation [46,47]. Magnetometer-free orientation estimation removes the yaw offset issue caused by the inhomogeneous magnetic field in indoor environments, where disturbances are typically stronger closer to the floor [48]. Such a two-IMU hinge joint approach may not be directly applicable to RSC configurations (e.g., our algorithm has an IMU on only one side of the knee joint). However, distance measurements may be used to compensate for the missing information from the uninstrumented segment, and a modified version may be developed for an RSC configuration.
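
As a rough illustration of this calibration idea, suppose the initial distance measurement is taken while the subject stands upright with the knee fully extended, the pelvis $y$ axis horizontal, and the thigh and shank long axes vertical, and that the pelvis width $d^p$ is known. Under these assumptions (ours, not a procedure from this paper), Equation (A1) with $\theta_k^{lk}=0$ reduces to two orthogonal components, so the combined thigh-plus-shank length follows from a single pelvis-to-ankle range reading, as in the Matlab sketch below.

```matlab
% Rough illustration only (assumptions are ours, not a procedure from this
% paper): an upright calibration pose with the knee fully extended, the
% pelvis y-axis horizontal, the thigh/shank long axes vertical, and a known
% pelvis width d_p. Equation (A1) with theta = 0 then reduces to two
% orthogonal components, so the combined thigh + shank length follows from
% a single pelvis-to-ankle range reading.
dMeasStanding = 0.92;    % pelvis-to-left-ankle distance at the calibration pose (m)
dPelvis       = 0.24;    % assumed known pelvis width (m)

legLength = sqrt(dMeasStanding^2 - (dPelvis/2)^2);   % estimate of d_lt + d_ls (m)
fprintf('Estimated thigh + shank length: %.3f m\n', legLength);
```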

Enabling longer-term tracking of ADLs in the subject's natural environment may lead to novel investigations of movement disorder progression and the identification of early intervention opportunities. This work is one of the early steps towards seamless remote gait monitoring. Developing solutions that further increase accuracy, increase the number of body segments tracked (e.g., tracking the full body under an RSC configuration [17]), or use even fewer IMUs (e.g., tracking the lower body using two IMUs [49]) will be investigated in the future.

7. Conclusions

This paper presented a Lie group CEKF-based algorithm (L5S-3IMU) to estimate lower limb kinematics using an RSC configuration of IMUs, supplemented by inter-IMU distance measurements in one implementation (L5S-3IMU+D). The knee and hip joint angle RMSEs in the sagittal plane for straight walking were 7.6±2.6° and 6.6±2.7°, respectively, while the CCs were 0.95±0.03 and 0.87±0.16, respectively. We also showed that inter-IMU distance measurements are a promising new source of information for improving the pose estimation of IMC systems under an RSC configuration. Simulations showed that performance improved dramatically for dynamic movements, even at higher noise levels (e.g., $\sigma_{dist} = 0.2$ m), and that performance similar to that of L5S-3IMU was achieved at $\sigma_{dist} = 0.1$ m for free walking movements. However, further validation with actual distance measurement sensors is recommended. The source code for the L5S algorithm and links to sample videos will be made available at https://git.io/JTRQ3.

Acknowledgments

This research was supported by an Australian Government Research Training Program (RTP) Scholarship.

Abbreviations

The following abbreviations are used in this manuscript:

OMC Optical Motion Capture
IMC Inertial Motion Capture
IMU Inertial Measurement Unit
OSPS One Sensor per Body Segment
RSC Reduced-Sensor-Count
KF Kalman Filter
EKF Extended Kalman Filter
CEKF Constrained Extended Kalman Filter
ADL Activities of Daily Living
TTD Total Travelled Distance
SOP Sparse Orientation Poser
SIP Sparse Inertial Poser

Appendix A. Derivation of Pelvis-to-Ankle Distance Measurement

This appendix explains the derivation of the pelvis-to-ankle measurement vector (Equation (46)) obtained from the pelvis-to-ankle distance measurements, $\breve{d}_k^{pla}$ and $\breve{d}_k^{pra}$, while assuming hinged knee joints and constant body segment lengths. For brevity, only the left-side formulation is shown; the right side (i.e., the pelvis-to-right-ankle vector) can be calculated similarly.

First, we solve for an estimated left knee angle, $\hat{\theta}_k^{lk}$ (Equation (47)), from the measured pelvis-to-left-ankle distance, $\breve{d}_k^{pla}$. The pelvis-to-left-ankle vector, $\boldsymbol{\tau}^{pla}(\hat{\boldsymbol{\mu}}_k, \theta_k^{lk})$ (Equation (A1)), can be defined as the sum of the mid-pelvis-to-hip, thigh long axis, and shank long axis vectors.

$$\boldsymbol{\tau}^{pla}(\hat{\boldsymbol{\mu}}_k,\theta_k^{lk}) = \underbrace{\frac{d^{p}}{2}\hat{\mathbf{T}}_k^{p}\mathbf{i}_y - d^{ls}\hat{\mathbf{T}}_k^{ls}\mathbf{i}_z}_{\boldsymbol{\psi}^{pla}:\ \text{half pelvis } y\text{-axis} \,+\, \text{shank } z\text{-axis}} + \underbrace{d^{lt}\hat{\mathbf{T}}_k^{ls}\left(\mathbf{i}_x\sin(\theta_k^{lk}) - \mathbf{i}_z\cos(\theta_k^{lk})\right)}_{\text{thigh } z\text{-axis in shank frame}}$$ (A1)

By the definition of $(\breve{d}_k^{pla})^2$ and expanding $\boldsymbol{\tau}^{pla}(\hat{\boldsymbol{\mu}}_k,\theta_k^{lk})$ with Equation (A1), we obtain

$$(\breve{d}_k^{pla})^2 = \left(\boldsymbol{\tau}^{pla}(\hat{\boldsymbol{\mu}}_k,\theta_k^{lk})\right)^T \boldsymbol{\tau}^{pla}(\hat{\boldsymbol{\mu}}_k,\theta_k^{lk}) = \boldsymbol{\psi}^{pla\,T}\boldsymbol{\psi}^{pla} - 2 d^{lt}\,\boldsymbol{\psi}^{pla\,T}\hat{\mathbf{T}}_k^{ls}\mathbf{i}_z\cos(\theta_k^{lk}) + 2 d^{lt}\,\boldsymbol{\psi}^{pla\,T}\hat{\mathbf{T}}_k^{ls}\mathbf{i}_x\sin(\theta_k^{lk}) + (d^{lt})^2$$ (A2)

Equation (A2) can be rearranged in the form of Equation (A3) with α, β, γ as shown in Equation (A4).

$$\alpha\cos(\theta_k^{lk}) + \beta\sin(\theta_k^{lk}) = \gamma$$ (A3)
$$\alpha = -2 d^{lt}\,\boldsymbol{\psi}^{pla\,T}\hat{\mathbf{T}}_k^{ls}\mathbf{i}_z, \qquad \beta = 2 d^{lt}\,\boldsymbol{\psi}^{pla\,T}\hat{\mathbf{T}}_k^{ls}\mathbf{i}_x, \qquad \gamma = (\breve{d}_k^{pla})^2 - \boldsymbol{\psi}^{pla\,T}\boldsymbol{\psi}^{pla} - (d^{lt})^2$$ (A4)

Solving Equation (A3) for $\hat{\theta}_k^{lk}$ yields a quadratic equation with two solutions, as shown in Equations (A5) and (47). Of the two, $\hat{\theta}_k^{lk}$ is set to the solution closer to the current left knee angle estimate from the prediction step. This solution serves as a pseudo-measurement of the knee angle.

$$\hat{\theta}_k^{lk} = \cos^{-1}\!\left(\frac{\alpha\gamma \pm \beta\sqrt{\alpha^2 + \beta^2 - \gamma^2}}{\alpha^2 + \beta^2}\right)$$ (A5)

Finally, $\mathbf{Z}_{pla,k}$, the KF measurement shown in Equations (A6) and (46), is the inter-IMU vector between the pelvis and the left ankle, calculated using Equation (A1) with input $\hat{\theta}_k^{lk}$.

$$\mathbf{Z}_{pla,k} = \boldsymbol{\tau}^{pla}(\hat{\boldsymbol{\mu}}_k, \hat{\theta}_k^{lk})$$ (A6)
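
The following Matlab sketch walks through this left-side derivation numerically, using placeholder orientations, segment lengths, and a measured distance of our own choosing; it is illustrative only and not the released implementation.

```matlab
% Minimal numerical sketch of the left-side derivation above, with
% placeholder orientations, segment lengths, and measured distance of our
% own choosing (not the released implementation).
Rp  = eye(3);                % pelvis orientation estimate (world frame)
Rls = eye(3);                % left shank orientation estimate (world frame)
dP  = 0.24; dLT = 0.45; dLS = 0.43;   % pelvis width, thigh length, shank length (m)
dMeas     = 0.85;            % measured pelvis-to-left-ankle distance (m)
thetaPred = 0.30;            % predicted left knee angle from the prediction step (rad)

ix = [1;0;0]; iy = [0;1;0]; iz = [0;0;1];
psi   = (dP/2)*Rp*iy - dLS*Rls*iz;                % fixed part of Equation (A1)
alpha = -2*dLT*psi'*Rls*iz;                       % Equation (A4)
beta  =  2*dLT*psi'*Rls*ix;
gamma = dMeas^2 - psi'*psi - dLT^2;

root  = sqrt(max(alpha^2 + beta^2 - gamma^2, 0)); % guard against small negative values
cands = acos([alpha*gamma + beta*root, alpha*gamma - beta*root] / (alpha^2 + beta^2));
[~, idx] = min(abs(cands - thetaPred));           % pick the root closest to the prediction
thetaHat = cands(idx);                            % Equation (A5)

Zpla = psi + dLT*Rls*(ix*sin(thetaHat) - iz*cos(thetaHat));   % Equations (A1) and (A6)
disp(Zpla);
```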

Author Contributions

Conceptualization, L.W.F.S.; methodology, L.W.F.S.; software, L.W.F.S.; validation, L.W.F.S.; formal analysis, L.W.F.S. and S.J.R.; investigation, L.W.F.S.; resources, S.J.R. and N.H.L.; data curation, L.W.F.S.; writing–original draft preparation, L.W.F.S.; writing–review and editing, L.W.F.S., S.J.R., and N.H.L.; visualization, L.W.F.S.; supervision, S.J.R. and N.H.L.; project administration, S.J.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Merriaux P., Dupuis Y., Boutteau R., Vasseur P., Savatier X. A Study of Vicon System Positioning Performance. Sensors. 2017;17:1591. doi: 10.3390/s17071591. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Glowinski S., Łosiński K., Kowiański P., Waśkow M., Bryndal A., Grochulska A. Inertial sensors as a tool for diagnosing discopathy lumbosacral pathologic gait: A preliminary research. Diagnostics. 2020;10:342. doi: 10.3390/diagnostics10060342. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Rovini E., Maremmani C., Cavallo F. A wearable system to objectify assessment of motor tasks for supporting parkinson’s disease diagnosis. Sensors. 2020;20:2630. doi: 10.3390/s20092630. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Lloréns R., Gil-Gómez J.A., Alcañiz M., Colomer C., Noé E. Improvement in balance using a virtual reality-based stepping exercise: A randomized controlled trial involving individuals with chronic stroke. Clin. Rehabil. 2015;29:261–268. doi: 10.1177/0269215514543333. [DOI] [PubMed] [Google Scholar]
  • 5.Shull P., Lurie K., Shin M., Besier T., Cutkosky M. Haptic gait retraining for knee osteoarthritis treatment; Proceedings of the 2010 IEEE Haptics Symposium; Waltham, MA, USA. 25–26 March 2010; pp. 409–416. [Google Scholar]
  • 6.Roetenberg D., Luinge H., Slycke P. Xsens MVN: Full 6DOF human motion tracking using miniature inertial sensors. Xsens Motion Technologies BV Tech. Rep. 2009;3:1–9. [Google Scholar]
  • 7.Tadano S., Takeda R., Miyagawa H. Three dimensional gait analysis using wearable acceleration and gyro sensors based on quaternion calculations. Sensors. 2013;13:9321–9343. doi: 10.3390/s130709321. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Yoon P.K., Zihajehzadeh S., Kang B.S., Park E.J. Robust Biomechanical Model-Based 3-D Indoor Localization and Tracking Method Using UWB and IMU. IEEE Sens. J. 2017;17:1084–1096. doi: 10.1109/JSEN.2016.2639530. [DOI] [Google Scholar]
  • 9.Meng X.L., Zhang Z.Q., Sun S.Y., Wu J.K., Wong W.C. Biomechanical model-based displacement estimation in micro-sensor motion capture. Meas. Sci. Technol. 2012;23:055101. doi: 10.1088/0957-0233/23/5/055101. [DOI] [Google Scholar]
  • 10.Del Rosario M.B., Lovell N.H., Redmond S.J. Quaternion-based complementary filter for attitude determination of a smartphone. IEEE Sens. J. 2016;16:6008–6017. doi: 10.1109/JSEN.2016.2574124. [DOI] [Google Scholar]
  • 11.Del Rosario M.B., Khamis H., Ngo P., Lovell N.H., Redmond S.J. Computationally efficient adaptive error-state Kalman filter for attitude estimation. IEEE Sens. J. 2018;18:9332–9342. doi: 10.1109/JSEN.2018.2864989. [DOI] [Google Scholar]
  • 12.Tautges J., Zinke A., Krüger B., Baumann J., Weber A., Helten T., Müller M., Seidel H., Eberhardt B. Motion reconstruction using sparse accelerometer data. ACM Trans. Graph. (TOG) 2011;30:18. doi: 10.1145/1966394.1966397. [DOI] [Google Scholar]
  • 13.Huang Y., Kaufmann M., Aksan E., Black M.J., Hilliges O., Pons-Moll G. SIGGRAPH Asia 2018 Technical Papers, Tokyo, Japan. Association for Computing Machinery, Inc.; New York, NY, USA: 2018. Deep inertial poser: Learning to reconstruct human pose from sparse inertial measurements in real time. [Google Scholar]
  • 14.Salarian A., Burkhard P.R., Vingerhoets F.J.G., Jolles B.M., Aminian K. A novel approach to reducing number of sensing units for wearable gait analysis systems. IEEE Trans. Biomed. Eng. 2013;60:72–77. doi: 10.1109/TBME.2012.2223465. [DOI] [PubMed] [Google Scholar]
  • 15.Sy L.W., Raitor M., Del Rosario M.B., Khamis H., Kark L., Lovell N.H., Redmond S.J. Estimating lower limb kinematics using a reduced wearable sensor count. IEEE Trans. Biomed. Eng. 2020 doi: 10.1109/TBME.2020.3026464. in press. [DOI] [PubMed] [Google Scholar]
  • 16.Lin J.F., Kulić D. Human pose recovery using wireless inertial measurement units. Physiol. Meas. 2012;33:2099–2115. doi: 10.1088/0967-3334/33/12/2099. [DOI] [PubMed] [Google Scholar]
  • 17.von Marcard T., Rosenhahn B., Black M.J., Pons-Moll G. Computer Graphics Forum. Volume 36. Wiley Online Library; Hoboken, NJ, USA: 2017. Sparse inertial poser: Automatic 3D human pose estimation from sparse IMUs; pp. 349–360. [Google Scholar]
  • 18.Barfoot T.D. State Estimation for Robotics. Cambridge University Press; Cambridge, UK: 2017. [Google Scholar]
  • 19.Wang Y., Chirikjian G.S. Error propagation on the Euclidean group with applications to manipulator kinematics. IEEE Trans. Robot. 2006;22:591–602. doi: 10.1109/TRO.2006.878978. [DOI] [Google Scholar]
  • 20.Barfoot T.D., Furgale P.T. Associating uncertainty with three-dimensional poses for use in estimation problems. IEEE Trans. Robot. 2014;30:679–693. doi: 10.1109/TRO.2014.2298059. [DOI] [Google Scholar]
  • 21.Bourmaud G., Mégret R., Arnaudon M., Giremus A. Continuous-discrete extended Kalman filter on matrix Lie groups using concentrated Gaussian distributions. J. Math. Imaging Vis. 2014;51:209–228. doi: 10.1007/s10851-014-0517-0. [DOI] [Google Scholar]
  • 22.Brossard M., Bonnabel S., Condomines J.P. Unscented Kalman filtering on Lie groups; Proceedings of the IEEE International Conference on Intelligent Robots and Systems; Vancouver, BC, Canada. 24–28 September 2017; pp. 2485–2491. [Google Scholar]
  • 23.Ćesić J., Joukov V., Petrović I., Kulić D. Full body human motion estimation on lie groups using 3D marker position measurements; Proceedings of the IEEE-RAS International Conference on Humanoid Robots; Cancun, Mexico. 15–17 November 2016; pp. 826–833. [Google Scholar]
  • 24.Joukov V., Cesic J., Westermann K., Markovic I., Petrovic I., Kulic D. Estimation and observability analysis of human motion on Lie groups. IEEE Trans. Cybern. 2019;50:1–12. doi: 10.1109/TCYB.2019.2933390. [DOI] [PubMed] [Google Scholar]
  • 25.Joukov V., Cesic J., Westermann K., Markovic I., Kulic D., Petrovic I. Human motion estimation on Lie groups using IMU measurements; Proceedings of the IEEE International Conference on Intelligent Robots and Systems; Vancouver, BC, Canada. 24–28 September 2017; pp. 1965–1972. [Google Scholar]
  • 26.Hol J.D., Dijkstra F., Luinge H., Schon T.B. Tightly coupled UWB/IMU pose estimation; Proceedings of the 2009 IEEE International Conference on Ultra-Wideband; Vancouver, BC, Canada. 9–11 September 2009; pp. 688–692. [Google Scholar]
  • 27.Malleson C., Gilbert A., Trumble M., Collomosse J., Hilton A. Real-time full-body motion capture from video and IMUs; Proceedings of the International Conference on 3D Vision (3DV); Qingdao, China. 10–12 October 2017. [Google Scholar]
  • 28.Gilbert A., Trumble M., Malleson C., Hilton A., Collomosse J. Fusing visual and inertial sensors with semantics for 3D human pose estimation. Int. J. Comput. Vis. 2019;127:381–397. doi: 10.1007/s11263-018-1118-y. [DOI] [Google Scholar]
  • 29.Helten T., Muller M., Seidel H.P., Theobalt C. Real-time body tracking with one depth camera and inertial sensors; Proceedings of the IEEE International Conference on Computer Vision; Sydney, Australia. 8–12 April 2013; pp. 1105–1112. [Google Scholar]
  • 30.Vlasic D., Adelsberger R., Vannucci G., Barnwell J., Gross M., Matusik W., Popović J. Practical motion capture in everyday surroundings. ACM Trans. Graph. (TOG) 2007;26:35. doi: 10.1145/1276377.1276421. [DOI] [Google Scholar]
  • 31.Sy L., Lovell N.H., Redmond S.J. Estimating lower limb kinematics using distance measurements with a reduced wearable inertial sensor count; Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); Montreal, QC, Canada. 20–24 July 2020; [DOI] [PubMed] [Google Scholar]
  • 32.Sy L., Lovell N.H., Redmond S.J. Estimating lower limb kinematics using a Lie group constrained EKF and a reduced wearable IMU count; Proceedings of the 2020 8th IEEE International Conference on Biomedical Robotics and Biomechatronics (Biorob); New York, NY, USA. 29 November–1 December 2020. [Google Scholar]
  • 33.Selig J.M. Computational Noncommutative Algebra and Applications. Springer; Berlin/Heidelberg, Germany: 2004. Lie groups and Lie algebras in robotics; pp. 101–125. [Google Scholar]
  • 34.Stillwell J. Naive Lie Theory. Springer Science & Business Media; Berlin/Heidelberg, Germany: 2008. [Google Scholar]
  • 35.Chirikjian G. Stochastic Models, Information Theory, and Lie Groups. II: Analytic Methods and Modern Applications. Springer Science & Business Media; Berlin/Heidelberg, Germany: 2012. [Google Scholar]
  • 36.Bourmaud G., Megret R., Giremus A., Berthoumieu Y. Discrete extended Kalman filter on Lie groups; Proceedings of the European Signal Processing Conference; Marrakech, Morocco. 9–13 September 2013; pp. 1–5. [Google Scholar]
  • 37.Cloete T., Scheffer C. Benchmarking of a full-body inertial motion capture system for clinical gait analysis; Proceedings of the 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society; Vancouver, BC, Canada. 20–25 August 2008; pp. 4579–4582. [DOI] [PubMed] [Google Scholar]
  • 38.Hu X., Yao C., Soh G.S. Performance evaluation of lower limb ambulatory measurement using reduced inertial measurement units and 3R gait model; Proceedings of the IEEE International Conference on Rehabilitation Robotics; Singapore. 11–14 August 2015; pp. 549–554. [Google Scholar]
  • 39.Malajner M., Planinsic P., Gleich D. UWB ranging accuracy; Proceedings of the 2015 22nd International Conference on Systems, Signals and Image Processing; London, UK. 10–12 September 2015; pp. 61–64. [Google Scholar]
  • 40.Ledergerber A., D’Andrea R. Ultra-wideband range measurement model with Gaussian processes; Proceedings of the 1st Annual IEEE Conference on Control Technology and Applications; Hawaii, HI, USA. 27–30 August 2017; pp. 1929–1934. [Google Scholar]
  • 41.McGinley J.L., Baker R., Wolfe R., Morris M.E. The reliability of three-dimensional kinematic gait measurements: A systematic review. Gait Posture. 2009;29:360–369. doi: 10.1016/j.gaitpost.2008.09.003. [DOI] [PubMed] [Google Scholar]
  • 42.Jimenez A.R., Seco F., Prieto J.C., Guevara J. Indoor Pedestrian navigation using an INS/EKF framework for yaw drift reduction and a foot-mounted IMU; Proceedings of the 2010 7th Workshop on Positioning, Navigation and Communication; Dresden, Germany. 11–12 March 2010; pp. 135–143. [Google Scholar]
  • 43.Zhang W., Li X., Wei D., Ji X., Yuan H. A foot-mounted PDR System Based on IMU/EKF+HMM+ ZUPT+ZARU+HDR+compass algorithm; Proceedings of the 2017 International Conference on Indoor Positioning and Indoor Navigation; Sapporo, Japan. 18–21 September 2017; pp. 1–5. [Google Scholar]
  • 44.Dotlic I., Connell A., Ma H., Clancy J., McLaughlin M. Angle of arrival estimation using decawave DW1000 integrated circuits; Proceedings of the 2017 14th Workshop on Positioning, Navigation and Communications; Bremen, Germany. 25–26 October 2017; pp. 1–6. [Google Scholar]
  • 45.Xu W., Chatterjee A., Zollh M., Rhodin H., Fua P., Seidel H.p., Theobalt C. Mo2Cap2: Real-time mobile 3D motion capture with a cap-mounted fisheye camera. IEEE Trans. Vis. Comput. Graph. 2019;25:2093–2101. doi: 10.1109/TVCG.2019.2898650. [DOI] [PubMed] [Google Scholar]
  • 46.Laidig D., Schauer T., Seel T. Exploiting kinematic constraints to compensate magnetic disturbances when calculating joint angles of approximate hinge joints from orientation estimates of inertial sensors; Proceedings of the IEEE International Conference on Rehabilitation Robotics; London, UK. 17–20 July 2017; pp. 971–976. [DOI] [PubMed] [Google Scholar]
  • 47.Eckhoff K., Kok M., Lucia S., Seel T. Sparse magnetometer-free inertial motion tracking—A condition for observability in double hinge joint systems. arXiv 2020, arXiv:2002.00902. [Google Scholar]
  • 48.de Vries W.H., Veeger H.E., Baten C.T., van der Helm F.C. Magnetic distortion in motion labs, implications for validating inertial magnetic sensors. Gait Posture. 2009;29:535–541. doi: 10.1016/j.gaitpost.2008.12.004. [DOI] [PubMed] [Google Scholar]
  • 49.Li T., Wang L., Li Q., Liu T. Lower-body walking motion estimation using only two shank-mounted inertial measurement units (IMUs); Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics; Boston, MA, USA. 6–9 July 2020; pp. 1143–1148. [Google Scholar]
