Skip to main content
Springer logoLink to Springer
. 2016 Jun 9;121(1):26–64. doi: 10.1007/s11263-016-0916-3

A Unified Framework for Compositional Fitting of Active Appearance Models

Joan Alabort-i-Medina 1,, Stefanos Zafeiriou 1
PMCID: PMC7175667  PMID: 32355408

Abstract

Active appearance models (AAMs) are one of the most popular and well-established techniques for modeling deformable objects in computer vision. In this paper, we study the problem of fitting AAMs using compositional gradient descent (CGD) algorithms. We present a unified and complete view of these algorithms and classify them with respect to three main characteristics: (i) cost function; (ii) type of composition; and (iii) optimization method. Furthermore, we extend the previous view by: (a) proposing a novel Bayesian cost function that can be interpreted as a general probabilistic formulation of the well-known project-out loss; (b) introducing two new types of composition, asymmetric and bidirectional, that combine the gradients of both image and appearance model to derive better convergent and more robust CGD algorithms; and (c) providing new valuable insights into existent CGD algorithms by reinterpreting them as direct applications of the Schur complement and the Wiberg method. Finally, in order to encourage open research and facilitate future comparisons with our work, we make the implementation of the algorithms studied in this paper publicly available as part of the Menpo Project (http://www.menpo.org).

Keywords: Active appearance models, Non-linear optimization, Compositional gradient descent, Bayesian inference, Asymmetric and bidirectional composition, Schur complement, Wiberg algorithm

Introduction

Active appearance models (AAMs) (Cootes et al. 2001; Matthews and Baker 2004) are one of the most popular and well-established techniques for modeling and segmenting deformable objects in computer vision. AAMs are generative parametric models of shape and appearance that can be fitted to images to recover the set of model parameters that best describe a particular instance of the object being modeled.

Fitting AAMs is a non-linear optimization problem that requires the minimization (maximization) of a global error (similarity) measure between the input image and the appearance model. Several approaches (Cootes et al. 2001; Hou et al. 2001; Matthews and Baker 2004; Batur and Hayes 2005; Gross et al. 2005; Donner et al. 2006; Papandreou and Maragos 2008; Liu 2009; Saragih and Göcke 2009; Amberg et al. 2009; Tresadern et al. 2010; Martins et al. 2010; Sauer et al. 2011; Tzimiropoulos and Pantic 2013; Kossaifi et al. 2014; Antonakos et al. 2014) have been proposed to define and solve the previous optimization problem. Broadly speaking, they can be divided into two different groups:

  • Regression based (Cootes et al. 2001; Hou et al. 2001; Batur and Hayes 2005; Donner et al. 2006; Saragih and Göcke 2009; Tresadern et al. 2010; Sauer et al. 2011)

  • Optimization based (Matthews and Baker 2004; Gross et al. 2005; Papandreou and Maragos 2008; Amberg et al. 2009; Martins et al. 2010; Tzimiropoulos and Pantic 2013; Kossaifi et al. 2014)

Regression based techniques attempt to solve the problem by learning a direct function mapping between the error measure and the optimal values of the parameters. Most notable approaches include variations on the original (Cootes et al. 2001) fixed linear regression approach of Hou et al. (2001), Donner et al. (2006), the adaptive linear regression approach of Batur and Hayes (2005), and the works of Saragih and Göcke (2009) and Tresadern et al. (2010) which considerably improved upon previous techniques by using boosted regression. Also, Cootes and Taylor (2001) and Tresadern et al. (2010) showed that the use of non-linear gradient-based and Haar-like appearance representations, respectively, lead to better fitting accuracy in regression based AAMs.

Optimization based methods for fitting AAMs were proposed by Matthews and Baker in Matthews and Baker (2004). These techniques are known as compositional gradient decent (CGD) algorithms and are based on direct analytical optimization of the error measure. Popular CGD algorithms include the very efficient project-out Inverse Compositional (PIC) algorithm (Matthews and Baker 2004), the accurate but costly Simultaneous Inverse Compositional (SIC) algorithm (Gross et al. 2005), and the more efficient versions of SIC presented in Papandreou and Maragos (2008) and Tzimiropoulos and Pantic (2013). Lucey et al. (2013) extended these algorithms to the Fourier domain to efficiently enable convolution with Gabor filters, increasing their robustness; and the authors of Antonakos et al. (2014) showed that optimization based AAMs using non-linear feature based (e.g. SIFT Lowe 1999 and HOG Dalal and Triggs 2005) appearance models were competitive with modern state-of-the-art techniques in non-rigid face alignment (Xiong and De la Torre 2013; Asthana et al. 2013) in terms of fitting accuracy.

AAMs have often been criticized for several reasons: (i) the limited representational power of their linear appearance model; (ii) the difficulty of optimizing shape and appearance parameters simultaneously; and (iii) the complexity involved in handling occlusions. However, recent works in this area (Papandreou and Maragos 2008; Saragih and Göcke 2009; Tresadern et al. 2010; Lucey et al. 2013; Tzimiropoulos and Pantic 2013; Antonakos et al. 2014) suggest that these limitations might have been over-stressed in the literature and that AAMs can produce highly accurate results if appropriate training data (Tzimiropoulos and Pantic 2013), appearance representations (Tresadern et al. 2010; Lucey et al. 2013; Antonakos et al. 2014) and fitting strategies (Papandreou and Maragos 2008; Saragih and Göcke 2009; Tresadern et al. 2010; Tzimiropoulos and Pantic 2013) are employed.

In this paper, we study the problem of fitting AAMs using CGD algorithms thoroughly. Summarizing, our main contributions are:

  • To present a unified and complete overview of the most relevant and recently published CGD algorithms for fitting AAMs (Matthews and Baker 2004; Gross et al. 2005; Papandreou and Maragos 2008; Amberg et al. 2009; Martins et al. 2010; Tzimiropoulos et al. 2012; Tzimiropoulos and Pantic 2013; Kossaifi et al. 2014). To this end, we classify CGD algorithms with respect to three main characteristics: (i) the cost function defining the fitting problem; (ii) the type of composition used; and (iii) the optimization method employed to solve the non-linear optimization problem.

  • To review the probabilistic interpretation of AAMs and propose a novel Bayesian formulation 1 of the fitting problem. We assume a probabilistic model for appearance generation with both Gaussian noise and a Gaussian prior over a latent appearance space. Marginalizing out the latent appearance space, we derive a novel cost function that only depends on shape parameters and that can be interpreted as a valid and more general probabilistic formulation of the well-known project-out cost function (Matthews and Baker 2004). Our Bayesian formulation is motivated by seminal works on probabilistic component analysis and object tracking (Moghaddam and Pentland 1997; Roweis 1998; Tipping and Bishop 1999).

  • To propose the use of two novel types of composition for AAMs: (i) asymmetric; and (ii) bidirectional. These types of composition have been widely used in the related field of parametric image alignment (Malis 2004; Mégret et al. 2008; Autheserre et al. 2009; Mégret et al. 2010) and use the gradients of both image and appearance model to derive better convergent and more robust CGD algorithms.

  • To provide valuable insights into existent strategies used to derive fast and exact simultaneous algorithms for fitting AAMs by reinterpreting them as direct applications of the Schur complement (Boyd and Vandenberghe 2004) and the Wiberg method (Okatani and Deguchi 2006; Strelow 2012).

The remainder of the paper is structured as follows. Section 2 introduces AAMs and reviews their probabilistic interpretation. Section 3 constitutes the main section of the paper and contains the discussion and derivations related to the cost functions Sect. 3.1; composition types Sect. 3.2; and optimization methods Sect. 3.3. Implementation details and experimental results are reported in Sect. 5. Finally, conclusions are drawn in Sect. 6.

Active Appearance Models

AAMs (Cootes et al. 2001; Matthews and Baker 2004) are generative parametric models that explain visual variations, in terms of shape and appearance, within a particular object class. AAMs are built from a collection of images (Fig. 1) for which the spatial position of a sparse set of v landmark points xi=(xi,yi)TR2 representing the shape s=(x1,y1,,xv,yv)TR2v×1 of the object being modeled have been manually defined a priori.

Fig. 1.

Fig. 1

Exemplar images from the Labelled Faces Parts in-the-Wild (LFPW) dataset (Belhumeur et al. 2011) for which a consistent set of sparse landmarks representing the shape of the object being model (human face) has been manually defined (Sagonas et al. 2013a, b)

AAMs are themselves composed of three different models: (i) shape model; (ii) appearance model; and (iii) motion model.

The shape model, which is also referred to as Point Distribution Model (PDM), is obtained by typically applying Principal Component Analysis (PCA) to the set of object’s shapes. The resulting shape model is mathematically expressed as:

s=s¯+i=1npisi=s¯+Sp 1

where s¯R2v×1 is the mean shape, and SR2v×n and pRn×1 denote the shape bases and shape parameters, respectively. In order to allow a particular shape instance s to be arbitrarily positioned in space, the previous model can be augmented with a global similarity transform. Note that this normally requires the initial shapes to be normalized with respect to the same type of transform (typically using Procrustes Analysis (PA)) before PCA is applied. This results in the following expression for each landmark point of the shape model:

xi=sRx¯i+Xip+t 2

where s, RR2×2 and tR2 denote the scale, rotation and translation applied by the global similarity transform, respectively. Using the orthonormalization procedure described in Matthews and Baker (2004) the final expression for the shape model can be compactly written as the linear combination of a set of bases:

s=s¯+i=14pisi+i=1npisi=s¯+Sp 3

where S=(s1,,s4,s1,,sn)R2v×(4+n) and p=(p1,,p4,p1,,pn)TR(4+n)×1 are redefined as the concatenation of the similarity bases si and similarity parameters pi with the original S and p, respectively.

The appearance model is obtained by warping the original images onto a common reference frame (typically defined in terms of the mean shape s¯) and applying PCA to the obtained warped images. Mathematically, the appearance model is defined by the following expression:

A(x)=A¯(x)+i=1mciAi(x) 4

where xΩ denote all pixel positions on the reference frame, and A¯(x), Ai(x) and ci denote the mean texture, the appearance bases and appearance parameters, respectively. Denoting a=vec(A(x)) as the vectorized version of the previous appearance instance, Eq. 4 can be concisely written in vector form as:

a=a¯+Ac 5

where aRF×1 is the mean appearance, and ARF×m and cRm×1 denote the appearance bases and appearance parameters, respectively.

The role of the motion model, denoted by W(x;p), is to extrapolate the position of all pixel positions xΩ from the reference frame to a particular shape instance s (and vice-versa) based on their relative position with respect to the sparse set of landmarks defining the shape model (for which direct correspondences are always known). Classic motion models for AAMs are PieceWise Affine (PWA) (Cootes and Taylor 2004; Matthews and Baker 2004) and thin plate splines (TPS) (Cootes and Taylor 2004; Papandreou and Maragos 2008) warps.

Given an image I containing the object of interest, its manually annotated ground truth shape s, and a particular motion model W(x,p); the two main assumptions behind AAMs are:

  1. The ground truth shape of the object can be well approximated by the shape model
    ss¯+Sp 6
  2. The object’s appearance can be well approximated by the appearance model after the image is warped, using the motion model and the previous shape approximation, onto the reference frame:
    i[p]a¯+Ac 7
    where i[p]=vec(I(W(x;p))) denotes the vectorized version of the warped image. Note that, the warp W(x;p) which explicitly depends on the shape parameters p, relates the shape and appearance models and is a central part of the AAMs formulation.

Because of the explicit use of the motion model, the two previous assumptions provide a concise definition of AAMs. At this point, it is worth mentioning that the vector notation of Eqs. 6 and 7 will be, in general, the preferred notation in this paper.

Probabilistic Formulation

A probabilistic interpretation of AAMs can be obtained by rewriting Eqs. 6 and 7 assuming probabilistic models for shape and appearance generation. In this paper, motivated by seminal works on Probabilistic Component Analysis (PPCA) and object tracking (Tipping and Bishop 1999; Roweis 1998; Moghaddam and Pentland 1997), we will assume probabilistic models for shape and appearance generation with both Gaussian noise and Gaussian priors over the latent shape and appearance spaces2:

s=s¯+Sp+εpN0,ΛεN0,ς2I 8
i[p]=a¯+Ac+ϵcN0,ΣϵN0,σ2I 9

where the diagonal matrices Λ=diag(λs1,,λsm) and Σ=diag(λa1,,λam) contain the eigenvalues associated to shape and appearance eigenvectors respectively and where ς2 and σ2 denote the estimated shape and image noise3 respectively.

This probabilistic formulation will be used to derive Maximum-Likelihood (ML), Maximum A Posteriori (MAP) and Bayesian cost functions for fitting AAMs in Sects. 3.1.1 and 3.1.2.

Fitting Active Appearance Models

Several techniques have been proposed to fit AAMs to images (Cootes et al. 2001; Hou et al. 2001; Matthews and Baker 2004; Batur and Hayes 2005; Gross et al. 2005; Donner et al. 2006; Papandreou and Maragos 2008; Liu 2009; Saragih and Göcke 2009; Amberg et al. 2009; Tresadern et al. 2010; Martins et al. 2010; Sauer et al. 2011; Tzimiropoulos and Pantic 2013; Kossaifi et al. 2014; Antonakos et al. 2014). In this paper, we will center the discussion around compositional gradient descent (CGD) algorithms (Matthews and Baker 2004; Gross et al. 2005; Papandreou and Maragos 2008; Amberg et al. 2009; Martins et al. 2010; Tzimiropoulos and Pantic 2013; Kossaifi et al. 2014) for fitting AAMs. Consequently, we will not review regression based approaches. For more details on these type of methods the interested reader is referred to the existent literature (Cootes et al. 2001; Hou et al. 2001; Batur and Hayes 2005; Donner et al. 2006; Liu 2009; Saragih and Göcke 2009; Tresadern et al. 2010; Sauer et al. 2011.

The following subsections present a unified and complete view of CGD algorithms by classifying them with respect to their three main characteristics: (a) cost function (Sect. 3.1); (b) type of composition (Sect. 3.2); and (c) optimization method (Sect. 3.3).

Cost Function

AAM fitting is typically formulated as the (regularized) search over the shape and appearance parameters that minimize a global error measure between the vectorized warped image and the appearance model:

p,c=argminp,cR(p,c)+D(i[p],c) 10

where D is a data term that quantifies the global error measure between the vectorized warped image and the appearance model and R is an optional regularization term that penalizes complex shape and appearance deformations.

Sum of Squared Differences

Arguably, the most natural choice for the previous data term is the Sum of Squared Differences (SSD) between the vectorized warped image and the linear appearance model4. Consequently, the classic AAM fitting problem is defined by the following non-linear optimization problem5:

p,c=argminp,c12rTr=argminp,c12i[p]-a¯+Ac2D(i[p],c) 11

On the other hand, considering regularization, the most natural choice for R is the sum of 22-norms over the shape and appearance parameters. In this case, the regularized AAM fitting problem is defined as follows:

p,c=argminp,c12||p||2+12||c||2+12rTr=argminp,c12||p||2+12||c||2R(p,c)+12||i[p]-(a¯+Ac)||2D(i[p],c) 12

Probabilistic Formulation

A probabilistic formulation of the previous cost function can be naturally derived using the probabilistic generative models introduced in Sect. 2.1. Denoting the models’ parameters as Θ={s¯,S,Λ,a¯,A,Σ,σ2} a ML formulation can be derived as follows:

p,c=argmaxp,cp(i[p]|p,c,Θ)=argmaxp,clnp(i[p]|p,c,Θ)=argminp,c12σ2||i[p]-(a¯+Ac)||2D(i[p],c) 13

and a MAP formulation can be similarly derived by taking into account the prior distributions over the shape and appearance parameters:

p,c=argmaxp,cp(p,c,i[p]|Θ)=argmaxp,cp(p|Λ)p(c|Σ)p(i[p]|p,c,Θ)=argmaxp,clnp(p|Λ)+lnp(c|Σ)+lnp(i[p]|p,c,Θ)=argminp,c12||p||Λ-12+12||c||Σ-12R(p,c)+12σ2||i[p]-(a¯+Ac)||2D(i[p],c) 14

where we have assumed the shape and appearance parameters to be independent6.

The previous ML and MAP formulations are weighted version of the optimization problem defined by Eqs. 11 and 12. In both cases, the maximization of the conditional probability of the vectorized warped image given the shape, appearance and model parameters leads to the minimization of the data term D and, in the MAP case, the maximization of the prior probability over the shape and appearance parameters leads to the minimization of the regularization term R.

Project-Out

Matthews and Baker showed in Matthews and Baker (2004) that one could express the SSD between the vectorized warped image and the linear PCA-based7 appearance model as the sum of two different terms:

12rTr=12rT(AAT+I-AAT)r=12rT(AAT)r+12rT(I-AAT)r=12i[p]-a¯+AcAAT2+12i[p]-a¯+AcI-AAT2=f1(p,c)+f2(p,c) 15

The first term defines the distance within the appearance subspace and it is always 0 regardless of the value of the shape parameters p:

f1(p,c)=12i[p]-a¯+AcAAT2=12i[p]TAcTATi[p]c-2i[p]TAcTATa¯00-2i[p]TAcTATAIcc+a¯TA0TATa¯00+2a¯TA0TATAIc0+cTATAIcTATAIcc=12(cTc-2cTc+cTc)=0 16

The second term measures the distance to the appearance subspace i.e. the distance within its orthogonal complement. After some algebraic manipulation, one can show that this term reduces to a function that only depends on the shape parameters p:

f2(p,c)=12i[p]-a¯+AcA¯2=12i[p]TA¯i[p]-2i[p]TA¯a¯-2i[p]TA¯Ac0+a¯TA¯a¯+2a¯TA¯Ac0+cTATA¯Ac0=12(i[p]TA¯i[p]-2i[p]TA¯a¯+a¯TA¯a¯)=12i[p]-a¯A¯2 17

where, for convenience, we have defined the orthogonal complement to the appearance subspace as A¯=I-AAT. Note that, as mentioned above, the previous term does not depend on the appearance parameters c:

f2(p,c)=f^2(p)=12i[p]-a¯A¯2 18

Therefore, using the previous project-out trick, the minimization problems defined by Eqs. 11 and 12 reduce to:

p=argminp12||i[p]-a¯||A¯2D(i[p]) 19

and

p=argminp12||p||2R(p)+12||i[p]-a¯||A¯2D(i[p]) 20

respectively.

Probabilistic Formulation

Assuming the probabilistic models defined in Sect. 2.1, a Bayesian formulation of the previous project-out data term can be naturally derived by marginalizing over the appearance parameters to obtain the following marginalized density:

p(i[p]|p,Θ)=cp(i[p]|p,c,Θ)p(c|Σ)dc=N(a¯,AΣAT+σ2I) 21

and applying the Woodbury formula8 Woodbury (1950) to decompose the natural logarithm of the previous density into the sum of two different terms:

lnp(i[p]|p,Θ)=12||i[p]-a¯||(AΣAT+σ2I)-12=12||i[p]-a¯||AD-1AT2+12σ2||i[p]-a¯||A¯2 22

where D=diag(λa1+σ2,,λam+σ2).

As depicted by Fig. 2, the previous two terms define respectively: (i) the Mahalanobis distance within the linear appearance subspace; and (ii) the Euclidean distance to the linear appearance subspace (i.e. the Euclidean distance within its orthogonal complement) weighted by the inverse of the estimated image noise. Note that when the variance Σ of the prior distribution over the latent appearance space increases (and especially as Σ) c becomes uniformly distributed and the contribution of the first term 12||i[p]-a¯||AD-1AT2 vanishes; in this case, we obtain a weighted version of the project-out data term defined by Eq. 19. Hence, given our Bayesian formulation, the project-out loss arises naturally by assuming a uniform prior over the latent appearance space.

Fig. 2.

Fig. 2

The fits AAMs by minimizing two different distances: (i) the Mahalanobis distance within the linear appearance subspace; and (ii) the Euclidean distance to the linear appearance subspace (i.e. the Euclidean distance within its orthogonal complement) weighted by the inverse of the estimated image noise

The probabilistic formulations of the minimization problems defined by Eqs. 19 and 20 can be derived, from the previous Bayesian Project-Out (BPO) cost function, as

p=argmaxplnp(i[p]|p,Θ)=argminp12σ2||i[p]-a¯||Q2D(i[p]) 23

and

p=argmaxpp(p,i[p]|Θ)=argmaxpp(p|Λ)p(i[p]|p,Θ)=argmaxplnp(p|Λ)+lnp(i[p]|p,Θ)=argminp12||p||Λ-12R(p)+12σ2||i[p]-a¯||Q2D(i[p]) 24

respectively. Where we have defined the BPO operator as Q=I-A(I-σ2D-1)AT.

Type of Composition

Assuming, for the time being, that the true appearance parameters c are known, the problem defined by Eq. 11 reduces to a non-rigid image alignment problem (Baker and Matthews 2004; Muñoz et al. 2014) between the particular instance of the object present in the image and its optimal appearance reconstruction by the appearance model:

p=argminp12i[p]-a2 25

where a=a¯+Ac is obtained by directly evaluating Eq. 4 given the true appearance parameters c.

CGD algorithms iteratively solve the previous non-linear optimization problem with respect to the shape parameters p by:

  1. Introducing an incremental warp W(x;Δp) according to the particular composition scheme being used.

  2. Linearizing the previous incremental warp around the identity warp W(x;Δp)=W(x;0)=x.

  3. Solving for the parameters Δp of the incremental warp.

  4. Updating the current warp estimate by using an appropriate compositional update rule.

  5. Going back to Step 1 until a particular convergence criterion is met.

Existent CGD algorithms for fitting AAMs have introduced the incremental warp either on the image or the model sides in what are known as forward and inverse compositional frameworks (Matthews and Baker 2004; Gross et al. 2005; Papandreou and Maragos 2008; Amberg et al. 2009; Martins et al. 2010; Tzimiropoulos and Pantic 2013) respectively. Inspired by related works in field of image alignment (Malis 2004; Mégret et al. 2008; Autheserre et al. 2009; Mégret et al. 2010), we notice that novel CGD algorithms can be derived by introducing incremental warps on both image and model sides simultaneously. Depending on the exact relationship between these incremental warps we define two novel types of composition: asymmetric and bidirectional.

The following subsections explain how to introduce the incremental warp into the cost function and how to update the current warp estimate for the four types of composition considered in this paper: (i) forward; (ii) inverse; (iii) asymmetric; and (v) bidirectional. These subsections will be derived using the non-regularized expression in Eq. 11 and the regularized expression in Eq. 14. Furthermore, to maintain consistency with the vector notation used through out the paper, we will abuse the notation and write the operations of warp composition9 W(x;p)W(x;Δp) and inversion1. W(x;q)-1 as simply pΔp and q-1 respectively.

Forward

In the forward compositional framework the incremental warp Δp is introduced on the image side at each iteration by composing it with the current warp estimate p. For the non-regularized case in Eq. 11 this leads to:

Δp=argminΔp12i[pΔp]-(a¯+Ac)2 26

Once the optimal values for the parameters of the incremental warp are obtained, the current warp estimate is updated according to the following compositional update rule:

ppΔp 27

On the other hand, using Eq. 14, forward composition can be expressed as:

Δp=argminΔp12σ2||i[pΔp]-(a¯+Ac)||2+12||p||Λ-12+||c||Σ-12 28

Because of the inclusion of the prior term over the shape parameters 12||p||Λ-12, we cannot update the current warp estimate using the update rule in Eq. 27. Instead, as noted by Papandreou and Maragos in Papandreou and Maragos (2008), we need to compute the forward compositional to forward additive parameter update Jacobian matrix JpR(4+n)×(4+n) 10. This matrix is used to map the forward compositional increment Δp to its first order additive equivalent JpΔp. In this case, the current estimate of the warp is computed using the following update rule:

graphic file with name 11263_2016_916_Equ29_HTML.gif 29

where H denotes the approximate or true Hessians of the residual ||i[pΔp]-(a+Ac)||2 with respect to the incremental parameters Δp and Δp itself is the optimal solution of the non-regularized problem in Eq. 26. Note that, in Sect. 3.3, we derive H for all the optimization methods studied in this paper.

Inverse

On the other hand, the inverse compositional framework inverts the roles of the image and the model by introducing the incremental warp on the model side. Using Eq. 11:

Δq=argminΔq12||i[p]-(a¯+Ac)[Δq]||2 30

Note that, in this case, the model is the one we seek to deform using the incremental warp.

Because the incremental warp is introduced on the model side, the solution Δq needs to be inverted before it is composed with the current warp estimate:

ppΔq-1 31

Simarly, using the regularized expression in Eq. 14, inverse compositon is expressed as:

Δq=argminΔq12σ2||i[p]-(a¯+Ac)[Δq]||2+12||p||Λ-12+||c||Σ-12 32

And the update of the current warp estimate is obtained using:

graphic file with name 11263_2016_916_Equ33_HTML.gif 33

where, in this case, Jq denotes the inverse compositional to forward additive parameter update Jacobian matrix JqR(4+n)×(4+n) originally derived by Papandreou and Maragos (2008).

Asymmetric

Asymmetric composition introduces two related incremental warps onto the cost function; one on the image side (forward) and the other on the model side (inverse). Using Eq. 11 this is expressed as:

Δp=argminΔp12||i[pαΔp]-(a¯+Ac)[βΔp-1]||2 34

Note that the previous two incremental warps are defined to be each others inverse. Consequently, using the first order approximation to warp inversion for typical AAMs warps Δp-1=-Δp defined in Matthews and Baker (2004), we can rewrite the previous asymmetric cost function as:

Δp=argminΔp12||i[pαΔp]-(a¯+Ac)[-βΔp||2 35

Although this cost function will need to be linearized around both incremental warps, the parameters Δp controlling both warps are the same. Also, note that the parameters α[0,1] and β=(1-α) control the relative contribution of both incremental warps in the computation of the optimal value for Δp.

In this case, the update rule for the current warp estimate is obtained by combining the previous forward and inverse compositional update rules into a single compositional update rule:

ppαΔpβΔp 36

In this case, using Eq. 14, asymmetric compositon is expressed as:

Δp=argminΔq12σ2||i[pαΔp]-(a¯+Ac)[-βΔp]||2+12||p||Λ-12+||c||Σ-12 37

And the current warp estimate is updates using:

pΛ-1+JpHJp-1-1JpHJp-1p+αJpΔp+βJpΔp 38

which reduces the forward update rule in Eq. 29 because α+β=1.

Note that, the special case in which α=β=0.5 is also referred to as symmetric composition (Mégret et al. 2008; Autheserre et al. 2009; Mégret et al. 2010) and that the previous forward and inverse compositions can also be obtained from asymmetric composition by setting α=1 , β=0 and α=0 , β=1 respectively.

Bidirectional

Similar to the previous asymmetric composition, bidirectional composition also introduces incremental warps on both image and model sides. However, in this case, the two incremental warps are assumed to be independent from each other. Based on Eq. 11:

Δp,Δq=argminΔp,Δq12||i[pΔp]-(a¯+Ac)[Δq]||2 39

Consequently, in Step 4, the cost function needs to be linearized around both incremental warps and solved with respect to the parameters controlling both warps, Δp and Δq.

Once the optimal value for both sets of parameters is recovered, the current estimate of the warp is updated using:

ppΔpΔq-1 40

For Eq. 14, bidirectional compositon is written as:

Δp,Δq=argminΔp,Δq12σ2||i[pΔp]-(a¯+Ac)[Δq]||2+12||p||Λ-12+||c||Σ-12 41

And, in this case, the update rule for the current warp estimate is:

pΛ-1+JpHJp+JqHJq-1-1JpHJp+JqHJq-1p+JpΔp+JqΔq 42

which reduces the forward update rule in Eq. 29 because α+β=1.

Optimization Method

Step 2 and 3 in CGD algorithms, i.e. linearizing the cost and solving for the incremental warp respectively, depend on the specific optimization method used by the algorithm. In this paper, we distinguish between three main optimization methods11: (i) Gauss-Newton (Boyd and Vandenberghe 2004; Matthews and Baker 2004; Gross et al. 2005; Martins et al. 2010; Papandreou and Maragos 2008; Tzimiropoulos and Pantic 2013; ii) Newton (Boyd and Vandenberghe 2004; Kossaifi et al. 2014); and (iii) Wiberg (Okatani and Deguchi 2006; Strelow 2012; Papandreou and Maragos 2008; Tzimiropoulos and Pantic 2013).

These methods can be used to iteratively solve the non-linear optimization problems defined by Eqs. 14 and 22. The main differences between them are:

  1. The term being linearized. Gauss-Newton and Wiberg linearize the residual r while Newton linearizes the whole data term D.

  2. The way in which each method solves for the incremental parameters Δc, Δp and Δq. Gauss-Newton and Newton can either solve for them simultaneously or in an alternated fashion while Wiberg defines its own procedure to solve for different sets of parameters12.

The following subsections thoroughly explain how the previous optimization methods are used in CGD algorithms. In order to simplify their comprehension full derivations will be given for all methods using the SSD data term (Eq. 11) with both asymmetric (Sect. 3.2.3) and bidirectional (Sect. 3.2.4) compositions13 while only direct solutions will be given for the Project-Out data term (Eq. 19). Note that, in Sect. 3.2, we already derived update rules for the regularized expression in Eq. 14 and, consequently, there is no need to consider regularization throughout this section.14

Gauss-Newton

When asymmetric composition is used, the optimization problem defined by the SSD data term is:

Δc,Δp=argminΔc,Δp12raTra 43

with the asymmetric residual ra defined as:

ra=i[pαΔp]-(a¯+A(c+Δc))[βΔp-1] 44

and where we have introduced the incremental appearance parameters Δc 15. The Gauss-Newton method solves the previous optimization problem by performing a first order Taylor expansion of the residual:

ra(Δ)r^a(Δ)ra+raΔΔ 45

and solving the following approximation of the original problem:

Δ=argminΔ12r^aTr^a 46

where, in order to unclutter the notation, we have defined Δ=(ΔcT,ΔpT)T and the partial derivative of the residual with respect to the previous parameters, i.e. the Jacobian of the residual, is defined as:

raΔ=raΔc,raΔp=-A,tWΔp=-A,Jt 47

where t=αi[p]+β(a¯+Ac).

When bidirectional composition is used, the optimization problem is defined as:

Δc,Δp,Δq=argminΔc,Δp,Δq12rbTrb 48

where the bidirectional residual rb reduces to:

rb=i[pΔp]-(a¯+A(c+Δc))[Δq] 49

The Gauss-Newton method proceeds in exactly the same manner as before, i.e. performing a first order Taylor expansion:

rb(Δ)r^b(Δ)rb+rbΔΔ 50

and solving the approximated problem:

Δ=argminΔ12r^bTr^b 51

where, in this case, Δ=(ΔcT,ΔpT,ΔqT)T and the Jacobian of the residual is defined as:

rbΔ=rbΔc,rbΔp,rbΔq=-A,Ji,-Ja 52

where Ji=i[p]WΔp and Ja=(a¯+Ac)WΔq.

Simultaneous

The optimization problem defined by Eqs. 46 and 51 can be solved with respect to all parameters simultaneously by simply equating their derivative to 0:

0=12r^Tr^Δ=12(r+rΔΔ)T(r+rΔΔ)Δ=r+rΔΔrΔT 53

The solution is given by:

Δ=-rΔTrΔ-1rΔTr 54

where rΔTrΔ is known as the Gauss-Newton approximation to the Hessian matrix.

Directly inverting rΔTrΔ has complexity16 O((n+m)3) for asymmetric composition and O((2n+m)3) for bidirectional composition. However, one can take advantage of the problem structure and derive an algorithm with smaller complexity by using the Schur complement 17 (Boyd and Vandenberghe 2004).

For asymmetric composition we have:

graphic file with name 11263_2016_916_Equ55_HTML.gif 55

Applying the Schur complement, the solution for Δp is given by:

-(JtTJt+JtTAATJtT)Δp=JtTr-JtTAATra-JtT(I-AAT)JtΔp=JtT(I-AAT)ra-JtTA¯JtΔp=JtTA¯raΔp=-JtTA¯Jt-1JtTA¯ra 56

and plugging the solution for Δp into Eq. 55 the optimal value for Δc is obtained by:

-Δc+ATJtΔp=-ATraΔc=ATra+JtΔp 57

Using the above procedure the complexity17 of solving each Gauss-Newton step is reduced to:

O(nmFJtTA¯+n2F+n3JtTA¯Jt-1) 58

Using bidirectional composition, we can apply the Schur complement either one or two times in order to take advantage of the 3×3 block structure of the matrix rbΔTrbΔ:

-rbΔTrbΔΔ=rbΔTrb-rbΔTrbΔΔcΔpΔq=-ATJiT-JaTrb 59

where

-rbΔTrbΔ=-ATAIATJi-ATJaJiTA-JiTJiJiTJa-JaTAJaTJi-JaTJa 60

Applying the Schur complement once, the combined solution for (ΔpT,ΔqT)T is given by:

-JiTA¯JiJiTA¯JaJaTA¯Ji-JaTA¯JaΔpΔq=JiTA¯-JaTA¯rbΔpΔq=-JiTA¯JiJiTA¯JaJaTA¯Ji-JaTA¯Ja-1JiTA¯-JaTA¯rb 61

Note that the complexity of inverting this new approximation to the Hessian matrix is O((2n)3).18 Similar to before, plugging the solutions for Δp and Δq into Eq. 60 we can infer the optimal value for Δc using:

Δc=ATrb-JiΔp+JaΔq 62

The total complexity per iteration of the previous approach is:

O(2nmFJiTA¯-JaTA¯+(2n)2F+(2n)3-JiTA¯JiJiTA¯JaJaTA¯Ji-JaTA¯Ja-1) 63

The Schur complement can be re-applied to Eq. 61 to derive a solution for Δq that only requires inverting a Hessian approximation matrix of size n×n:

JaTPJaΔq=JaTPrbΔq=JaTPJa-1JaTPrb 64

where we have defined the projection matrix P as:

P=A¯-A¯JiJiTA¯Ji-1JiTA¯ 65

and the solutions for Δp and Δc can be obtained by plugging the solutions for Δq into Eq. 61 and the solutions for Δq and Δp into Eq. 60 respectively:

Δp=-JiTA¯Ji-1JiTA¯rb-JaΔqΔc=ATrb+JiΔp-JaΔq 66

The total complexity per iteration of the previous approach reduces to:

O(2nmFJaTP&JiTA¯+2n2F+2n3JaTPJa-1&JiTA¯Ji-1) 67

Note that because of their reduced complexity, the solutions defined by Eqs. 64 and 66 are preferred over the ones defined by Eqs. 61 and 62.

Finally, the solutions using the Project-Out cost function are:

  • For asymmetric composition:
    Δp=-JtTA¯Jt-1JtTA¯r 68
    with complexity19 given by Eq. 58.
  • For bidirectional composition:
    Δq=Ja¯TPJa¯-1Ja¯TPrΔp=-JiTA¯Ji-1JiTA¯r-JaΔq 69
    with complexity20 given by Eq. 67.

where, in both cases, r=i[p]-a¯.

Alternated

Another way of solving optimization problems with two or more sets of variables is to use alternated optimization (De la Torre 2012). Hence, instead of solving the previous problem simultaneously with respect to all parameters, we can update one set of parameters at a time while keeping the other sets fixed.

More specifically, using asymmetric composition we can alternate between updating Δc given the previous Δp and then update Δp given the updated Δc in an alternate manner. Taking advantage of the structure of the problem defined by Eq. 55, we can obtain the following system of equations:

-Δc+ATJtΔp=-ATraJtTAΔc-JtTJtΔp=JtTra 70

which we can rewrite as:

Δc=ATra+JtΔpΔp=-JtTJt-1JtTra-AΔc 71

in order to obtain the analytical expression for the previous alternated update rules. The complexity at each iteration is dominated by:

O(n2F+n3(JtTJt)-1) 72

In the case of bidirectional composition we can proceed in two different ways: (a) update Δc given the previous Δp and Δq and then update (ΔpT,ΔqT)T from the updated Δc, or (b) update Δc given the previous Δp and Δq, then Δp given the updated Δc and the previous Δq and, finally, Δq given the updated Δc and Δp.

From Eq. 60, we can derive the following system of equations:

-Δc+ATJiΔp-ATJaΔq=-ATrbJiTAΔc-JiTJiΔp+JiTJaΔq=JiTrb-JaTAΔc+JaTJiΔp-JaTJaΔq=-JaTrb 73

from which we can define the alternated update rules for the first of the previous two options:

Δc=ATrb+JiΔp-JaΔqΔpΔq=-JiTJiJiTJaJaTJi-JaTJa-1JiT-JaTrb-AΔc 74

with complexity:

O((2n)2F+(2n)3-JiTJiJiTJaJaTJi-JaTJa-1) 75

The rules for the second option are:

Δc=ATrb+JiΔp-JaΔqΔp=-(JiTJi)-1JiTrb-AΔc-JaΔqΔq=(JaTJa)-1JaTrb-AΔc+JiΔp 76

and their complexity is dominated by:

O(2n2F+2n3(JiTJi)-1&(JaTJa)-1) 77

On the other hand, the alternated update rules using the Project-Out cost function are:

  • For asymmetric composition: There is no proper alternated rule because the Project-Out cost function only depends on one set of parameters, Δp.

  • For bidirectional composition:
    Δq=Ja¯TA¯Ja¯-1Ja¯TA¯r+JiΔpΔp=-JiTA¯Ji-1JiTA¯r-JaΔq 78
    with equivalent complexity to the one given by Eq. 58 because, in this case, the term Ja¯TA¯Ja¯-1 Ja¯TA¯ can be completely precomputed.

Note that all previous alternated update rules, Eqs. 717476 and 107, are similar but slightly different from their simultaneous counterparts, Eqs. 56 and 5761 and 6264 and 66, and 69.

Newton

The Newton method performs a second order Taylor expansion of the entire data term D:

D(Δ)D^(Δ)D+DΔΔ+12ΔT2DΔ2Δ 79

and solves the approximate problem:

Δ=argminΔD^ 80

Assuming asymmetric composition, the previous data term is defined as:

Da(Δ)=12raTra 81

and the matrix containing the first order partial derivatives with respect to the parameters, i.e. the data term’s Jacobian, can be written as:

DaΔ=DaΔc,DaΔp=-ATra,JtTra 82

On the other hand, the matrix 2DaΔ2 of the second order partial derivatives, i.e. the Hessian of the data term, takes the following form:

2DaΔ2=2DaΔc22DaΔcΔp2DaΔpΔc2DaΔp2=2DaΔc22DaΔcΔp2DaΔcΔpT2DaΔp2 83

Note that the Hessian matrix is, by definition, symmetric. The definition of its individual terms is provided in Appendix 2(a).

A similar derivation can be obtained for bidirectional composition where, as expected, the data term is defined as:

Db(Δ)=12rbTrb 84

In this case, the Jacobian matrix becomes:

DbΔ=DbΔc,DbΔp,DbΔq=-ATra,JiTra,-JaTra 85

and the Hessian matrix takes the following form:

2DbΔ2=2DbΔc22DbΔcΔp2DbΔcΔq2DbΔpΔc2DbΔp22DbΔpΔq2DbΔqΔc2DbΔqΔp2DbΔq2=2DbΔc22DbΔcΔp2DbΔcΔq2DbΔcΔpT2DbΔp22DbΔpΔq2DbΔcΔqT2DbΔpΔqT2DbΔq2 86

Notice that the previous matrix is again symmetric. The definition of its individual terms is provided in Appendix 2(a).

Simultaneous

Using the Newton method we can solve for all parameters simultaneously by equating the partial derivative of Eq. 80 to 0:

0=D^Δ=D+DΔΔ+12ΔT2D2ΔΔΔ=DΔ+2DΔ2Δ 87

with the solution given by:

Δ=-2DΔ2-1DΔ 88

Note that, similar to the Gauss-Newton method, the complexity of inverting the Hessian matrix 2DΔ2 is O((n+m)3) for asymmetric composition and O((2n+m)3) for bidirectional composition. As shown by Kossaifi et al. (2014)20, we can take advantage of the structure of the Hessian in Eqs. 83 and 86 and apply the Schur complement to obtain more efficient solutions.

The solutions for Δp and Δc using asymmetric composition are given by the following expressions:

Δp=2DaΔp2-2DaΔpΔc2DaΔcΔp-1DaΔp-2DaΔpΔcDaΔcΔc=DaΔc-2DaΔcΔpΔp 89

with complexity:

O(nmF2DaΔpΔc+n2m2DaΔpΔc2DaΔcΔp+2n2F2DaΔp2+n3H-1) 90

where we have defined H=2DaΔp2-2DaΔpΔc2DaΔcΔp-1 in order to unclutter the notation.

On the other hand, the solutions for bidirectional composition are given either by:

ΔpΔq=VWTWU-1vuΔc=DbΔc-2DbΔcΔpΔp-2DbΔcΔqΔq 91

or

Δq=U-WV-1WT-1u-WV-1vΔp=V-1v-WTΔqΔc=DbΔc-2DbΔcΔpΔp-2DbΔcΔqΔq 92

where we have defined the following auxiliary matrices

V=2DbΔp2-2DbΔpΔc2DbΔcΔpW=2DbΔqΔp-2DbΔqΔc2DbΔcΔpU=2DbΔq2-2DbΔqΔc2DbΔcΔq 93

and vectors

v=DbΔp-2DbΔpΔcDbΔcu=DbΔq-2DbΔqΔcDbΔc 94

The complexity of the previous solutions is of:

O(nmFv+2nmFu+4n2F+2n2mU&V+2n2F+n2mW+(2n)3VWTWU-1) 95

and

O(nmFv+2nmFu+4n2F+2n2mU&V+2n2F+n2mW+4n3V-1&U-WV-1WT-1) 96

respectively.

The solutions using the Project-Out cost function are:

  • For asymmetric composition:
    Δp=-WΔpT2tWΔpA¯r+JtTA¯Jt-1JtTA¯r 97
    with complexity21 given by Eq. 90.
  • For bidirectional composition:
    Δq=WΔpT2a¯WΔpA¯r+Ja¯TP~Ja¯-1Ja¯TP~rΔp=-Hi-1JiTA¯r-JaΔq 98
    where the projection operator P~ is defined as:
    P~=A¯-A¯JiTHi-1JiA¯T 99
    and where we have defined:
    Hi=WΔpT2i[p]WΔpA¯r+JiTA¯Ji 100
    to unclutter the notation. The complexity per iteration22 is given by Eq. 96.

Note that, the derivations of the previous solutions, for both types of composition, are analogous to the ones shown in Sect. 3.3.1 for the Gauss-Newton method and, consequently, have been omitted here.

Alternated

Alternated optimization rules can also be derived for the Newton method following the strategy shown in Sect. 3.3.1 for the Gauss-Newton case. Again, we will simply provide update rules and computational complexity for both types of composition and will omit the details of their full derivation.

For asymmetric composition the alternated rules are defined as:

Δc=DaΔc-2DaΔcΔpΔpΔp=2DaΔp2-1DaΔp-2DaΔpΔcΔc 101

with complexity:

O(nmF2DaΔpΔc+2n2F+n32DaΔp2-1) 102

The alternated rules for bidirectional composition case are given either by:

Δc=DbΔc-2DbΔcΔpΔp-2DbΔcΔqΔqΔpΔq=2DbΔp22DbΔpΔq2DbΔqΔp2DbΔp2-1DbΔp-2DbΔpΔcΔcDbΔq-2DbΔqΔcΔc 103

with complexity:

O(nmF2DΔpΔp+4n2F2DΔp2&2DΔq2+(2n)32DbΔp22DbΔpΔq2DbΔqΔp2DbΔp2-1) 104

or:

Δc=DbΔc-2DbΔcΔpΔp-2DbΔcΔqΔqΔp=2DbΔp2-1DbΔp-2DbΔpΔcΔc-2DbΔpΔqΔqΔq=2DbΔq2-1DbΔq-2DbΔqΔcΔc-2DbΔqΔpΔp 105

with complexity:

O(nmF2DΔpΔp+4n2F2DΔp2&2DΔq2+2n32DbΔp2-1&2DbΔq2-1) 106

On the other hand, the alternated update rules for the Newton method using the project-out cost function are:

  • For asymmetric composition: Again, there is no proper alternated rule because the project-out cost function only depends on one set of parameters, Δp.

  • For bidirectional composition:
    Δq=Ha-1Ja¯TA¯r+JiΔpΔp=-Hi-1JiTA¯r-JaΔq 107
    where we have defined:
    Ha=WΔpT2a¯WΔpA¯r+Ja¯TA¯Ja¯ 108
    and the complexity at every iteration is given by the following expression complexity:
    O(nmFJiTA¯+3n2F+2n3Hi-1&Ha-1) 109

Note that Newton algorithms are true second order optimizations algorithms with respect to the incremental warps. However, as shown in this section, this property comes at expenses of a significant increase in computational complexity with respect to (first order) Gauss-Newton algorithms. In Appendix 1, we show that some of the Gauss-Newton algorithms derived in Sect. 3.3.1, i.e. the Asymmetric Gauss-Newton algorithms, are, in fact, true Efficient Second order Minimization (ESM) algorithms that effectively circumvent thie previous increase in computational complexity.

Wiberg

The idea behind the Wiberg method is similar to the one used by the alternated Gauss-Newton method in Sect. 3.3.1, i.e. solving for one set of parameters at a time while keeping the other sets fixed. However, Wiberg does so by rewriting the asymmetric ra(Δc,Δp) and bidirectional rb(Δc,Δp,Δq) residuals as functions that only depend on Δp and Δq respectively.

For asymmetric composition, the residual r¯a(Δp) is defined as follows:

r¯a(Δp)=ra(Δc¯,Δp)=i[pαΔp]-(a¯+A(c+Δc¯a))[βΔp] 110

where the function Δc¯a(Δp) is obtained by solving for Δc while keeping Δp fixed:

Δc¯a(Δp)=ATra 111

Given the previous residual, the Wiberg method proceeds to define the following optimization problem with respect to Δp:

Δp=argminΔpr¯aTr¯a 112

which then solves approximately by performing a first order Taylor of the residual around the incremental warp:

Δp=argminΔpr¯a(Δp)+r¯aΔpΔp2 113

In this case, the Jacobian r¯Δp can be obtain by direct application of the chain rule and it is defined as follows:

dr¯adΔp=r¯aΔp+r¯aΔc¯aΔc¯aΔp=Jt-AATJt=A¯Jt 114

The solution for Δp is obtained as usual by equating the derivative of 112 with respect to Δp to 0:

Δp=-A¯JtTA¯Jt-1A¯JtTr¯a=-JtTA¯Jt-1JtTA¯r¯a 115

where we have used the fact that the matrix A¯ is idempotent22.

Therefore, the Wiberg method solves explicitly, at each iteration, for Δp using the previous expression and implicitly for Δc (through Δc¯a(Δp)) using Eq. 111. The complexity per iteration of the Wiberg method is the same as the one of the Gauss-Newton method after applying the Schur complement, Eq. 58. In fact, note that the Wiberg solution for Δp (Eq. 115) is the same as the one of the Gauss-Newton method after applying the Schur complement, Eq. 56; and also note the similarity between the solutions for Δc of both methods, Eqs. 111 and 57. Finally, note that, due to the close relation between the Wiberg and Gauss-Newton methods, Asymmetric Wiberg algorithms are also ESM algorithms for fitting AAMs.

On the other hand, for bidirectional composition, the residual r¯b(Δp) is defined as:

r¯b(Δq)=rb(Δc¯b,Δp¯b,Δq)=i[pΔp¯b]-(a¯-A(c+Δc¯b))[Δq] 116

where, similarly as before, the function Δc¯b(Δp,Δq) is obtained solving for Δc while keeping both Δp and Δq fixed:

Δc¯b(Δp,Δq)=ATrb 117

and the function Δp¯b(Δc¯b,Δq) is obtained by solving for Δp using the Wiberg method while keeping Δq fixed:

Δp¯b(Δc¯b,Δq)=-JiTA¯Ji-1JiTA¯r¯b 118

At this point, the Wiberg method proceeds to define the following optimization problem with respect to Δq:

Δq=argminΔqr¯bTr¯b 119

which, as before, then solves approximately by performing a first order Taylor expansion around Δq:

Δq=argminΔqr¯b(Δq)+r¯bΔqΔq2 120

In this case, the Jacobian of the residual can also be obtained by direct application of the chain rule and takes the following form:

dr¯bdΔq=r¯bΔq+r¯bΔp¯bΔp¯bΔq+r¯bΔc¯b+r¯bΔp¯bΔp¯bΔc¯Δc¯bΔq=-Ja+JiJiTA¯Ji-1JiTA¯Ja+A-JiJiTA¯Ji-1JiTA¯AATJa=-Ja+AATJa+JiJiTA¯Ji-1JiTA¯Ja-JiJiTA¯Ji-1JiTA¯AATJa=-I-AATJa+JiJiTA¯Ji-1JiTA¯I-AATJa=-A¯Ja+JiJiTA¯Ji-1JiTA¯A¯Ja=-I+JiJiTA¯Ji-1JiTA¯A¯Ja=-PJa 121

And, again, the solution for Δq is obtained as usual by equating the derivative of 120 with respect to Δq to 0:

Δq=PJtTPJt-1PJtTr¯a 122

In this case, the Wiberg method solves explicitly, at each iteration, for Δp using the previous expression and implicitly for Δp and Δc (through Δp¯b(Δc¯b,Δq) and Δc¯b(Δp,Δq)) using Eqs. 118 and 117 respectively. Again, the complexity per iteration is the same as the one of the Gauss-Newton method after applying the Schur complement, Eq. 67; and the solutions for both methods are almost identical, Eqs. 122118 and 117 and Eqs. 6162 and 64.

On the other hand, the Wiberg solutions for the project-out cost function are:

  • For asymmetric composition: Because the project-out cost function only depends on one set of parameters, Δp, in this case Wiberg reduces to Gauss-Newton.

  • For bidirectional composition:
    Δp=-JiTA¯Ji-1JiTA¯rΔq=Ja¯TPJa¯-1Ja¯TPr 123
    Again, in this case, the solutions obtained with the Wiberg method are almost identical to the ones obtained using Gauss-Newton after applying the Schur complement, Eq. 69.

Relation to Prior Work

In this section we relate relevant prior work on CGD algorithms for fitting AAMs (Matthews and Baker 2004; Gross et al. 2005; Papandreou and Maragos 2008; Amberg et al. 2009; Martins et al. 2010; Tzimiropoulos and Pantic 2013; Kossaifi et al. 2014) to the unified and complete view introduced in the previous section.

Project-Out algorithms

In their seminal work (2004), Matthews and Baker proposed the first CGD algorithm for fitting AAMs, the so-called Project-out Inverse Compositional (PIC) algorithm. This algorithm uses Gauss-Newton to solve the optimization problem posed by the project-out cost function using inverse composition. The use of the project-out norm removes the need to solve for the appearance parameters and the use of inverse composition allows for the precomputation of the pseudo-inverse of the Jacobian with respect to Δp, i.e. Ja¯TA¯Ja¯-1Ja¯A¯. The PIC algorithm is very efficient (O(nF)) but it has been shown to perform poorly in generic and unconstrained scenarios (Gross et al. 2005; Papandreou and Maragos 2008). In this paper, we refer to this algorithm as the Project-Out Inverse Gauss-Newton algorithm.

The forward version of the previous algorithm, i.e. the Project-Out Forward Gauss-Newton algorithm, was proposed by Amberg et al. in 2009. In this case, the use of forward composition prevents the precomputation of the Jacobian pseudo-inverse and its complexity increases to O(nmF+n2F+n3). However, this algorithm has been shown to largely outperform its inverse counterpart, and obtains good performance under generic and unconstrained conditions (Amberg et al. 2009; Tzimiropoulos and Pantic 2013).23

To the best of our knowledge, the rest of Project-Out algorithms derived in Sect. 3, i.e.:

  • Project-Out Forward Newton

  • Project-Out Inverse Newton

  • Project-Out Asymmetric Gauss-Newton

  • Project-Out Asymmetric Newton

  • Project-Out Bidirectional Gauss-Newton Schur

  • Project-Out Bidirectional Gauss-Newton Alternated

  • Project-Out Bidirectional Newton Schur

  • Project-Out Bidirectional Newton Alternated

  • Project-Out Bidirectional Wiberg

have never been published before and are a significant contribution of this work.

SSD algorithms

In Gross et al. (2005) Gross et al. presented the Simultaneous Inverse Compositional (SIC) algorithm and show that it largely outperforms the Project-Out Inverse Gauss-Newton algorithm in terms of fitting accuracy. This algorithm uses Gauss-Newton to solve the optimization problem posed by the SSD cost function using inverse composition. In this case, the Jacobian with respect to Δp, depends on the current value of the appearance parameters and needs to be recomputed at every iteration. Moreover, the inclusion of the Jacobian with respect to the appearance increments δc, increases the size of the simultaneous Jacobian to rΔ=-A,-JaRF×(m+n) and, consequently, the computational cost per iteration of the algorithm is O((m+n)2F+(m+n)3).

As we shown in Sections 3.3.13.3.1 and 3.3.3 the previous complexity can be dramatically reduced by taking advantage of the problem structure in order to derive more efficient and exact algorithm by: (a) applying the Schur complement; (b) adopting an alternated optimization approach; or (c) or using the Wiberg method. Papandreou and Maragos (2008) proposed an algorithm that is equivalent to the solution obtained by applying the Schur complement to the problem, as described in Sect. 3.3.1. The same algorithm was reintroduced in Tzimiropoulos and Pantic (2013) using a somehow ad-hoc derivation (reminiscent of the Wiberg method) under the name Fast-SIC. This algorithm has a computational cost per iteration of O(nmF+n2F+n3). In this paper, following our unified view on CGD algorithm, we refer to the previous algorithm as the SSD Inverse Gauss-Newton Schur algorithm. The alternated optimization approach was used in Tzimiropoulos et al. (2012) and Antonakos et al. (2014) with complexity O(n2F+n3) per iteration. We refer to it as the SSD Inverse Gauss-Newton Alternated algorithm.

On the other hand, the forward version of the previous algorithm was first proposed by Martins et al. in (2010).24 In this case, the Jacobian with respect to Δp depends on the current value of the shape parameters p through the warped image i[p] and also needs to be recomputed at every iteration. Consequently, the complexity if the algorithm is the same as in the naive inverse approach of Gross et al. In this paper, we refer to this algorithm as the SSD Forward Gauss-Newton algorithm. It is important to notice that Tzimiropoulos and Pantic (2013) derived a more efficient version of this algorithm (O(nmF+n2F+n3)), coined Fast-Forward, by applying the same derivation used to obtain their Fast-SIC algorithm. They showed that in the forward case their derivation removed the need to explicitly solve for the appearance parameters. Their algorithm is equivalent to the previous Project-Out Forward Gauss-Newton.

Finally, Kossaifi et al. derived the SSD Inverse Newton Schur algorithm in Kossaifi et al. (2014). This algorithm has a total complexity per iteration of O(nmF+n2m+2n2F+n3) and was shown to slightly underperform its equivalent Gauss-Newton counterpart.

The remaining SSD algorithms derived in Sect. 3, i.e.:

  • SSD Inverse Wiberg

  • SSD Forward Gauss-Newton Alternated

  • SSD Forward Newton Schur

  • SSD Forward Newton Alternated

  • SSD Forward Wiberg

  • SSD Asymmetric Gauss-Newton Schur

  • SSD Asymmetric Gauss-Newton Alternated

  • SSD Asymmetric Newton Schur

  • SSD Asymmetric Newton Alternated

  • SSD Asymmetric Wiberg

  • SSD Bidirectional Gauss-Newton Schur

  • SSD Bidirectional Gauss-Newton Alternated

  • SSD Bidirectional Newton Schur

  • SSD Bidirectional Newton Alternated

  • SSD Bidirectional Wiberg

have never been published before and are also a key contribution of the presented work.

Note that the iterative solutions of all CGD algorithms studied in this paper are given in Appendix 3.

Experiments

In this section, we analyze the performance of the CGD algorithms derived in Sect. 3 on the specific problems of non-rigid face alignment in-the-wild. Results for five experiments are reported. The first experiment compares the fitting accuracy and convergence properties of all algorithms on the test set of the popular Labelled Faces Parts in-the-Wild (LFPW) (Belhumeur et al. 2011) database. The second experiment quantifies the importance of the two terms in the Bayesian project-out cost function in relation to the fitting accuracy obtained by Project-Out algorithms. In the third experiment, we study the effect that varying the value of the parameters α and β has on the performance of Asymmetric algorithms. The fourth experiment explores the effect of optimizing the cost functions using reduced subsets of the total number of pixels (Fig. 3) and quantifies the impact that this has on the accuracy and computational efficiency of CGD algorithms. Finally, in the fifth experiment, we report the performance of the most accurate CGD algorithms on the test set of the Helen (Le et al. 2012) database and on the entire Annotated Faces in-the-Wild (AFW) (Zhu and Ramanan 2012) database.

Fig. 3.

Fig. 3

Subset of pixels on the reference frame used to optimize the SSD and Project-Out cost functions for different sampling rates. a 100%, b 50%, c 25%, d 12%

Throughout this section, we abbreviate CGD algorithms using the following convention: CF_TC_OM(_OS) where: (a) CF stands for Cost Function and can be either SSD or PO depending on whether the algorithm uses the Sum of Squared Differences or the Project Out cost function; (b) TC stands for Type of Composition and can be For, Inv, Asy or Bid depending on whether the algorithm uses Forward, Inverse, Asymmetric or Bidirectional compositions; (c) OM stands for Optimization Method and can be GN, N or W depending on whether the algorithm uses the Gauss-Newton, Newton or Wiberg optimization methods; and, finally, (d) if Gauss-Newton or Newton methods are used, the optional field OS, which stands for Optimization Strategy, can be Sch or Alt depending on whether the algorithm solves for the parameters simultaneously using the Schur complement or using Alternated optimization. For example, following the previous convention the Project Out Bidirectional Gauss-Newton Schur algorithm is denoted by PO_Bid_GN_Sch.

Landmark annotations for all databases are provided by the iBUG group25 (Sagonas et al. 2013a, b) and fitting accuracy is reported using the point-to-point error measure normalized by the face size 26 proposed in Zhu and Ramanan (2012) over the 49 interior points of the iBug annotation scheme.

In all face alignment experiments, we use a single AAM, trained using the 800 and 2000 training images of the LFPW and Helen databases. Similar to Tzimiropoulos and Pantic (2014), we use a modified version of the Dense Scale Invariant Feature Transform (DSIFT) (Lowe 1999; Dalal and Triggs 2005) to define the appearance representation of the previous AAM. In particular, we describe each pixel with a reduced SIFT descriptor of length 8 using the public implementation provided by the authors of Vedaldi and Fulkerson (2010). All algorithms are implemented in a coarse to fine manner using a Gaussian pyramid with 2 levels (face images are normalized to a face size 27 of roughly 150 pixels at the top level). In all experiments, we optimize over 7 shape parameters (4 similarity transform and 3 non-rigid shape parameters) at the first pyramid level and over 16 shape parameters (4 similarity transform and 12 non-rigid shape parameters) at the second one. The dimensionality of the appearance models is kept to represent 75% of the total variance in both levels. This results in 225 and 280 appearance parameters at the first and second pyramid levels respectively. The previous choices were determined by testing on a small hold out set of the training data.

In all experiments, algorithms are initialized by perturbing the similarity transform that perfectly aligns the model’s mean shape (a frontal pose and neutral expression looking shape) with the ground truth shape of each image. These transforms are perturbed by adding uniformly distributed random noise to their scale, rotation and translation parameters. Exemplar initializations obtained by this procedure for different amounts of noise are shown in Fig. 4. Notice that we found that initializing using 5% uniform noise is (statistically) equivalent to initializing with the popular OpenCV (Bradski 2000) implementation of the well-known Viola and Jones face detector (Viola and Jones 2001) on the test images of the LFPW database.

Fig. 4 Exemplar initializations obtained by varying the percentage of uniform noise added to the similarity parameters. Note that increasing the percentage of noise produces more challenging initializations: a 0%, b 2.5%, c 5%, d 7.5%, e 10%

Unless stated otherwise: (i) algorithms are initialized with 5% uniform noise; (ii) test images are fitted three times using different random initializations (the same exact random initializations are used for all algorithms); (iii) algorithms are left to run for 40 iterations (24 iterations at the first pyramid level and 16 at the second); (iv) results for Project-Out algorithms are obtained using the Bayesian project-out cost function defined by Eq. 22; and (v) results for Asymmetric algorithms are reported for the special case of symmetric composition, i.e. α=β=0.5 in Eq. 34.

Finally, in order to encourage open research and facilitate future comparisons with the results presented in this section, we make the implementation of all algorithms publicly available as part of the Menpo Project (Alabort-i-Medina et al. 2014).

Comparison on LFPW

In this experiment, we report the fitting accuracy and convergence properties of all CGD algorithms studied in this paper. Results are reported on the 220 test images of the LFPW database. In order to keep the information easily readable and interpretable, we group algorithms by cost function (i.e. SSD or Project-Out), and optimization method (i.e. Gauss-Newton, Newton or Wiberg).

Results for this experiment are reported in Figs. 5, 6, 7, 8, 9 and 10. These figures all have the same structure and are composed of four graphs and a table. Figures 5a, 6a, 7a, 8a, 9a and 10a report the Cumulative Error Distribution (CED), i.e. the proportion of images versus normalized point-to-point error, for each of the algorithms’ groups. Figures 5e, 6e, 7e, 8e, 9e and 10e summarize and complete the information of the previous CEDs by stating the proportion of images fitted with a normalized point-to-point error smaller than 0.02, 0.03 and 0.04, and by stating the mean, std and median of the final normalized point-to-point error as well as the approximate run time. The aim of the previous figures and tables is to help us compare the final fitting accuracy obtained by each algorithm. On the other hand, Figs. 5b, 6b, 7b, 8b, 9b and 10b report the mean normalized point-to-point error at each iteration while Figs. 5c, 5d, 6c, 6d, 7c, 7d, 8c, 8d, 9c, 9d and 10c, 10d report the mean normalized cost at each iteration.27 The aim of these figures is to help us compare the convergence properties of every algorithm.
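The mean normalized cost curves are produced as described in footnote 27; a minimal sketch of that normalization, with an array layout that is our assumption, is:

```python
import numpy as np

def mean_normalized_cost(cost_per_iteration):
    """cost_per_iteration: shape (n_images, n_iterations), value of the cost
    function at each iteration for every test image. Each row is divided by
    its initial value and the result is averaged over images (footnote 27)."""
    c = np.asarray(cost_per_iteration, dtype=float)
    return (c / c[:, :1]).mean(axis=0)

print(mean_normalized_cost([[10.0, 4.0, 2.0], [8.0, 6.0, 4.0]]))  # [1.0, 0.575, 0.35]
```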

Fig. 5 Results showing the fitting accuracy and convergence properties of the SSD Gauss-Newton algorithms on the LFPW test dataset initialized with 5% uniform noise. a CED on the LFPW test dataset for all SSD Gauss-Newton algorithms initialized with 5% uniform noise. b Mean normalized point-to-point error versus number of iterations on the LFPW test dataset for all SSD Gauss-Newton algorithms initialized with 5% uniform noise. c Mean normalized cost versus number of first scale iterations on the LFPW test dataset for all SSD Gauss-Newton algorithms initialized with 5% uniform noise. d Mean normalized cost versus number of second scale iterations on the LFPW test dataset for all SSD Gauss-Newton algorithms initialized with 5% uniform noise. e Table showing the proportion of images fitted with a normalized point-to-point error below 0.02, 0.03 and 0.04 together with the normalized point-to-point error mean, std and median for all SSD Gauss-Newton algorithms initialized with 5% uniform noise

Fig. 6 Results showing the fitting accuracy and convergence properties of the SSD Newton algorithms on the LFPW test dataset initialized with 5% uniform noise. a Cumulative error distribution on the LFPW test dataset for all SSD Newton algorithms initialized with 5% uniform noise. b Mean normalized point-to-point error versus number of iterations on the LFPW test dataset for all SSD Newton algorithms initialized with 5% uniform noise. c Mean normalized cost versus number of first scale iterations on the LFPW test dataset for all SSD Newton algorithms initialized with 5% uniform noise. d Mean normalized cost versus number of second scale iterations on the LFPW test dataset for all SSD Newton algorithms initialized with 5% uniform noise. e Table showing the proportion of images fitted with a normalized point-to-point error below 0.02, 0.03 and 0.04 together with the normalized point-to-point error mean, std and median for all SSD Newton algorithms initialized with 5% uniform noise

Fig. 7 Results showing the fitting accuracy and convergence properties of the SSD Wiberg algorithms on the LFPW test dataset. a CED on the LFPW test dataset for all SSD Wiberg algorithms initialized with 5% uniform noise. b Mean normalized point-to-point error versus number of iterations on the LFPW test dataset for all SSD Wiberg algorithms initialized with 5% uniform noise. c Mean normalized cost versus number of first scale iterations on the LFPW test dataset for all SSD Wiberg algorithms initialized with 5% uniform noise. d Mean normalized cost versus number of second scale iterations on the LFPW test dataset for all SSD Wiberg algorithms initialized with 5% uniform noise. e Table showing the proportion of images fitted with a normalized point-to-point error below 0.02, 0.03 and 0.04 together with the normalized point-to-point error mean, std and median for all SSD Wiberg algorithms initialized with 5% uniform noise

Fig. 8 Results showing the fitting accuracy and convergence properties of the Project-Out Gauss-Newton algorithms on the LFPW test dataset. a CED graph on the LFPW test dataset for all Project-Out Gauss-Newton algorithms initialized with 5% uniform noise. b Mean normalized point-to-point error versus number of iterations on the LFPW test dataset for all Project-Out Gauss-Newton algorithms initialized with 5% uniform noise. c Mean normalized cost versus number of first scale iterations on the LFPW test dataset for all Project-Out Gauss-Newton algorithms initialized with 5% uniform noise. d Mean normalized cost versus number of second scale iterations on the LFPW test dataset for all Project-Out Gauss-Newton algorithms initialized with 5% uniform noise. e Table showing the proportion of images fitted with a normalized point-to-point error below 0.02, 0.03 and 0.04 together with the normalized point-to-point error mean, std and median for all Project-Out Gauss-Newton algorithms initialized with 5% uniform noise

Fig. 9 Results showing the fitting accuracy and convergence properties of the Project-Out Newton algorithms on the LFPW test dataset. a CED graph on the LFPW test dataset for all Project-Out Newton algorithms initialized with 5% uniform noise. b Mean normalized point-to-point error versus number of iterations on the LFPW test dataset for all Project-Out Newton algorithms initialized with 5% uniform noise. c Mean normalized cost versus number of first scale iterations on the LFPW test dataset for all Project-Out Newton algorithms initialized with 5% uniform noise. d Mean normalized cost versus number of second scale iterations on the LFPW test dataset for all Project-Out Newton algorithms initialized with 5% uniform noise. e Table showing the proportion of images fitted with a normalized point-to-point error below 0.02, 0.03 and 0.04 together with the normalized point-to-point error mean, std and median for all Project-Out Newton algorithms initialized with 5% uniform noise

Fig. 10 Results showing the fitting accuracy and convergence properties of the Project-Out Wiberg algorithms on the LFPW test dataset. a Cumulative Error Distribution graph on the LFPW test dataset for all Project-Out Wiberg algorithms initialized with 5% uniform noise. b Mean normalized point-to-point error versus number of iterations graph on the LFPW test dataset for all Project-Out Wiberg algorithms initialized with 5% uniform noise. c Mean normalized cost versus number of first scale iterations graph on the LFPW test dataset for all Project-Out Wiberg algorithms initialized with 5% uniform noise. d Mean normalized cost versus number of second scale iterations graph on the LFPW test dataset for all Project-Out Wiberg algorithms initialized with 5% uniform noise. e Table showing the proportion of images fitted with a normalized point-to-point error below 0.02, 0.03 and 0.04 together with the normalized point-to-point error mean, std and median for all Project-Out Wiberg algorithms initialized with 5% uniform noise

SSD Gauss-Newton algorithms

Results for SSD Gauss-Newton algorithms are reported in Fig. 5. We can observe that Inverse, Asymmetric and Bidirectional algorithms obtain a similar performance and significantly outperform Forward algorithms in terms of fitting accuracy, Fig. 5a, e. In absolute terms, Bidirectional algorithms slightly outperform Inverse and Asymmetric algorithms. On the other hand, the differences in performance between the Simultaneous Schur and Alternated optimization strategies are minimal for all algorithms and were found to have no statistical significance.

Looking at Fig. 5b–d, there seems to be a clear (and expected) correlation between the normalized point-to-point error and the normalized value of the cost function at each iteration. In terms of convergence, it can be seen that Forward algorithms converge more slowly than Inverse, Asymmetric and Bidirectional ones. Bidirectional algorithms converge slightly faster than Inverse algorithms and these, in turn, slightly faster than Asymmetric algorithms. In this case, the Simultaneous Schur optimization strategy seems to converge slightly faster than the Alternated one for all SSD Gauss-Newton algorithms.

SSD Newton algorithms

Results for SSD Newton algorithms are reported in Fig. 6. In this case, we can observe that the fitting performance of all algorithms decreases with respect to their Gauss-Newton counterparts, Fig. 6a, e. This is most noticeable in the case of Forward algorithms, for which there is a 20% drop in the proportion of images fitted below 0.02, 0.03 and 0.04 with respect to their Gauss-Newton equivalents. For these algorithms there is also a significant increase in the mean and median of the normalized point-to-point error. Asymmetric Newton algorithms also perform considerably worse, by between 5% and 10%, than their Gauss-Newton versions. The drop in performance is smaller for Inverse and Bidirectional Newton algorithms, for which accuracy is only reduced by around 3% with respect to their Gauss-Newton equivalents.

Within Newton algorithms, there are clear differences in terms of speed of convergence, Fig. 6b–d. Bidirectional algorithms are the fastest to converge, followed by Inverse and Asymmetric algorithms, in this order, and lastly Forward algorithms. In this case, the Simultaneous Schur optimization strategy again seems to converge slightly faster than the Alternated one for all algorithms but the Bidirectional ones, for which the Alternated strategy converges slightly faster. Overall, SSD Newton algorithms converge more slowly than SSD Gauss-Newton algorithms.

SSD Wiberg algorithms

Results for SSD Wiberg algorithms are reported in Fig. 7. Figure 7a–e shows that these results are (as one would expect) virtually equivalent to those obtained by their Gauss-Newton counterparts.

Project-Out Gauss-Newton algorithms

Results for Project-Out Gauss-Newton algorithms are reported in Fig. 8. We can observe that there is a significant drop in terms of fitting accuracy for Inverse and Bidirectional algorithms with respect to their SSD versions, Fig. 8a, e. As expected, the Forward algorithm achieves virtually the same results as its SSD counterpart. The Asymmetric algorithm obtains similar accuracy to that of the best performing SSD algorithms.

Looking at Fig. 8b–d, we can see that Inverse and Bidirectional algorithms converge slightly faster than the Asymmetric algorithm. However, the Asymmetric algorithm ends up descending to a significantly lower value of the mean normalized cost, which also translates into a lower final mean normalized point-to-point error. Similarly to the SSD case, the Forward algorithm shows the worst convergence.

Finally, notice that, in this case, there is virtually no difference, in terms of both final fitting accuracy and speed of convergence, between the Simultaneous Schur and Alternated optimization strategies used by the Bidirectional algorithm.

Project-Out Newton algorithms

Results for Project-Out Newton algorithms are reported in Fig. 9. It can be clearly seen that Project-Out Newton algorithms perform much worse than their Gauss-Newton and SSD counterparts. The final fitting accuracy obtained by these algorithms is very poor compared to the one obtained by the best SSD and Project-Out Gauss-Newton algorithms, Fig. 9a, e. In fact, looking at Fig. 9b–d, only the Forward and Asymmetric algorithms seem to be stable at the second level of the Gaussian pyramid, with the Inverse and Bidirectional algorithms completely diverging for some of the images, as shown by the large mean and std of their final normalized point-to-point errors.

Project-Out Wiberg algorithms

Results for the Project-Out Bidirectional Wiberg algorithm are reported in Fig. 10. As expected, the results are virtually identical to those obtained by the Project-Out Bidirectional Gauss-Newton algorithms.

Weighted Bayesian project-out

In this experiment, we quantify the importance of each of the two terms in our Bayesian project-out cost function, Eq. 22. To this end, we introduce the parameters ρ ∈ [0, 1] and γ = 1 − ρ to weight the relative contribution of both terms:

\rho \, \lVert \mathbf{i}[\mathbf{p}] - \bar{\mathbf{a}} \rVert^{2}_{\mathbf{A}\mathbf{D}^{-1}\mathbf{A}^{T}} \; + \; \frac{\gamma}{\sigma^{2}} \, \lVert \mathbf{i}[\mathbf{p}] - \bar{\mathbf{a}} \rVert^{2}_{\bar{\mathbf{A}}}    (124)

Setting ρ=0, γ=1 reduces the previous cost function to the original project-out loss proposed in Matthews and Baker (2004), completely disregarding the contribution of the prior distribution over the appearance parameters, i.e. the Mahalanobis distance within the appearance subspace. On the contrary, setting ρ=1, γ=0 reduces the cost function to the first term, completely disregarding the contribution of the project-out term, i.e. the distance to the appearance subspace. Finally, setting ρ=γ=0.5 leads to the standard Bayesian project-out cost function proposed in Sect. 3.1.2.
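For concreteness, the following NumPy sketch evaluates the weighted cost of Eq. 124 for a single warped image, assuming (as in the derivation of footnote 8) that D = Σ + σ²I, with Σ the diagonal matrix of appearance eigenvalues; all variable names are ours.

```python
import numpy as np

def weighted_bayesian_project_out(i_p, a_bar, A, eigvals, sigma2, rho):
    """Weighted Bayesian project-out cost of Eq. 124 (illustrative sketch).

    i_p     : warped image vector i[p], shape (F,)
    a_bar   : mean appearance vector, shape (F,)
    A       : orthonormal appearance basis, shape (F, m)
    eigvals : appearance eigenvalues, shape (m,)  (diagonal of Sigma)
    """
    gamma = 1.0 - rho
    r = i_p - a_bar                  # centred warped image
    c = A.T @ r                      # coordinates within the appearance subspace
    d = eigvals + sigma2             # diagonal of D = Sigma + sigma2 * I (assumption)
    within = np.sum(c ** 2 / d)      # Mahalanobis distance within the subspace
    out = np.sum((r - A @ c) ** 2)   # squared distance to the subspace
    return rho * within + gamma * out / sigma2
```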

In order to assess the impact that each term has on the fitting accuracy obtained by Project-Out algorithms, we repeat the experimental set up of the first experiment and test all Project-Out Gauss-Newton algorithms for different values of the parameter ρ = 1 − γ. Notice that, in this case, we only report the performance of Gauss-Newton algorithms because they were shown to vastly outperform Newton algorithms and to be virtually equivalent to Wiberg algorithms in the first experiment.

Results for this experiment are reported in Fig. 11. We can see that, regardless of the type of composition, a weighted combination of the two previous terms always leads to a smaller mean normalized point-to-point error compared to either term on its own. Note that the final fitting accuracy obtained with the standard Bayesian project-out cost function is substantially better than the one obtained by the original project-out loss (this is especially noticeable for the Inverse and Bidirectional algorithms), fully justifying the inclusion of the first term, i.e. the Mahalanobis distance within the appearance subspace, into the cost function. Finally, in this particular experiment, the final fitting accuracy of all algorithms is maximized by setting ρ=0.1, γ=0.9, further highlighting the importance of the first term in the Bayesian formulation.

Fig. 11 Results quantifying the effect of varying the value of the parameters ρ=1-γ in Project-Out Gauss-Newton algorithms. a Proportion of images with normalized point-to-point errors smaller than 0.02, 0.03 and 0.04 for the Project-Out and SSD Asymmetric Gauss-Newton algorithms for different values of ρ=1-γ and initialized with 5% noise. Colors encode overall fitting accuracy, from highest to lowest: red, orange, yellow, green, blue and purple. b CED on the LFPW test dataset for Project-Out Forward Gauss-Newton algorithms for different values of ρ=1-γ and initialized with 5% noise. c CED on the LFPW test dataset for Project-Out Inverse Gauss-Newton algorithms for different values of ρ=1-γ and initialized with 5% noise. d CED on the LFPW test dataset for Project-Out Asymmetric Gauss-Newton algorithms for different values of ρ=1-γ and initialized with 5% noise. e CED on the LFPW test dataset for Project-Out Bidirectional Gauss-Newton algorithms for different values of ρ=1-γ and initialized with 5% noise (Color figure online)

Optimal asymmetric composition

This experiment quantifies the effect that varying the value of the parameters α ∈ [0, 1] and β = 1 − α in Eq. 34 has on the fitting accuracy obtained by the Asymmetric algorithms. Note that for α=1, β=0 and α=0, β=1 these algorithms reduce to their Forward and Inverse versions respectively. Recall that, in previous experiments, we used the Symmetric case α=β=0.5 to generate the results reported for Asymmetric algorithms. Again, we only report performance for Gauss-Newton algorithms.

We again repeat the experimental set up described in the first experiment and report the fitting accuracy obtained by the Project-Out and SSD Asymmetric Gauss-Newton algorithms for different values of the parameter α = 1 − β. Results are shown in Fig. 12. For the Bayesian Project-Out (BPO) Asymmetric algorithm, the best results are obtained by setting α=0.4, β=0.6, Fig. 12a (top) and 12b. These results slightly outperform those obtained by the default Symmetric algorithm, and this particular configuration of the BPO Asymmetric algorithm is the best performing one on the LFPW test dataset. For the SSD Asymmetric Gauss-Newton algorithm, the best results are obtained by setting α=0.2, β=0.8, Fig. 12a (bottom) and 12c. In this case, the boost in performance with respect to the default Symmetric algorithm is significant and, with this particular configuration, the SSD Asymmetric Gauss-Newton algorithm is the best performing SSD algorithm on the LFPW test dataset, outperforming the Inverse and Bidirectional algorithms.

Fig. 12 Results quantifying the effect of varying the value of the parameters α=1-β in Asymmetric algorithms. a Proportion of images with normalized point-to-point errors smaller than 0.02, 0.03 and 0.04 for the Project-Out and SSD Asymmetric Gauss-Newton algorithms for different values of α=1-β and initialized with 5% noise. Colors encode overall fitting accuracy, from highest to lowest: red, orange, yellow, green, blue and purple. b CED on the LFPW test dataset for the Project-Out Asymmetric Gauss-Newton algorithm for different values of α=1-β and initialized with 5% noise. c CED on the LFPW test dataset for the SSD Asymmetric Gauss-Newton algorithm for different values of α=1-β and initialized with 5% noise (Color figure online)

Sampling and Number of Iterations

In this experiment, we explore two different strategies to reduce the running time of the previous CGD algorithms.

The first one consists of optimizing the SSD and Project-Out cost functions using only a subset of all pixels on the reference frame. In AAMs, the total number of pixels on the reference frame, F, is typically several orders of magnitude larger than the number of shape, n, and appearance, m, components, i.e. F ≫ m ≫ n. Therefore, a significant reduction in the complexity (and running time) of CGD algorithms can be obtained by decreasing the number of pixels that are used to optimize the previous cost functions. To this end, we compare the accuracy obtained by using 100, 50, 25 and 12% of the total number of pixels on the reference frame. Note that pixels are (approximately) evenly sampled across the reference frame in all cases, Fig. 3.
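A minimal sketch of this sub-sampling strategy, assuming the residual and Jacobian are already available as dense arrays, is given below; the sampling pattern and the Gauss-Newton step are illustrative only.

```python
import numpy as np

def gauss_newton_step_sampled(r, J, rate=0.25):
    """Gauss-Newton update computed on an approximately even subset of the
    reference-frame pixels (cf. Fig. 3). r: residual (F,); J: Jacobian (F, n)."""
    step = max(int(round(1.0 / rate)), 1)
    idx = np.arange(0, r.shape[0], step)   # keep roughly rate * F pixels
    r_s, J_s = r[idx], J[idx]
    return -np.linalg.solve(J_s.T @ J_s, J_s.T @ r_s)

# Synthetic example: F = 10000 pixels, n = 16 shape parameters.
rng = np.random.default_rng(0)
J = rng.standard_normal((10000, 16))
r = rng.standard_normal(10000)
dp = gauss_newton_step_sampled(r, J, rate=0.25)
```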

The second strategy consists of simply reducing the number of iterations that each algorithm is run. Based on the figures used to assess the convergence properties of CGD algorithms in previous experiments, we compare the accuracy obtained by running the algorithms for 40 (24+16) and 20 (12+8) iterations.

Note that, in order to further highlight the advantages and disadvantages of using the previous strategies, we report the fitting accuracy obtained by initializing the algorithms using different amounts of uniform noise.

Once more, we repeat the experimental set up of the first experiment and report the fitting accuracy obtained by the Project-Out and SSD Asymmetric Gauss-Newton algorithms. Results for this experiment are shown in Fig. 13. It can be seen that reducing the number of pixels down to 25%, while maintaining the original number of iterations at 40 (24+16), has little impact on the fitting accuracy achieved by both algorithms, while reducing it to 12% has a clear negative impact, Fig. 13a, b. Also, performance seems to be consistent across the different amounts of noise. In terms of run time, Fig. 13c, reducing the number of pixels to 50, 25 and 12% offers speed-ups of 2.0x, 2.9x and 3.7x for the BPO algorithm and of 1.8x, 2.6x and 2.8x for the SSD algorithm, respectively.

Fig. 13 Results assessing the effectiveness of sampling for the best performing Project-Out and SSD algorithms on the LFPW database. a Proportion of images with normalized point-to-point errors smaller than 0.02, 0.03 and 0.04 for the SSD Asymmetric Gauss-Newton algorithm using different sampling rates, 40 (24+16) iterations, and initialized with different amounts of noise. b Proportion of images with normalized point-to-point errors smaller than 0.02, 0.03 and 0.04 for the Project-Out Asymmetric Gauss-Newton algorithm using different sampling rates, 40 (24+16) iterations, and initialized with different amounts of noise. c Table showing the run time of each algorithm for different amounts of sampling and 40 (24+16) iterations. d Proportion of images with normalized point-to-point errors smaller than 0.02, 0.03 and 0.04 for the Project-Out Asymmetric Gauss-Newton algorithm using different sampling rates, 20 (12+8) iterations, and initialized with different amounts of noise. e Proportion of images with normalized point-to-point errors smaller than 0.02, 0.03 and 0.04 for the SSD Asymmetric Gauss-Newton algorithm using different sampling rates, 20 (12+8) iterations, and initialized with different amounts of noise. f Table showing the run time of each algorithm for different amounts of sampling and 20 (12+8) iterations. In a, b, d and e, colors encode overall fitting accuracy, from highest to lowest: red, orange, yellow, green, blue and purple (Color figure online)

On the other hand, reducing the number of iterations from 40 (24+16) to 20 (12+8) has no negative impact on performance for levels of noise smaller than 2%, but has a noticeable negative impact for levels of noise larger than 5%. Notice that remarkable speed-ups, Fig. 13f, can be obtained for both algorithms by combining the previous two strategies, at the expense of small but noticeable decreases in fitting accuracy.

Comparison on Helen and AFW

In order to facilitate comparisons with recent prior work on AAMs (Tzimiropoulos and Pantic 2013; Antonakos et al. 2014; Kossaifi et al. 2014) and with other state-of-the-art approaches in face alignment (Xiong and De la Torre 2013; Asthana et al. 2013), in this experiment we report the fitting accuracy of the SSD and Project-Out Asymmetric Gauss-Newton algorithms on the widely used test set of the Helen database and on the entire AFW database. Furthermore, we compare the performance of the previous two algorithms with that of the Gauss-Newton Deformable Part Models (GN-DPMs) recently proposed by Tzimiropoulos and Pantic (2014), which were shown to achieve state-of-the-art results on the problem of face alignment in-the-wild.

For both our algorithms, we report two different types of results: (i) a sampling rate of 25% and 20 (12+8) iterations; and (ii) a sampling rate of 50% and 40 (24+16) iterations. For GN-DPMs, we use the authors’ public implementation to generate the results. In this case, we again report two different types of results, by letting the algorithm run for 20 and 40 iterations.

Results for this experiment are shown in Fig. 14. Looking at Fig. 14a, we can see that both the SSD and Project-Out Asymmetric Gauss-Newton algorithms obtain similar fitting accuracy on the Helen test dataset. Note that, in all cases, their accuracy is comparable to the one achieved by GN-DPMs for normalized point-to-point errors <0.02 and significantly better for <0.03 and <0.04. As expected, the best results for both our algorithms are obtained using 50% of the total amount of pixels and 40 (24+16) iterations. However, the results obtained by using only 25% of the total amount of pixels and 20 (12+8) iterations are comparable to the previous ones, especially for the Project-Out Asymmetric Gauss-Newton algorithm. In general, these results are consistent with the ones obtained on the LFPW test dataset, Experiments 5.1 and 5.3.

Fig. 14 Results showing the fitting accuracy of the SSD and Project-Out Asymmetric Gauss-Newton algorithms on the Helen and AFW databases. a CED on the Helen test dataset for the Project-Out and SSD Asymmetric Gauss-Newton algorithms initialized with 5% noise. b CED on the AFW database for the Project-Out and SSD Asymmetric Gauss-Newton algorithms initialized with 5% noise

On the other hand, the performance of both algorithms drops significantly on the AFW database, Fig. 14b. In this case, GN-DPMs achieves slightly better results than the SSD and Project-Out Asymmetric Gauss-Newton algorithms for normalized point-to-point errors <0.02 and slightly worse for <0.03 and <0.04. Again, both our algorithms obtain better results by using a 50% sampling rate and 40 (24+16) iterations, and the difference in accuracy with respect to the versions using a 25% sampling rate and 20 (12+8) iterations widens slightly when compared to the results obtained on the Helen test dataset. This drop in performance is consistent with other recent works on AAMs (Tzimiropoulos and Pantic 2014; Alabort-i-Medina and Zafeiriou 2014; Antonakos et al. 2014; Alabort-i-Medina and Zafeiriou 2015) and is attributed to the large differences in terms of shape and appearance statistics between the images of the AFW dataset and those of the LFPW and Helen training sets on which the AAM was trained.

Exemplar results for this experiment are shown in Figs. 15 and 16.

Fig. 15 Exemplar results from the Helen test dataset. a Exemplar results from the Helen test dataset obtained by the Project-Out Asymmetric Gauss-Newton Schur algorithm. b Exemplar results from the Helen test dataset obtained by the SSD Asymmetric Gauss-Newton Schur algorithm

Fig. 16 Exemplar results from the AFW dataset. a Exemplar results from the AFW dataset obtained by the Project-Out Asymmetric Gauss-Newton Schur algorithm. b Exemplar results from the AFW dataset obtained by the SSD Asymmetric Gauss-Newton Schur algorithm

Analysis

Given the results reported in the previous experiments, we conclude that:

  1. Overall, Gauss-Newton and Wiberg algorithms vastly outperform Newton algorithms for fitting AAMs. Experiment 5.1 clearly shows that the former algorithms provide significantly higher levels of fitting accuracy at considerably lower computational complexities and run times. These findings are consistent with the existing literature in the related field of parametric image alignment (Matthews and Baker 2004) and also, to a certain extent, with prior work on Newton algorithms for AAM fitting (Kossaifi et al. 2014). We attribute the poor performance of Newton algorithms to the difficulty of accurately computing a (noiseless) estimate of the full Hessian matrix using finite differences.

  2. Gauss-Newton and Wiberg algorithms are virtually equivalent in performance. The results in Experiment 5.1 show that the difference in accuracy between both types of algorithms is minimal and the small differences in their respective solutions are, in practice, insignificant.

  3. Our Bayesian project-out formulation leads to significant improvements in fitting accuracy without adding extra computational cost. Experiment 5.2 shows that a weighted combination of the two terms forming the Bayesian project-out loss always outperforms the classic project-out formulation.

  4. The Asymmetric composition proposed in this work leads to CGD algorithms that are more accurate and that converge faster. In particular, the SSD and Project-Out Asymmetric Gauss-Newton algorithms are shown to achieve significantly better performance than their Forward and Inverse counterparts in Experiments 5.1 and 5.3.

  5. Finally, a significant reduction in the computational complexity and run time of CGD algorithms can be obtained by limiting the number of pixels considered during optimization of the loss function and by adjusting the number of iterations that the algorithms are run for, Experiment 5.4.

Conclusion

In this paper we have thoroughly studied the problem of fitting AAMs using CGD algorithms. We have presented a unified and complete framework for these algorithms and classified them with respect to three of their main characteristics: (i) cost function; (ii) type of composition; and (iii) optimization method.

Furthermore, we have extended the previous framework by:

  • Proposing a novel Bayesian cost function for fitting AAMs that can be interpreted as a more general formulation of the well-known project-out loss. We have assumed a probabilistic model for appearance generation with both Gaussian noise and a Gaussian prior over a latent appearance space. Marginalizing out the latent appearance space, we have derived a novel cost function that only depends on shape parameters and that can be interpreted as a valid and more general probabilistic formulation of the well-known project-out cost function (Matthews and Baker 2004). In the experiments, we have shown that our Bayesian formulation considerably outperforms the original project-out cost function.

  • Proposing asymmetric and bidirectional compositions for CGD algorithms. We have shown the connection between Gauss-Newton Asymmetric algorithms and ESM algorithms and experimentally proved that these two novel types of composition lead to better convergent and more robust CGD algorithms for fitting AAMs.

  • Providing new valuable insights into existent CGD algorithms by reinterpreting them as direct applications of the Schur complement and the Wiberg method.

Finally, in terms of future work, we plan to:

  • Adapt existent Supervised Descent (SD) algorithms for face alignment (Xiong and De la Torre 2013; Tzimiropoulos 2015) to AAMs and investigate their relationship with the CGD algorithms studied in this paper.

  • Investigate if our Bayesian cost function and the proposed asymmetric and bidirectional compositions can also be successfully applied to similar generative parametric models, such as the Gauss-Newton Parts-Based Deformable Model (GN-DPM) proposed in Tzimiropoulos and Pantic (2014).

Acknowledgments

The work of Joan Alabort-i-Medina is funded by a DTA studentship from Imperial College London and by the Qualcomm Innovation Fellowship. The work of S. Zafeiriou has been partly funded by the EPSRC project Adaptive Facial Deformable Models for Tracking (ADAManT), EP/L026813/1.

Appendix 1: Asymmetric Gauss-Newton Algorithms as Efficient Second-order Minimization (ESM)

In this section, we show that the Asymmetric Gauss-Newton algorithms derived in Sect. 3.3.1 are, in fact, also true second order optimization algorithms with respect to the incremental warp Δp.

The use of asymmetric composition together with the Gauss-Newton method has been proven to naturally lead to Efficient Second-order Minimization (ESM) algorithms in the related field of parametric image alignment (Malis 2004; Benhimane and Malis 2004; Mégret et al. 2008, 2010). Following a similar line of reasoning, we will show that Asymmetric Gauss-Newton algorithms for fitting AAMs can also be interpreted as ESM algorithms.

In order to show the previous relationship we will make use of the simplified data term28 introduced by Eq. 25. Using forward composition, the optimization problem is defined by:

\Delta\mathbf{p} = \operatorname*{arg\,min}_{\Delta\mathbf{p}} \; \tfrac{1}{2}\, \mathbf{r}_{f}^{T}\, \bar{\mathbf{A}}\, \mathbf{r}_{f}    (125)

where the forward residual rf is defined as:

\mathbf{r}_{f} = \mathbf{i}[\mathbf{p} \circ \Delta\mathbf{p}] - \mathbf{a}    (126)

As seen before, Gauss-Newton solves the previous optimization problem by performing a first order Taylor expansion of the residual around Δp = 0:

\hat{\mathbf{r}}_{f}(\Delta\mathbf{p}) = \mathbf{r}_{f} + \frac{\partial \mathbf{r}_{f}}{\partial \Delta\mathbf{p}}\, \Delta\mathbf{p} + \underbrace{O_{r_{f}}(\Delta\mathbf{p}^{2})}_{\text{remainder}} = \mathbf{i}[\mathbf{p}] - \mathbf{a} + \mathbf{J}_{i}\, \Delta\mathbf{p} + O_{r_{f}}(\Delta\mathbf{p}^{2})    (127)

and solving the following approximation of the original problem:

\Delta\mathbf{p} = \operatorname*{arg\,min}_{\Delta\mathbf{p}} \; \tfrac{1}{2}\, \hat{\mathbf{r}}_{f}^{T}\, \hat{\mathbf{r}}_{f}    (128)

However, note that, instead of performing a first order Taylor expansion, we can also perform a second order Taylor expansion of the residual:

\check{\mathbf{r}}_{f}(\Delta\mathbf{p}) = \mathbf{r}_{f} + \frac{\partial \mathbf{r}_{f}}{\partial \Delta\mathbf{p}}\, \Delta\mathbf{p} + \tfrac{1}{2}\, \Delta\mathbf{p}^{T} \frac{\partial^{2} \mathbf{r}_{f}}{\partial^{2} \Delta\mathbf{p}}\, \Delta\mathbf{p} + O_{r_{f}}(\Delta\mathbf{p}^{3}) = \mathbf{i}[\mathbf{p}] - \mathbf{a} + \mathbf{J}_{i}\, \Delta\mathbf{p} + \tfrac{1}{2}\, \Delta\mathbf{p}^{T} \mathbf{H}_{i}\, \Delta\mathbf{p} + O_{r_{f}}(\Delta\mathbf{p}^{3})    (129)

Then, given the second main assumption behind AAMs (Eq. 7) the following approximation must hold:

\nabla\mathbf{i}[\mathbf{p}]\, \frac{\partial \mathcal{W}}{\partial \Delta\mathbf{p}} \approx \nabla\mathbf{a}\, \frac{\partial \mathcal{W}}{\partial \Delta\mathbf{p}} \;\Longleftrightarrow\; \mathbf{J}_{i} \approx \mathbf{J}_{a}    (130)

and, because the previous Ji and Ja are functions of Δp, we can perform a first order Taylor expansion of Ji to obtain:

\begin{aligned}
\mathbf{J}_{i}(\Delta\mathbf{p}) &\approx \mathbf{J}_{i} + \Delta\mathbf{p}^{T} \frac{\partial \mathbf{J}_{i}}{\partial \Delta\mathbf{p}} + \underbrace{O_{J_{i}}(\Delta\mathbf{p}^{2})}_{\text{remainder}} = \mathbf{J}_{i} + \Delta\mathbf{p}^{T} \mathbf{H}_{i} + O_{J_{i}}(\Delta\mathbf{p}^{2}) \\
\mathbf{J}_{a} &\approx \mathbf{J}_{i} + \Delta\mathbf{p}^{T} \mathbf{H}_{i} + O_{J_{i}}(\Delta\mathbf{p}^{2}) \\
\Delta\mathbf{p}^{T} \mathbf{H}_{i} &\approx \mathbf{J}_{a} - \mathbf{J}_{i} - O_{J_{i}}(\Delta\mathbf{p}^{2})
\end{aligned}    (131)

Finally, substituting the previous approximation for $\Delta\mathbf{p}^{T}\mathbf{H}_{i}$ into Eq. 129 we arrive at:

\begin{aligned}
\check{\mathbf{r}}_{f}(\Delta\mathbf{p}) &= \mathbf{i}[\mathbf{p}] - \mathbf{a} + \mathbf{J}_{i}\, \Delta\mathbf{p} + \tfrac{1}{2}\, \Delta\mathbf{p}^{T} \mathbf{H}_{i}\, \Delta\mathbf{p} + O_{r_{f}}(\Delta\mathbf{p}^{3}) \\
&= \mathbf{i}[\mathbf{p}] - \mathbf{a} + \mathbf{J}_{i}\, \Delta\mathbf{p} + \tfrac{1}{2}\left(\mathbf{J}_{a} - \mathbf{J}_{i} - O_{J_{i}}(\Delta\mathbf{p}^{2})\right)\Delta\mathbf{p} + O_{r_{f}}(\Delta\mathbf{p}^{3}) \\
&= \mathbf{i}[\mathbf{p}] - \mathbf{a} + \tfrac{1}{2}\left(\mathbf{J}_{i} + \mathbf{J}_{a}\right)\Delta\mathbf{p} + O_{\text{total}}(\Delta\mathbf{p}^{3})
\end{aligned}    (132)

where the total remainder is cubic with respect to Δp:

O_{\text{total}}(\Delta\mathbf{p}^{3}) = O_{r_{f}}(\Delta\mathbf{p}^{3}) - O_{J_{i}}(\Delta\mathbf{p}^{2})\, \Delta\mathbf{p}    (133)

The expression in Eq. 132 constitutes a true second order approximation of the forward residual $\mathbf{r}_{f}$, where the term $\tfrac{1}{2}(\mathbf{J}_{i} + \mathbf{J}_{a})$ is equivalent to the asymmetric Jacobian in Eq. 47 when α=β=0.5:

\begin{aligned}
\tfrac{1}{2}\left(\mathbf{J}_{i} + \mathbf{J}_{a}\right) &= \tfrac{1}{2}\, \mathbf{J}_{i} + \tfrac{1}{2}\, \mathbf{J}_{a} = \tfrac{1}{2}\, \nabla\mathbf{i}[\mathbf{p}]\, \frac{\partial \mathcal{W}}{\partial \Delta\mathbf{p}} + \tfrac{1}{2}\, \nabla\mathbf{a}\, \frac{\partial \mathcal{W}}{\partial \Delta\mathbf{p}} \\
&= \left(\tfrac{1}{2}\, \nabla\mathbf{i}[\mathbf{p}] + \tfrac{1}{2}\, \nabla\mathbf{a}\right) \frac{\partial \mathcal{W}}{\partial \Delta\mathbf{p}} = \nabla\mathbf{t}\, \frac{\partial \mathcal{W}}{\partial \Delta\mathbf{p}} = \mathbf{J}_{t}
\end{aligned}    (134)

and, consequently, Asymmetric Gauss-Newton algorithms for fitting AAMs can be viewed as ESM algorithms that only require first order partial derivatives of the residual and that have the same computational complexity as first order algorithms.
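The following NumPy sketch illustrates the blended Jacobian of Eq. 134 (and, more generally, the asymmetric Jacobian for arbitrary α), assuming the image gradient, template gradient and warp Jacobian are available as dense arrays; names and array layouts are ours.

```python
import numpy as np

def asymmetric_jacobian(grad_image, grad_template, dW_dp, alpha=0.5):
    """Blend image and template gradients into the asymmetric Jacobian (sketch).

    grad_image    : gradient of the warped image i[p],   shape (F, 2)
    grad_template : gradient of the appearance template, shape (F, 2)
    dW_dp         : warp Jacobian dW/dp,                 shape (F, 2, n)
    Returns J_t of shape (F, n); alpha = beta = 0.5 gives the symmetric case.
    """
    beta = 1.0 - alpha
    grad_t = alpha * grad_image + beta * grad_template
    return np.einsum('fd,fdn->fn', grad_t, dW_dp)
```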

Appendix 2: Terms in SSD Newton Hessians

In this section we define the individual terms of the Hessian matrices used by the SSD Asymmetric and Bidirectional Newton optimization algorithms derived in Sect. 3.3.2.

(a) Asymmetric

The individual terms forming the Hessian matrix of the SSD Asymmetric Newton algorithm defined by Eq. 83 are defined as follows:

\frac{\partial^{2} D_{a}}{\partial \Delta\mathbf{c}^{2}} = \frac{\partial}{\partial \Delta\mathbf{c}}\left(-\mathbf{A}^{T}\mathbf{r}_{a}\right) = -\mathbf{A}^{T}\frac{\partial \mathbf{r}_{a}}{\partial \Delta\mathbf{c}} = \mathbf{A}^{T}\mathbf{A} = \mathbf{I}    (135)

\frac{\partial^{2} D_{a}}{\partial \Delta\mathbf{c}\, \partial \Delta\mathbf{p}} = \frac{\partial}{\partial \Delta\mathbf{p}}\left(-\mathbf{A}^{T}\mathbf{r}_{a}\right) = -\frac{\partial \mathbf{A}^{T}}{\partial \Delta\mathbf{p}}\, \mathbf{r}_{a} - \mathbf{A}^{T}\frac{\partial \mathbf{r}_{a}}{\partial \Delta\mathbf{p}} = -\beta\, \mathbf{J}_{\mathbf{A}}^{T}\, \mathbf{r}_{a} - \mathbf{A}^{T}\mathbf{J}_{t}    (136)

where we have defined $\mathbf{J}_{\mathbf{A}} = \nabla[\mathbf{a}_{1}, \ldots, \mathbf{a}_{m}]^{T}\, \frac{\partial \mathcal{W}}{\partial \Delta\mathbf{p}}$.

\frac{\partial^{2} D_{a}}{\partial \Delta\mathbf{p}^{2}} = \frac{\partial}{\partial \Delta\mathbf{p}}\left(\mathbf{J}_{t}^{T}\mathbf{r}_{a}\right) = \frac{\partial \mathbf{J}_{t}^{T}}{\partial \Delta\mathbf{p}}\, \mathbf{r}_{a} + \mathbf{J}_{t}^{T}\frac{\partial \mathbf{r}_{a}}{\partial \Delta\mathbf{p}} = \left(\frac{\partial \mathcal{W}}{\partial \Delta\mathbf{p}}^{T}\nabla^{2}\mathbf{t}\, \frac{\partial \mathcal{W}}{\partial \Delta\mathbf{p}} + \nabla\mathbf{t}\, \underbrace{\frac{\partial^{2} \mathcal{W}}{\partial^{2} \mathbf{p}}}_{=\,\mathbf{0}}\right)\mathbf{r}_{a} + \mathbf{J}_{t}^{T}\mathbf{J}_{t} = \frac{\partial \mathcal{W}}{\partial \Delta\mathbf{p}}^{T}\nabla^{2}\mathbf{t}\, \frac{\partial \mathcal{W}}{\partial \Delta\mathbf{p}}\, \mathbf{r}_{a} + \mathbf{J}_{t}^{T}\mathbf{J}_{t}    (137)

(b) Bidirectional

The individual terms forming the Hessian matrix of the SSD Bidirectional Newton algorithm defined by Eq. 86 are defined as follows:

\frac{\partial^{2} D_{b}}{\partial \Delta\mathbf{c}^{2}} = \frac{\partial}{\partial \Delta\mathbf{c}}\left(-\mathbf{A}^{T}\mathbf{r}_{b}\right) = -\mathbf{A}^{T}\frac{\partial \mathbf{r}_{b}}{\partial \Delta\mathbf{c}} = \mathbf{A}^{T}\mathbf{A} = \mathbf{I}    (138)

\frac{\partial^{2} D_{b}}{\partial \Delta\mathbf{c}\, \partial \Delta\mathbf{p}} = -\mathbf{A}^{T}\frac{\partial \mathbf{r}_{b}}{\partial \Delta\mathbf{p}} = -\mathbf{A}^{T}\mathbf{J}_{i}    (139)

\frac{\partial^{2} D_{b}}{\partial \Delta\mathbf{c}\, \partial \Delta\mathbf{q}} = -\frac{\partial \mathbf{A}^{T}}{\partial \Delta\mathbf{q}}\, \mathbf{r}_{b} - \mathbf{A}^{T}\frac{\partial \mathbf{r}_{b}}{\partial \Delta\mathbf{q}} = -\mathbf{J}_{\mathbf{A}}^{T}\, \mathbf{r}_{b} + \mathbf{A}^{T}\mathbf{J}_{a}    (140)

\frac{\partial^{2} D_{b}}{\partial \Delta\mathbf{p}^{2}} = \frac{\partial \mathbf{J}_{i}^{T}}{\partial \Delta\mathbf{p}}\, \mathbf{r}_{b} + \mathbf{J}_{i}^{T}\frac{\partial \mathbf{r}_{b}}{\partial \Delta\mathbf{p}} = \frac{\partial \mathcal{W}}{\partial \Delta\mathbf{p}}^{T}\nabla^{2}\mathbf{i}[\mathbf{p}]\, \frac{\partial \mathcal{W}}{\partial \Delta\mathbf{p}}\, \mathbf{r}_{b} + \mathbf{J}_{i}^{T}\mathbf{J}_{i}    (141)

\frac{\partial^{2} D_{b}}{\partial \Delta\mathbf{p}\, \partial \Delta\mathbf{q}} = \mathbf{J}_{i}^{T}\frac{\partial \mathbf{r}_{b}}{\partial \Delta\mathbf{q}} = -\mathbf{J}_{i}^{T}\mathbf{J}_{a}    (142)

\frac{\partial^{2} D_{b}}{\partial \Delta\mathbf{q}^{2}} = -\frac{\partial \mathbf{J}_{a}^{T}}{\partial \Delta\mathbf{q}}\, \mathbf{r}_{b} - \mathbf{J}_{a}^{T}\frac{\partial \mathbf{r}_{b}}{\partial \Delta\mathbf{q}} = -\frac{\partial \mathcal{W}}{\partial \Delta\mathbf{q}}^{T}\nabla^{2}(\bar{\mathbf{a}} + \mathbf{A}\mathbf{c})\, \frac{\partial \mathcal{W}}{\partial \Delta\mathbf{q}}\, \mathbf{r}_{b} + \mathbf{J}_{a}^{T}\mathbf{J}_{a}    (143)

Appendix 3: Iterative Solutions of All Algorithms

In this section we report the iterative solutions of all CGD algorithms studied in this paper. In order to keep the information structured, algorithms are grouped by their cost function. Consequently, the iterative solutions for all SSD and Project-Out algorithms are stated in Tables 1 and 2, respectively.

Table 1.

Iterative solutions of all SSD algorithms studied in this paper

SSD algorithms Iterative solutions
Δp Δq Δc
SSD_For_GN_Sch Amberg et al. (2009), Tzimiropoulos and Pantic (2013) Δp=-H^i-1JiTA¯r Δc=Ar+JiΔp
H^i=JiTA¯Ji
SSD_For_GN_Alt Δp=-Hi-1JiTr-AΔc Δc=Ar+JiΔp
Hi=JiTJi
SSD_For_N_Sch Δp=-H^iN-1JiTA¯r Δc=Ar+JiΔp
H^iN=WΔpT2iWΔpr+H^i
SSD_For_N_Alt Δp=-HiN-1JiTA¯r-AΔc Δc=Ar+JiΔp
HiN=WΔpT2iWΔpr+Hi
SSD_For_W Δp=-H^i-1JiTA¯r Δc=Ar
SSD_Inv_GN_Sch Papandreou and Maragos (2008), Tzimiropoulos and Pantic (2013) Δp=H^a-1JaTA¯r Δc=Ar-JaΔp
H^a=JaTA¯Ja
SSD_Inv_GN_Alt Tzimiropoulos et al. (2012), Antonakos et al. (2014) Δp=Ha-1JaTr-AΔc Δc=Ar-JaΔp
Ha=JaTJa
SSD_Inv_N_Sch Δp=H^aN-1JaTA¯r Δc=Ar-JaΔp
H^aN=WΔpT2aWΔpr+H^a
SSD_Inv_N_Alt Δp=HaN-1JaTA¯r-AΔc Δc=Ar-JaΔp
HaN=WΔpT2iWΔpr+Ha
SSD_Inv_W Δp=H^a-1JaTA¯r Δc=Ar
SSD_Asy_GN_Sch Δp=-H^t-1JtTA¯r Δc=Ar+JtΔp
H^t=JtTA¯Jt
SSD_Asy_GN_Alt Δp=-Ht-1JtTr-AΔc Δc=Ar+JtΔp
Ht=JtTJt
SSD_Asy_N_Sch Δp=-H^tN-1JtTA¯r Δc=Ar+JtΔp
H^tN=WΔpT2tWΔpr+H^t
SSD_Asy_N_Alt Δp=-HtN-1JtTA¯r-AΔc Δc=Ar+JtΔp
HtN=WΔpT2tWΔpr+Ht
SSD_Asy_W Δp=-H^t-1JtTA¯r Δc=Ar
SSD_Bid_GN_Sch Δp=-H^i-1JiTA¯r1 Δq=Hˇa-1JaTPr Δc=Ar2
r1=r-JaΔq Hˇa=JaTPJa r2=r+JiΔp-JaΔq
P=A¯-A¯JiH^i-1JiTA¯
SSD_Bid_GN_Alt Δp=-Hi-1JiTr3 Δq=Ha-1JaTr4 Δc=Ar2
r3=r-AΔc-JaΔq r4=r-AΔc+JiΔp
SSD_Bid_N_Sch Δp=-H^iN-1JiTA¯r1 Δq=HˇaN-1JaTPNr Δc=Ar2
HˇaN=WΔpT2tWΔpr+Hˇa
PN=A¯-A¯JiH^iN-1JiTA¯
SSD_Bid_N_Alt Δp=-HiN-1JiTr3 Δq=HaN-1JaTr4 Δc=Ar2
SSD_Bid_W Δp=-H^i-1JiTA¯r Δq=Hˇa-1JaTPr Δc=Ar

Table 2.

Iterative solutions of all Project-Out algorithms studied in this paper

Project-Out algorithms Iterative solutions
Δp Δq
PO_For_GN Amberg et al. (2009), Tzimiropoulos and Pantic (2013) Δp=-H^i-1JiTA¯r
H^i=JiTA¯Ji
PO_For_N Δp=-H^iN-1JiTA¯r
H^iN=WΔpT2iWΔpA¯r+H^i
PO_Inv_GN Matthews and Baker (2004) Δp=H^a-1Ja¯TA¯r
H^a¯=Ja¯TA¯Ja¯
PO_Inv_N Δp=H^a¯N-1Ja¯TA¯r
H^a¯N=WΔpT2a¯WΔpA¯r+H^a¯
PO_Asy_GN Δp=-H^t-1JtTA¯r
H^t=JtTA¯Jt
PO_Asy_N Δp=-H^tN-1JtTA¯r
H^tN=WΔpT2tWΔpA¯r+H^t
PO_Bid_GN_Sch Δp=-H^i-1JiTA¯r-Ja¯Δq Δq=Hˇa¯-1JiTPr
Hˇa¯=Ja¯TPJa¯
P=A¯-A¯JiH^i-1JiTA¯
PO_Bid_GN_Alt Δp=-H^i-1JiTA¯r-Ja¯Δq Δq=H^a¯-1Ja¯TA¯r+JiΔp
PO_Bid_N_Sch Δp=-H^iN-1JiTA¯r-Ja¯Δq Δq=Hˇa¯N-1Ja¯TPNr
Hˇa¯N=WΔpT2a¯WΔpA¯r+Hˇa¯
PN=A¯-A¯JiH^iN-1JiTA¯
PO_Bid_N_Alt Δp=-H^iN-1JiTA¯r-Ja¯Δq Δq=H^a¯N-1Ja¯TA¯r+JiΔp
PO_Bid_W Δp=-H^i-1JiTA¯r Δq=Hˇa¯-1Ja¯TPr

Appendix 4: Additional Experiment: Comparison on MIT StreetScene Dataset

In order to showcase the broader applicability of AAMs, we complete the main experimental section by performing an additional experiment on the problem of non-rigid car alignment in-the-wild. To this end, we report the fitting accuracy of the best performing CGD algorithms on the MIT StreetScene database.

We use the first view of the MIT StreetScene dataset, which contains a wide variety of frontal car images obtained in the wild. We use 10-fold cross-validation on the 500 images of the previous dataset to train and test our algorithms. We report results for the two versions of the SSD Asymmetric Gauss-Newton and the Project-Out Asymmetric Gauss-Newton algorithms used in Experiment 5.5.

Results for this experiment are shown in Fig. 17. We can observe that all algorithms obtain similar performance and that they vastly improve upon the original initialization. Exemplar results for this experiment are shown in Fig. 18.

Fig. 17 CED on the first view of the MIT StreetScene test dataset for the Project-Out and SSD Asymmetric Gauss-Newton algorithms initialized with 5% noise

Fig. 18 Exemplar results from the MIT StreetScene test dataset. a Exemplar results from the MIT StreetScene test dataset obtained by the Project-Out Asymmetric Gauss-Newton Schur algorithm. b Exemplar results from the MIT StreetScene test dataset obtained by the SSD Asymmetric Gauss-Newton Schur algorithm

Footnotes

1

A preliminary version of this work (Alabort-i-Medina and Zafeiriou 2014) was presented at CVPR 2014.

2

This formulation is generic and one could assume other probabilistic generative models (van der Maaten and Hendriks 2010; Bach and Jordan 2005; Prince et al. 2012; Nicolaou et al. 2014) to define novel probabilistic versions of AAMs.

3

Theoretically, the optimal value for ς² and σ² is the average value of the eigenvalues associated to the discarded shape and appearance eigenvectors respectively, i.e. $\varsigma^{2} = \frac{1}{N-n}\sum_{i=n+1}^{N}\lambda_{s_i}$ and $\sigma^{2} = \frac{1}{M-m}\sum_{i=m+1}^{M}\lambda_{a_i}$ (Moghaddam and Pentland 1997)

4

This choice of D is naturally given by the second main assumption behind AAMs, Eq. 7, and by the linear generative model of appearance defined by Eq. 9.

5

The residual r in Eq. 11 is linear with respect to the appearance parameters c and non-linear with respect to the shape parameters p through the warp W(x;p)

6

This is a common assumption in CGD algorithms (Matthews and Baker 2004), however, in reality, some degree of dependence between these parameters is to be expected (Cootes et al. 2001).

7

The use of PCA ensures the orthonormality of the appearance bases and, consequently, ATA=I (where I denotes the identity matrix). Similarly, the use of PCA also ensures orthogonality between the appearance mean and the appearance bases and, hence, ATa¯=0.

8
Using the Woodbury formula:
\begin{aligned}
(\mathbf{A}\boldsymbol{\Sigma}\mathbf{A}^{T} + \sigma^{2}\mathbf{I})^{-1} &= \frac{1}{\sigma^{2}}\mathbf{I} - \frac{1}{\sigma^{4}}\mathbf{A}\underbrace{\left(\boldsymbol{\Sigma}^{-1} + \frac{1}{\sigma^{2}}\mathbf{I}\right)^{-1}}_{\text{reapply Woodbury}}\mathbf{A}^{T} \\
&= \frac{1}{\sigma^{2}}\mathbf{I} - \frac{1}{\sigma^{4}}\mathbf{A}\left(\sigma^{2}\mathbf{I} - \sigma^{4}\left(\boldsymbol{\Sigma} + \sigma^{2}\mathbf{I}\right)^{-1}\right)\mathbf{A}^{T} \\
&= \frac{1}{\sigma^{2}}\mathbf{I} - \frac{1}{\sigma^{4}}\mathbf{A}\left(\sigma^{2}\mathbf{I} - \sigma^{4}\mathbf{D}^{-1}\right)\mathbf{A}^{T} \\
&= \mathbf{A}\mathbf{D}^{-1}\mathbf{A}^{T} + \frac{1}{\sigma^{2}}\left(\mathbf{I} - \mathbf{A}\mathbf{A}^{T}\right)
\end{aligned}
9

Further details regarding composition, pΔp, and inversion, Δq-1, of typical AAMs’ motion models such as PWA and TPS warps can be found in Matthews and Baker (2004), Papandreou and Maragos (2008).

10

Note that, Papandreou and Maragos derived the inverse compositional to forward additive parameter update Jacobian matrix Jq, however, it is straightforward to modify their original formulation to obtain Jp. Further details regarding the computation of the previous parameter update Jacobian matrices can be found in Papandreou and Maragos (2008), its appendix: http://www.stat.ucla.edu/~gpapan/pubs/confr/PapandreouMaragos_AAM_supmat-cvpr08.pdf and posterior correction http://www.stat.ucla.edu/~gpapan/pubs/confr/PapandreouMaragos_AAM_typo-cvpr08.pdf

11

Amberg et al. proposed the use of the Steepest Descent method (Boyd and Vandenberghe 2004) in Amberg et al. (2009). However, their approach requires a special formulation of the motion model and it performs poorly using the standard independent AAM formulation (Matthews and Baker 2004) used in this work.

12

Wiberg reduces to Gauss-Newton when only a single set of parameters needs to be inferred.

13

These represent the most general cases because the derivations for forward, inverse and symmetric compositions can be directly obtained from the asymmetric one and they require solving for both shape and appearance parameters.

14

The derivation of regularized solutions with respect to the appearance parameters Δc is straightforward and, hence, omitted throughout this section.

15

The value of the current estimate of appearance parameters is updated at each iteration using the following additive update rule: cc+Δc

16

n and m denote the number of shape and appearance parameters respectively while F denotes the number of pixels on the reference frame.

17
Applying the Schur complement to the following system of equations:
\mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{y} = \mathbf{a}, \qquad \mathbf{C}\mathbf{x} + \mathbf{D}\mathbf{y} = \mathbf{b}
the solution for x is given by:
\left(\mathbf{A} - \mathbf{B}\mathbf{D}^{-1}\mathbf{C}\right)\mathbf{x} = \mathbf{a} - \mathbf{B}\mathbf{D}^{-1}\mathbf{b}
and the solution for y is obtained by substituting the value of x into the original system.
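As a small numerical illustration of this reduction (toy matrices, not AAM quantities):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 3))
C, D = rng.standard_normal((3, 4)), rng.standard_normal((3, 3)) + 3.0 * np.eye(3)
a, b = rng.standard_normal(4), rng.standard_normal(3)

# Solve for x via the Schur complement, then back-substitute to obtain y.
x = np.linalg.solve(A - B @ np.linalg.solve(D, C), a - B @ np.linalg.solve(D, b))
y = np.linalg.solve(D, b - C @ x)

# Agreement with solving the full block system directly (should print True).
M = np.block([[A, B], [C, D]])
print(np.allclose(np.concatenate([x, y]), np.linalg.solve(M, np.concatenate([a, b]))))
```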
18

This is an important reduction in complexity because usually m>>n in CGD algorithms.

19

In practice, the solutions for the Project-Out cost function can be computed slightly faster than those for the SSD because they do not need to explicitly solve for Δc. This is especially important in the inverse compositional case because expressions of the form $(\mathbf{J}^{T}\mathbf{U}\mathbf{J})^{-1}\mathbf{J}^{T}\mathbf{U}$ can be completely precomputed and the computational cost per iteration reduces to O(nF).

20

In (2014), Kossaifi et al. applied the Schur complement to the Newton method using only inverse composition while we apply it here using the more general asymmetric (which includes forward, inverse and symmetric) and bidirectional compositions.

21

In practice, the solutions for the project-out cost function can also be computed slightly faster because they do not need to explicitly solve for Δc. However, in this case, using inverse composition we can only precompute terms of the form $\mathbf{J}^{T}\mathbf{U}$ and $\mathbf{J}^{T}\mathbf{U}\mathbf{J}$ but not the entire $\mathbf{H}^{-1}\mathbf{J}^{T}\mathbf{U}$ because of the explicit dependence between H and the current residual r.

22
A¯ is idempotent:
A¯A¯=I-AATI-AAT=ITI-2AAT+AATAIAT=I-2AAT+AAT=I-AAT=A¯
23

Notice that, in Amberg et al. (2009), Amberg et al. also introduced a hybrid forward/inverse algorithm, coined CoLiNe. This algorithm is a compromise between the previous two algorithms in terms of both complexity and accuracy. Due to its rather ad-hoc derivation, this algorithm was not considered in this paper.

24

Note that Martins et al. used an additive update rule for the shape parameters, p=p+Δp, so strictly speaking they derived an additive version of the algorithm, i.e. the Simultaneous Forward Additive (SFA) algorithm.

26

The face size is computed as the mean of the height and width of the bounding box containing a face.

27

These figures are produced by dividing the value of the cost function at each iteration by its initial value and averaging for all images.

28

Notice that similar derivations can also be obtained using the SSD and Project-Out data terms, but we use the simplified one here for clarity.

Contributor Information

Joan Alabort-i-Medina, Email: ja310@imperial.ac.uk.

Stefanos Zafeiriou, Email: s.zafeiriou@imperial.ac.uk.

References

  1. Alabort-i-Medina, J., & Zafeiriou, S. (2014). Bayesian active appearance models. In IEEE Conference on computer vision and pattern recognition (CVPR).
  2. Alabort-i-Medina, J., & Zafeiriou, S. (2015). Unifying holistic and parts-based deformable model fitting. In IEEE conference on computer vision and pattern recognition (CVPR).
  3. Alabort-i-Medina, J., Antonakos, E., Booth, J., Snape, P., & Zafeiriou, S. (2014). Menpo: A comprehensive platform for parametric image alignment and visual deformable models. In ACM international conference on multimedia (ACMM).
  4. Amberg, B., Blake, A., & Vetter, T. (2009). On compositional image alignment, with an application to active appearance models. In IEEE conference on computer vision and pattern recognition (CVPR).
  5. Antonakos, E., Alabort-i-Medina, J., Tzimiropoulos, G., & Zafeiriou, S. (2014). Feature-based lucas-kanade and active appearance models. IEEE Transactions on Image Processing (TIP).
  6. Asthana, A., Zafeiriou, S., Cheng, S., & Pantic, M. (2013). Robust discriminative response map fitting with constrained local models. In IEEE conference on computer vision and pattern recognition (CVPR).
  7. Authesserre JB, Berthoumieu Y. Bidirectional composition on lie groups for gradient-based image alignment. IEEE Transactions on Image Processing (TIP) 2010;19:2369–2381. doi: 10.1109/TIP.2010.2048406.
  8. Autheserre, J. B., Mégret, R., & Berthoumieu, Y. (2009). Asymmetric gradient-based image alignment. In IEEE international conference on acoustics, speech and signal processing (ICASSP).
  9. Bach, F., & Jordan, M. (2005). A probabilistic interpretation of canonical correlation analysis. Technical report, Department of Statistics. Berkeley: University of California
  10. Baker S, Matthews I. Lucas-kanade 20 years on: A unifying framework. International Journal of Computer Vision (IJCV) 2004;56:221–255. doi: 10.1023/B:VISI.0000011205.11775.fd.
  11. Batur A, Hayes M. Adaptive active appearance models. IEEE Transactions on Image Processing (TIP) 2005;14:1707–1721. doi: 10.1109/TIP.2005.854473.
  12. Belhumeur, P. N., Jacobs, D. W., Kriegman, D. J., & Kumar, N. (2011). Localizing parts of faces using a consensus of exemplars. In Conference on computer vision and pattern recognition (CVPR).
  13. Benhimane, S., & Malis, E. (2004). Real-time image-based tracking of planes using efficient second-order minimization. In IEEE international conference on intelligent robots and systems (IROS).
  14. Boyd S, Vandenberghe L. Convex optimization. Cambridge: Cambridge University Press; 2004.
  15. Bradski, G. (2000). The opencv library. Dr Dobb’s Journal of Software Tools.
  16. Cootes TF, Edwards GJ. Active appearance models. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 2001;6:681–685. doi: 10.1109/34.927467.
  17. Cootes, T. F., & Taylor, C. J. (2001). On representing edge structure for model matching. In IEEE conference on computer vision and pattern recognition (CVPR).
  18. Cootes, T. F., & Taylor, C. J. (2004). Statistical models of appearance for computer vision. Technical report, Imaging Science and Biomedical Engineering, University of Manchester.
  19. Dalal, N., & Triggs, B. (2005). Histograms of oriented gradients for human detection. In IEEE conference on computer vision and pattern recognition (CVPR).
  20. De la Torre F. A least-squares framework for component analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 2012;34:1041–1055. doi: 10.1109/TPAMI.2011.184.
  21. Donner R, Reiter M, Langs G, Peloschek P, Bischof H. Fast active appearance model search using canonical correlation analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 2006;10:1690–1694. doi: 10.1109/TPAMI.2006.206.
  22. Gross R, Matthews I, Baker S. Generic vs. person specific active appearance models. Image and Vision Computing. 2005;23:1080–1093. doi: 10.1016/j.imavis.2005.07.009.
  23. Hou, X., Li, S.Z., Zhang, H., & Cheng, Q. (2001). Direct appearance models. In IEEE conference on computer vision and pattern recognition (CVPR).
  24. Kossaifi, J., Tzimiropoulos, G., & Pantic, M. (2014). Fast newton active appearance models. In IEEE international conference on image processing (ICIP).
  25. Le, V., Jonathan, B., Lin, Z., Boudev, L., & Huang, T.S. (2012). Interactive facial feature localization. In European conference on computer vision (ECCV).
  26. Liu X. Discriminative face alignment. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 2009;31:1941–1954. doi: 10.1109/TPAMI.2008.238.
  27. Lowe, D. G. (1999). Object recognition from local scale-invariant features. In IEEE international conference on computer vision (ICCV).
  28. Lucey S, Navarathna R, Ashraf AB, Sridharan S. Fourier lucas-kanade algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 2013;35:1383–1396. doi: 10.1109/TPAMI.2012.220.
  29. Malis, E. (2004). Improving vision-based control using efficient second-order minimization techniques. In International conference on robotics and automation (ICRA).
  30. Martins, P., Batista, J., & Caseiro, R. (2010). Face alignment through 2.5d active appearance models. In British machine vision conference (BMVC).
  31. Matthews I, Baker S. Active appearance models revisited. International Journal of Computer Vision (IJCV) 2004;60:135–164. doi: 10.1023/B:VISI.0000029666.37597.d3.
  32. Mégret, R., Authesserre, J.B., & Berthoumieu, Y. (2008). The bi-directional framework for unifying parametric image alignment approaches. In European conference on computer vision (ECCV).
  33. Moghaddam B, Pentland A. Probabilistic visual learning for object representation. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 1997;19:696–710. doi: 10.1109/34.598227.
  34. Muñoz E, Márquez-Neila P, Baumela L. Rationalizing efficient compositional image alignment. International Journal of Computer Vision (IJCV) 2014;112:354–372. doi: 10.1007/s11263-014-0769-6.
  35. Nicolaou, M. A., Zafeiriou, S., & Pantic, P. (2014). A unified framework for probabilistic component analysis. In Machine learning and knowledge discovery in databases (ECML PKDD).
  36. Okatani T, Deguchi K. On the wiberg algorithm for matrix factorization in the presence of missing components. International Journal of Computer Vision (IJCV) 2006;72:329–337. doi: 10.1007/s11263-006-9785-5.
  37. Papandreou, G., & Maragos, P. (2008). Adaptive and constrained algorithms for inverse compositional active appearance model fitting. In IEEE conference on computer vision and pattern recognition (CVPR).
  38. Prince S, Li P, Fu Y, Mohammed U, Elder JH. Probabilistic models for inference about identity. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 2012;34:144–157. doi: 10.1109/TPAMI.2011.104.
  39. Roweis S. Em algorithms for pca and spca. Advances in Neural Information Processing Systems (NIPS) 1998;10:626–632.
  40. Sagonas, C., Tzimiropoulos, G., Zafeiriou, S., & Pantic, M. (2013a). 300 faces in-the-wild challenge: The first facial landmark localization challenge. In IEEE international conference on computer vision workshop (ICCV-W) (pp. 397–403).
  41. Sagonas, C., Tzimiropoulos, G., Zafeiriou, S., & Pantic, M. (2013b). A semi-automatic methodology for facial landmark annotation. In IEEE conference on computer vision and pattern recognition workshops (CVPRW) (pp. 896–903).
  42. Saragih J, Göcke R. Learning aam fitting through simulation. Pattern Recognition. 2009;42:2628–2636. doi: 10.1016/j.patcog.2009.04.014.
  43. Sauer, P., Cootes, T., & Taylor, C. (2011). Accurate regression procedures for active appearance models. In British machine vision conference (BMVC).
  44. Strelow, D. (2012). General and nested wiberg minimization: L2 and maximum likelihood. In European conference on computer vision (ECCV).
  45. Tipping ME, Bishop CM. Probabilistic principal component analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 1999;61:611–622. doi: 10.1111/1467-9868.00196.
  46. Tresadern, P.A., Sauer, P., & Cootes, T.F. (2010). Additive update predictors in active appearance models. In British machine vision conference (BMVC).
  47. Tzimiropoulos, G. (2015). Project-out cascaded regression with an application to face alignment. In IEEE conference on computer vision and pattern recognition (CVPR).
  48. Tzimiropoulos, G., & Pantic, M. (2013). Optimization problems for fast aam fitting in-the-wild. In IEEE international conference on computer vision (ICCV).
  49. Tzimiropoulos, G., & Pantic, M. (2014). Gauss-newton deformable part models for face alignment in-the-wild. In IEEE conference on computer vision and pattern recognition (CVPR).
  50. Tzimiropoulos, G., Alabort-i-Medina, J., Zafeiriou, S., & Pantic, M. (2012). Generic active appearance models revisited. In IEEE Asian conference on computer vision (ACCV).
  51. van der Maaten, L., & Hendriks, E. (2010). Capturing appearance variation in active appearance models. In IEEE conference on computer vision and pattern recognition workshop (CVPR-W).
  52. Vedaldi, A., & Fulkerson, B. (2010). VLFeat: An open and portable library of computer vision algorithms.
  53. Viola, P., & Jones, M. (2001). Rapid object detection using a boosted cascade of simple features. In IEEE conference on computer vision and pattern recognition (CVPR).
  54. Woodbury MA. Inverting modified matrices. Princeton: Princeton University; 1950.
  55. Xiong, X., & De la Torre, F. (2013). Supervised descent method and its applications to face alignment. In IEEE conference on computer vision and pattern recognition (CVPR).
  56. Zhu, X., & Ramanan, D. (2012). Face detection, pose estimation, and landmark localization in the wild. In Conference on computer vision and pattern recognition (CVPR).
