
The pseudocode for the L1-PCA* algorithm is given next.

Algorithm L1-PCA*
Given a data matrix $X \in \mathbb{R}^{n \times m}$ with full column rank.
1: Set $X^m = X$; set $V^{m+1} = I$; set $(I_{j^*})^{m+1} = I$. /* Initialization. */
2: for (k = m; k > 1; k = k − 1) do
3: Set $j^* = \arg\min_j R_j(X^k)$ and $\beta^k = \beta^*$ using the Algorithm for finding an $L_1$-norm best-fit subspace of dimension m − 1. /* Find the best-fitting $L_1$ subspace among the subspaces obtained by taking each variable j as the dependent variable. */
4: Set $Z^k = (X^k)((I_{j^*})^k)^T$. /* Project points into a (k − 1)-dimensional subspace. */
5: Calculate the SVD of $Z^k$, $Z^k = U \Lambda V^T$, and set $V^k$ to be equal to the k − 1 columns of $V$ corresponding to the largest values in the diagonal matrix $\Lambda$. /* Find a basis for the (k − 1)-dimensional subspace. */
6: Set $\alpha^k = \left( \prod_{\ell=m+1}^{k+1} V^\ell \right) \beta^k / \|\beta^k\|_2$. /* Calculate the kth principal component. */
7: Set $X^{k-1} = Z^k V^k$. /* Calculate the projected points in terms of the new basis. */
8: end for
9: Set $\alpha^1 = \prod_{\ell=m+1}^{2} V^\ell$. /* Calculate the first principal component. */
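
For concreteness, the following Python sketch implements the loop above; it is not taken from the paper, and the function names (lad_fit, l1_pca_star) are hypothetical. Step 3's sub-algorithm is approximated by a standard least-absolute-deviations linear program for each candidate dependent variable j, and the forms assumed here for β^k (coefficient −1 in position j*, the fitted coefficients elsewhere) and for the projection (I_{j*})^k (replace the j*-th coordinate of each point by its fitted value) are inferred from the surrounding description rather than copied from the paper's definitions.

import numpy as np
from scipy.optimize import linprog

def lad_fit(A, y):
    """Least-absolute-deviations regression of y on A (no intercept):
    minimize sum |y - A b| over b, via the standard LP with split residuals."""
    n, p = A.shape
    # Variables: [b (p, free), e_plus (n, >= 0), e_minus (n, >= 0)]
    c = np.concatenate([np.zeros(p), np.ones(n), np.ones(n)])
    A_eq = np.hstack([A, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p], res.fun      # coefficients b and the L1 residual sum R_j

def l1_pca_star(X):
    """Sketch of L1-PCA*: returns an m x m matrix whose columns are
    alpha^1, ..., alpha^m in order."""
    n, m = X.shape
    Xk = X.copy()                  # X^m = X
    V_prod = np.eye(m)             # running product V^{m+1} V^m ... (initially I)
    alphas = [None] * m
    for k in range(m, 1, -1):
        # Step 3: L1 best fit with each variable j as the dependent variable.
        best = None
        for j in range(k):
            others = np.delete(np.arange(k), j)
            b, Rj = lad_fit(Xk[:, others], Xk[:, j])
            if best is None or Rj < best[0]:
                best = (Rj, j, b)
        _, j_star, b = best
        others = np.delete(np.arange(k), j_star)
        # Assumed form of beta^k: normal direction of the fitted hyperplane.
        beta = np.zeros(k)
        beta[j_star] = -1.0
        beta[others] = b
        # Step 4: project each point onto the hyperplane along coordinate j*.
        Zk = Xk.copy()
        Zk[:, j_star] = Xk[:, others] @ b
        # Step 5: basis for the (k-1)-dimensional subspace from the SVD.
        _, _, Vt = np.linalg.svd(Zk, full_matrices=False)
        Vk = Vt[:k - 1].T          # k x (k-1), largest singular values first
        # Step 6: k-th principal component mapped back to original coordinates.
        alphas[k - 1] = V_prod @ (beta / np.linalg.norm(beta))
        # Step 7: projected points expressed in the new basis.
        Xk = Zk @ Vk
        V_prod = V_prod @ Vk       # product now runs down to V^k
    # Step 9: the remaining one-dimensional basis is the first component.
    alphas[0] = V_prod.ravel()
    return np.column_stack(alphas)

Called as l1_pca_star(X) on an n × m data array with full column rank, the sketch returns the components as columns in the original coordinate system; each pass through the outer loop solves k small linear programs, which is where the L1 criterion enters.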

Notes on L1-PCA*