http://www.tulane.edu/~PsycStat/dunlap/Psyc613/RI2.html
https://groups.google.com/group/sci.math.num-analysis/msg/9d09221b632d77a3?hl=de&pli=1

The Moore-Penrose generalized inverse can be calculated using an NR subroutine that computes the Singular Value Decomposition (SVD) of a matrix. Here's how...

Suppose that the real matrix A has m rows and n columns (m >= n). Let w_1,...,w_n be the singular values of A, and let W = diag(w_1,...,w_n) be a diagonal n x n matrix. Suppose that the SVD of A is

    A = U * W * V'

where U is an m x n column-orthogonal matrix, V is an n x n orthogonal matrix, and ' denotes transpose. You can use the SVDCMP subroutine from NR (see chapter 2, section 9) -- or a comparable routine from LINPACK or some other library -- to compute the matrices U, V, and W in the SVD of A.

How do we use the results of the SVD computation to compute the Moore-Penrose generalized inverse of A? Before we can attempt to compute it, we must adjust W to take into account the precision of the machine on which the computations were performed. Let epsilon denote the machine precision, and let w_max denote the largest singular value of A. Any singular value of A which is less than m * epsilon * w_max should probably (i.e., this is only a rule of thumb) be set to zero; otherwise, singular values which are "too small" (and may be by-products of roundoff error) will overwhelm the more accurately computed ones in the computation of the generalized inverse. Let W_0 denote the matrix obtained from W by making these adjustments.

Let W_0^+ denote the matrix obtained from W_0 by replacing each *non-zero* diagonal element with its reciprocal. Then the Moore-Penrose generalized inverse of A is the n x m matrix

    A^+ = V * W_0^+ * U'

If, in the precision of the machine, A has full rank (i.e., if all the diagonal entries of W_0 are non-zero), then A^+ is in fact a left-inverse for A.
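A minimal sketch of the recipe above in Python, assuming numpy is available (numpy.linalg.svd plays the role of SVDCMP here and, with full_matrices=False, returns exactly the U, w, V' of the NR form):

```python
import numpy as np

def pinv_via_svd(A):
    """Moore-Penrose generalized inverse via the SVD, zeroing any
    singular value below m * epsilon * w_max (the rule of thumb above)."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    # full_matrices=False yields the NR form: U is m x n
    # column-orthogonal, w holds w_1,...,w_n, Vt is V'.
    U, w, Vt = np.linalg.svd(A, full_matrices=False)
    eps = np.finfo(float).eps
    cutoff = m * eps * w.max()
    # W_0^+: reciprocal of each singular value above the cutoff,
    # zero otherwise.
    w_plus = np.array([1.0 / wi if wi > cutoff else 0.0 for wi in w])
    # A^+ = V * W_0^+ * U'  (w_plus scales the rows of U')
    return Vt.T @ (w_plus[:, None] * U.T)
```

For a full-rank A this gives the left-inverse property mentioned above: pinv_via_svd(A) @ A is the n x n identity (to machine precision).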
NOTES:

(1) I do not claim that this is the *preferred* method for computing the Moore-Penrose inverse, but offer it as a way of computing it using routines provided in Numerical Recipes.

(2) The Moore-Penrose inverse is used to compute the optimal solution to the linear system A*x = b when m >= n; i.e., to compute the solution with minimal Euclidean length if a solution exists, or the solution in the least-squares sense if no ordinary solution exists.

(3) Due to numerical considerations, I believe that it is generally better to compute the solution of A*x = b directly from

    x = (V * (W_0^+ * (U'*b)))

rather than to compute the Moore-Penrose inverse separately and then calculate x = (A^+)*b.

(4) Three out of four books surveyed (by me) gave a slightly different form of the SVD than the one described in "Numerical Recipes". The majority of books define the SVD of an m x n matrix A to be

    A = P D Q'

where P is an m x m orthogonal matrix (i.e., square, and not merely column-orthogonal), D is an m x n (i.e., *not* square) diagonal matrix whose diagonal elements are the singular values of A, and Q is an n x n orthogonal matrix. The second form seems to be much more convenient for theoretical calculations (e.g., symbolic manipulations with equations involving matrices), but adds NO NEW INFORMATION about the matrix A. Indeed, Q is identical to V, D is obtained from W by padding W with m - n rows of zeros, and P is obtained from U by augmenting the columns of U via the Gram-Schmidt process to form an orthonormal basis of m-dimensional space. None of these changes in any way involve the matrix A. Hence, the "Numerical Recipes" form of the SVD is better suited for numerical computation, since it does not waste space storing or time computing the "superfluous" parts of P and D.
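The direct solve from note (3) can be sketched as follows, again assuming numpy; it applies U', then W_0^+, then V to the vector b, and never forms A^+:

```python
import numpy as np

def svd_solve(A, b):
    """Solve A*x = b in the least-squares / minimal-norm sense via
    x = V * (W_0^+ * (U'*b)), without forming A^+ explicitly."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    m, n = A.shape
    U, w, Vt = np.linalg.svd(A, full_matrices=False)
    cutoff = m * np.finfo(float).eps * w.max()
    c = U.T @ b                                    # U' * b
    c = np.array([ci / wi if wi > cutoff else 0.0  # apply W_0^+
                  for ci, wi in zip(c, w)])
    return Vt.T @ c                                # V * (...)
```

Working with vectors throughout costs O(m*n) per right-hand side after the decomposition, versus the extra matrix-matrix products needed to build A^+ first.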
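The relationship between the two SVD forms in note (4) can be checked numerically; in numpy (an assumption, as above) the full_matrices flag toggles between the textbook P D Q' form and the NR-style U W V' form:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])      # m = 3, n = 2 (m >= n)

# "Textbook" form A = P D Q': P is 3 x 3, D is 3 x 2.
P, d, Qt = np.linalg.svd(A, full_matrices=True)

# NR / economy form A = U W V': U is 3 x 2, W is 2 x 2.
U, w, Vt = np.linalg.svd(A, full_matrices=False)

# D is just W padded with m - n = 1 row of zeros; both
# factorizations reconstruct the same A.
D = np.vstack([np.diag(d), np.zeros((1, 2))])
```

Both calls return the same singular values; the full form merely carries the extra orthonormal column of P and the zero row of D.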