Algorithm 1: PCA steps
1: Ignore the class labels of the dataset (which consists of d-dimensional samples).
2: Calculate the d-dimensional mean vector: the mean of every dimension over the whole dataset. The mean vector is computed by the following equation:
$m = \frac{1}{n}\sum_{k=1}^{n} x_k$ (1)
3: Calculate the scatter matrix (or the covariance matrix) of the dataset. The scatter matrix is computed by the following equation:
$S = \sum_{k=1}^{n}(x_k - m)(x_k - m)^{T}$ (2)
4: Calculate the eigenvectors and corresponding eigenvalues of the covariance matrix.
5: Sort the eigenvalues in decreasing order and pick the k eigenvectors with the largest eigenvalues to form a d × k matrix W of eigenvectors.
6: Use the eigenvector matrix W to transform the samples (the original matrix) into the new subspace via the equation:
$y = W^{T} x$ (3)
where x is a d × 1-dimensional vector representing one sample and y is the transformed k × 1-dimensional sample in the new subspace.
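As a concrete illustration of Algorithm 1, the following is a minimal NumPy sketch of the six steps; the function name, variable names, and the random example data are illustrative and not part of the paper. Note that the eigenvectors of the scatter matrix of Equation (2) are the same as those of the covariance matrix, since the two differ only by the scale factor 1/(n − 1).

```python
import numpy as np

def pca(X, k):
    """Project n samples of dimension d (rows of X) onto the top-k
    principal components, following the steps of Algorithm 1."""
    # Step 2: d-dimensional mean vector, Equation (1)
    m = X.mean(axis=0)
    # Step 3: scatter matrix S = sum_k (x_k - m)(x_k - m)^T, Equation (2)
    centered = X - m
    S = centered.T @ centered
    # Step 4: eigenvalues and eigenvectors of the symmetric matrix S
    eigvals, eigvecs = np.linalg.eigh(S)
    # Step 5: sort eigenvalues in decreasing order and keep the
    # k eigenvectors with the largest eigenvalues (d x k matrix W)
    order = np.argsort(eigvals)[::-1]
    W = eigvecs[:, order[:k]]
    # Step 6: transform each sample via y = W^T x, Equation (3);
    # many implementations project the mean-centered samples instead
    return X @ W

# Illustrative usage on random data (shapes only; not from the paper)
X = np.random.default_rng(0).normal(size=(100, 5))  # n=100 samples, d=5
Y = pca(X, k=2)
print(Y.shape)  # (100, 2): each row is a transformed k-dimensional sample
```

Each row of the returned matrix is the k × 1-dimensional sample y of Equation (3) laid out as a row, so projecting all n samples at once amounts to the single matrix product XW.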