Relationship between SVD and PCA. How to use SVD to perform PCA?

  • Principal component analysis (PCA) is usually explained via an eigen-decomposition of the covariance matrix. However, it can also be performed via singular value decomposition (SVD) of the data matrix $\mathbf X$. How does it work? What is the connection between these two approaches? What is the relationship between SVD and PCA?

    Or in other words, how to use SVD of the data matrix to perform dimensionality reduction?

    I wrote this FAQ-style question together with my own answer because it is frequently asked in various forms, but there is no canonical thread, so closing duplicates is difficult. Please leave meta comments in the accompanying meta thread.

    In addition to amoeba's excellent and detailed answer with its further links, I might recommend checking this, where PCA is considered side by side with some other SVD-based techniques. The algebra presented there is almost identical to amoeba's, with the minor difference that, in describing PCA, it uses the SVD of $\mathbf X/\sqrt{n}$ [or $\mathbf X/\sqrt{n-1}$] instead of $\bf X$, which is simply convenient because it relates directly to PCA done via the eigendecomposition of the covariance matrix.

    PCA is a special case of SVD. PCA needs the data to be normalized, ideally in the same units. The matrix is n x n in PCA.

    @OrvarKorvar: What `n x n` matrix are you talking about?

    @Cbhihe the `n x n` matrix is the covariance matrix of the data matrix `X`

  • amoeba (accepted answer)

    Let the data matrix $\mathbf X$ be of $n \times p$ size, where $n$ is the number of samples and $p$ is the number of variables. Let us assume that it is centered, i.e. column means have been subtracted and are now equal to zero.

    Then the $p \times p$ covariance matrix $\mathbf C$ is given by $\mathbf C = \mathbf X^\top \mathbf X/(n-1)$. It is a symmetric matrix and so it can be diagonalized: $$\mathbf C = \mathbf V \mathbf L \mathbf V^\top,$$ where $\mathbf V$ is a matrix of eigenvectors (each column is an eigenvector) and $\mathbf L$ is a diagonal matrix with eigenvalues $\lambda_i$ in decreasing order on the diagonal. The eigenvectors are called principal axes or principal directions of the data. Projections of the data onto the principal axes are called principal components, also known as PC scores; these can be seen as new, transformed variables. The $j$-th principal component is given by the $j$-th column of $\mathbf {XV}$. The coordinates of the $i$-th data point in the new PC space are given by the $i$-th row of $\mathbf{XV}$.

    If we now perform singular value decomposition of $\mathbf X$, we obtain a decomposition $$\mathbf X = \mathbf U \mathbf S \mathbf V^\top,$$ where $\mathbf U$ is a unitary matrix and $\mathbf S$ is the diagonal matrix of singular values $s_i$. From here one can easily see that $$\mathbf C = \mathbf V \mathbf S \mathbf U^\top \mathbf U \mathbf S \mathbf V^\top /(n-1) = \mathbf V \frac{\mathbf S^2}{n-1}\mathbf V^\top,$$ meaning that right singular vectors $\mathbf V$ are principal directions and that singular values are related to the eigenvalues of covariance matrix via $\lambda_i = s_i^2/(n-1)$. Principal components are given by $\mathbf X \mathbf V = \mathbf U \mathbf S \mathbf V^\top \mathbf V = \mathbf U \mathbf S$.
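    The algebra above is easy to check numerically. Here is a minimal NumPy sketch (the random test data and variable names are my own, purely for illustration); it verifies that the right singular vectors match the covariance eigenvectors (up to sign), that $\lambda_i = s_i^2/(n-1)$, and that $\mathbf{XV} = \mathbf{US}$:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    X = X - X.mean(axis=0)                    # center the columns

    # PCA via eigendecomposition of the covariance matrix
    C = X.T @ X / (X.shape[0] - 1)
    evals, V_eig = np.linalg.eigh(C)          # eigh returns eigenvalues in increasing order
    evals, V_eig = evals[::-1], V_eig[:, ::-1]

    # PCA via SVD of the centered data matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)

    print(np.allclose(evals, s**2 / (X.shape[0] - 1)))  # lambda_i = s_i^2 / (n-1)
    print(np.allclose(np.abs(Vt), np.abs(V_eig.T)))     # same principal axes, up to sign
    print(np.allclose(X @ Vt.T, U * s))                 # scores: XV = US
    ```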

    To summarize:

    1. If $\mathbf X = \mathbf U \mathbf S \mathbf V^\top$, then columns of $\mathbf V$ are principal directions/axes.
    2. Columns of $\mathbf {US}$ are principal components ("scores").
    3. Singular values are related to the eigenvalues of covariance matrix via $\lambda_i = s_i^2/(n-1)$. Eigenvalues $\lambda_i$ show variances of the respective PCs.
    4. Standardized scores are given by columns of $\sqrt{n-1}\mathbf U$ and loadings are given by columns of $\mathbf V \mathbf S/\sqrt{n-1}$. See e.g. here and here for why "loadings" should not be confused with principal directions.
    5. The above is correct only if $\mathbf X$ is centered. Only then is covariance matrix equal to $\mathbf X^\top \mathbf X/(n-1)$.
    6. The above is correct only for $\mathbf X$ having samples in rows and variables in columns. If variables are in rows and samples in columns, then $\mathbf U$ and $\mathbf V$ exchange interpretations.
    7. If one wants to perform PCA on a correlation matrix (instead of a covariance matrix), then columns of $\mathbf X$ should not only be centered, but standardized as well, i.e. divided by their standard deviations.
    8. To reduce the dimensionality of the data from $p$ to $k<p$, select the first $k$ columns of $\mathbf U$, and the $k\times k$ upper-left part of $\mathbf S$. Their product $\mathbf U_k \mathbf S_k$ is the required $n \times k$ matrix containing the first $k$ PCs (see the numerical sketch after this list).
    9. Further multiplying the first $k$ PCs by the corresponding principal axes $\mathbf V_k^\top$ yields $\mathbf X_k = \mathbf U_k^\vphantom \top \mathbf S_k^\vphantom \top \mathbf V_k^\top$ matrix that has the original $n \times p$ size but is of lower rank (of rank $k$). This matrix $\mathbf X_k$ provides a reconstruction of the original data from the first $k$ PCs. It has the lowest possible reconstruction error, see my answer here.
    10. Strictly speaking, $\mathbf U$ is of $n\times n$ size and $\mathbf V$ is of $p \times p$ size. However, if $n>p$ then the last $n-p$ columns of $\mathbf U$ are arbitrary (and the corresponding rows of $\mathbf S$ are constant zero); one should therefore use an economy size (or thin) SVD that returns $\mathbf U$ of $n\times p$ size, dropping the useless columns. For large $n\gg p$ the matrix $\mathbf U$ would otherwise be unnecessarily huge. The same applies to the opposite situation of $n\ll p$.
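    As a numerical sketch of points 8-10 (again with made-up data and arbitrary names; `full_matrices=False` requests the economy-size SVD in NumPy):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, p, k = 100, 10, 3
    X = rng.normal(size=(n, p))
    X = X - X.mean(axis=0)                  # PCA assumes centered columns (point 5)

    # economy-size SVD (point 10): U is n x p instead of n x n
    U, s, Vt = np.linalg.svd(X, full_matrices=False)

    # point 8: first k principal components, an n x k matrix
    scores_k = U[:, :k] * s[:k]

    # point 9: rank-k reconstruction of the data, an n x p matrix
    X_k = scores_k @ Vt[:k, :]
    print(np.linalg.matrix_rank(X_k))       # k
    print(np.linalg.norm(X - X_k))          # reconstruction error
    ```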

    Further links

    Rotating PCA animation

    +1 for both Q&A. Thanks for sharing. I have one question: why do you have to assume that the data matrix is centered initially?

    After reading your summary point #5, I restate my question: why is the covariance matrix equal to this quantity only if the data matrix is centered initially?

    @Antoine, the covariance matrix is by definition equal to $\langle (\mathbf x_i - \bar{\mathbf x})(\mathbf x_i - \bar{\mathbf x})^\top \rangle$, where angle brackets denote the average value. If all $\mathbf x_i$ are stacked as rows in one matrix $\mathbf X$, then this expression is equal to $(\mathbf X - \bar{\mathbf X})^\top(\mathbf X - \bar{\mathbf X})/(n-1)$. If $\mathbf X$ is centered, then it simplifies to $\mathbf X^\top \mathbf X/(n-1)$. Think of variance; it is equal to $\langle (x_i-\bar x)^2 \rangle$. But if $\bar x=0$ (i.e. the data are centered), then it is simply the average value of $x_i^2$.
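    A quick NumPy check of that identity, with arbitrary data (`np.cov` with `rowvar=False` treats columns as variables and uses the same $1/(n-1)$ convention):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(50, 4))
    Xc = X - X.mean(axis=0)                         # center the columns
    print(np.allclose(np.cov(X, rowvar=False),      # built-in sample covariance
                      Xc.T @ Xc / (X.shape[0] - 1)))
    ```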

    @amoeba Thanks a lot for your prompt and helpful answer. Regarding your summary point #4: Could you please elaborate on where the $\sqrt{n-1} U$ comes from when we standardize the scores? To me, the PCs, or scores, are defined as either the columns of $XV_{(n,p)}$ (PCA approach) or the columns of $US_{(n,p)}$ (SVD approach), that's it.

    @Antoine, yes, PC scores are given by the columns of $XV=US$. But if you want to standardize them, you want each column to have variance 1. In $US$, matrix $S$ is diagonal and so simply scales the columns of $U$ by various factors. To standardize, we don't care about scaling, so we can look at $U$ alone. But columns of $U$ have unit norm (because $U$ is an orthogonal matrix). We don't want unit norm, we want unit variance, and variance is given by the sum of squares divided by $(n-1)$. So the variance of columns of $U$ is $1/(n-1)$. To make it equal to 1, take $\sqrt{n-1}U$ instead.
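    A quick way to convince yourself of that last step (a sketch with arbitrary data):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 4))
    X = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)

    scores_std = np.sqrt(X.shape[0] - 1) * U
    print(scores_std.var(axis=0, ddof=1))   # each column has variance ~1
    ```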

    @amoeba for those less familiar with linear algebra and matrix operations, it might be nice to mention that $(\mathbf{ABC})^{\top}=\mathbf C^{\top}\mathbf B^{\top}\mathbf A^{\top}$ and that $\mathbf U^{\top}\mathbf U=\mathbf I$ because $\mathbf U$ is orthogonal

    Hi @amoeba, great answer here. Super helpful. Question on your point 9: what are the applications/uses/purposes of this lower-rank reconstructed data? Thanks.

    @WillZhang: Thanks, I am glad it's helpful. What are the applications/uses of low-rank data reconstruction? For example, denoising. If your data are pictures, you can try to de-noise them by discarding low-variance PCs. You can search this website for `PCA reconstruct` and you will see that this comes up a lot.

    @amoeba Makes perfect sense, thank you for explaining.

    @amoeba where can I find a book to learn SVD, PCA, and other linear-algebra/statistics algorithms with detailed and simple explanations? Are there any workbooks or courses with exercises?

    @amoeba, I really love the way you explain things. Can you show me a few lines of proof of why projecting the data onto the eigenvectors maximizes the variance? Or give me a related link. Thanks!

    @hxd1011: In fact, I have posted an answer to a very similar question just today, please see here: http://stats.stackexchange.com/questions/217995.

    Thanks so much for this post. Other than "left singular vectors", what is the conceptual term/meaning of $U$? I see from point 4 that the columns of $\sqrt{n-1}U$ are the standardized scores, and the columns of $US$ are the (non-standardized) scores (point 2). So is it correct to say that the columns of $U$ are the samples in the new coordinate system, just as the scores are the samples in the new coordinate system? And would we then say that the difference between $U$ and $US$ is simply one of scaling the coordinate system?

    Yes, I think that's correct, @BryanHanson. One can say that columns of $U$ are scores that are normalized to have sum of squares equal to one.

    @amoeba why is it $n-1$ in the denominator? Should it be just $n$?

    @amoeba yes, but why use it? Also, is it possible to use the same denominator for $S$? The problem is that I see formulas where $\lambda_i = s_i^2$ and am trying to understand how to use them.

    @amoeba if in the matrix $X$ the rows are the features and the columns are the samples, then the covariance matrix is $\mathbf X \mathbf X^\top/(n-1)$, right? And in this case, the mean centering is done across the rows.

    @sera Yes, that's correct.

    @amoeba thank you. One more question: if in the matrix $X$ the rows are the features and the columns are the samples, could you tell me how to perform PCA using SVD? Based on PCA, the transformed data would be `np.dot(Vtransposed, X)`; how can I connect this to the SVD decomposition?

    @sera Just transpose your matrix and get rid of your problem. You will only get confused otherwise.

    @amoeba thank you. Eventually I managed to find the relationship in this case as well: $\mathbf V_{\text{pca}}^\top \mathbf X = \mathbf S \mathbf V_{\text{svd}}^\top$, and the PCA eigenvectors $\mathbf V$ are in the $\mathbf U$ matrix of the SVD in this case.
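    For reference, a short sketch of this transposed layout (features in rows, samples in columns; the data and variable names are made up): the principal directions are now the columns of $\mathbf U$, and the scores are $\mathbf U^\top \mathbf X = \mathbf S \mathbf V^\top$.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    p, n = 5, 100
    X = rng.normal(size=(p, n))
    X = X - X.mean(axis=1, keepdims=True)   # center each feature (row)

    U, s, Vt = np.linalg.svd(X, full_matrices=False)

    # principal directions are the columns of U; project the data onto them
    scores = U.T @ X
    print(np.allclose(scores, s[:, None] * Vt))   # U^T X = S V^T
    ```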

    Thanks for the answer. I've browsed many websites and your answer is the clearest one. The `prcomp()` function produces a result called `rotation`. Is this `rotation` the matrix of eigenvectors, $V$?

    Great answer, thanks. Regarding summary point #10, I think the last $n-p$ columns are not generally zero in SVD. Are you talking about a special case?

    What a fantastic answer, thanks a lot!
