statsmodels.multivariate.pca.PCA
- class statsmodels.multivariate.pca.PCA(data, ncomp=None, standardize=True, demean=True, normalize=True, gls=False, weights=None, method='svd', missing=None, tol=5e-08, max_iter=1000, tol_em=5e-08, max_em_iter=100, svd_full_matrices=False)
Principal Component Analysis
- Parameters:
- data : array_like
Variables in columns, observations in rows.
- ncomp : int, optional
Number of components to return. If None, returns as many as the smaller of the number of rows or columns in data.
- standardize : bool, optional
Flag indicating to use standardized data with mean 0 and unit variance. standardize being True implies demean. Using standardized data is equivalent to computing principal components from the correlation matrix of data.
- demean : bool, optional
Flag indicating whether to demean data before computing principal components. demean is ignored if standardize is True. Demeaning data but not standardizing is equivalent to computing principal components from the covariance matrix of data.
- normalize : bool, optional
Indicates whether to normalize the factors to have unit inner product. If False, the loadings will have unit inner product.
- gls : bool, optional
Flag indicating to implement a two-step GLS estimator where in the first step principal components are used to estimate residuals, and then the inverse residual variance is used as a set of weights to estimate the final principal components. Setting gls to True requires ncomp to be less than the minimum of the number of rows or columns (see the example following this parameter list).
- weights : ndarray, optional
Series weights to use after transforming data according to standardize or demean when computing the principal components.
- method : str, optional
Sets the linear algebra routine used to compute eigenvectors:
‘svd’ uses a singular value decomposition (default).
‘eig’ uses an eigenvalue decomposition of a quadratic form.
‘nipals’ uses the NIPALS algorithm and can be faster than SVD when ncomp is small and nvars is large. See notes about additional changes when using NIPALS.
- missing : {str, None}
Method for missing data. Choices are:
‘drop-row’ - drop rows with missing values.
‘drop-col’ - drop columns with missing values.
‘drop-min’ - drop either rows or columns, choosing by data retention.
‘fill-em’ - use EM algorithm to fill missing values. ncomp should be set to the number of factors required.
None - raises if data contains NaN values.
- tol : float, optional
Tolerance to use when checking for convergence when using NIPALS.
- max_iter : int, optional
Maximum iterations when using NIPALS.
- tol_em : float
Tolerance to use when checking for convergence of the EM algorithm.
- max_em_iter : int
Maximum iterations for the EM algorithm.
- svd_full_matrices : bool, optional
If the ‘svd’ method is selected, this flag sets the parameter ‘full_matrices’ in the singular value decomposition. It is set to False by default.
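As an illustration of the missing-data and GLS options above, the following sketch (not part of the original docstring; the random data, the roughly 5% missing rate, and ncomp=2 are arbitrary choices) fills missing values with the EM algorithm and fits a GLS-weighted PCA:

>>> import numpy as np
>>> from statsmodels.multivariate.pca import PCA
>>> rs = np.random.RandomState(0)
>>> y = rs.standard_normal((200, 10))
>>> y[rs.random_sample(y.shape) < 0.05] = np.nan  # introduce ~5% missing values
>>> pc_em = PCA(y, ncomp=2, missing='fill-em')    # EM algorithm fills the NaNs
>>> pc_gls = PCA(rs.standard_normal((200, 10)), ncomp=2, gls=True)  # two-step GLS weights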
Notes
The default options perform principal component analysis on the demeaned, unit variance version of data. Setting standardize to False will instead only demean, and setting both standardize and demean to False will not alter the data.
Once the data have been transformed, the following relationships hold when the number of components (ncomp) is the same as the minimum of the number of observations or the number of variables:

\[X = FV'\]

where X is the data, F is the array of principal components (factors or scores), V is the array of eigenvectors (loadings) and V’ is the array of factor coefficients (coeff).

When weights are provided, the principal components are computed from the modified data

\[\Omega^{-1/2} X\]

where \(\Omega\) is a diagonal matrix composed of the weights. For example, when using the GLS version of PCA, the elements of \(\Omega\) will be the inverse of the variances of the residuals from

\[X - FV'\]

where the number of factors is less than the rank of X.
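The relationship X = FV' can be checked numerically. The following doctest-style sketch is added here for illustration (it is not part of the original docstring) and assumes the project method accepts the transform and unweight flags listed under Methods below:

>>> import numpy as np
>>> from statsmodels.multivariate.pca import PCA
>>> z = np.random.standard_normal((100, 5))
>>> pc = PCA(z)  # ncomp defaults to min(nobs, nvar) = 5
>>> fv = pc.project(transform=False, unweight=False)  # F V' in the transformed space
>>> np.allclose(fv, pc.transformed_data)
True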
Examples
Basic PCA using the correlation matrix of the data
>>> import numpy as np
>>> from statsmodels.multivariate.pca import PCA
>>> x = np.random.randn(100)[:, None]
>>> x = x + np.random.randn(100, 100)
>>> pc = PCA(x)

Note that the principal components are computed using a SVD and so the correlation matrix is never constructed, unless method=’eig’.
PCA using the covariance matrix of the data
>>> pc = PCA(x, standardize=False)

Limiting the number of factors returned to 1 computed using NIPALS
>>> pc = PCA(x, ncomp=1, method='nipals')
>>> pc.factors.shape
(100, 1)

- Attributes:
- factors : array or DataFrame
nobs by ncomp array of principal components (scores)
- scores : array or DataFrame
nobs by ncomp array of principal components - identical to factors
- loadings : array or DataFrame
ncomp by nvar array of principal component loadings for constructing the factors
- coeff : array or DataFrame
nvar by ncomp array of principal component loadings for constructing the projections
- projection : array or DataFrame
nobs by nvar array containing the projection of the data onto the ncomp estimated factors
- rsquare : array or Series
ncomp array where the element in the ith position is the R-square of including the first i principal components. Note: values are calculated on the transformed data, not the original data
- ic : array or DataFrame
ncomp by 3 array containing the Bai and Ng (2003) Information criteria. Each column is a different criterion, and each row represents the number of included factors.
- eigenvals : array or Series
nvar array of eigenvalues
- eigenvecs : array or DataFrame
nvar by nvar array of eigenvectors
- weights : ndarray
nvar array of weights used to compute the principal components, normalized to unit length
- transformed_data : ndarray
Standardized, demeaned and weighted data used to compute principal components and related quantities
- cols : ndarray
Array of indices indicating columns used in the PCA
- rows : ndarray
Array of indices indicating rows used in the PCA
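As a brief, illustrative sketch (not from the original docstring; the random data and ncomp=3 are arbitrary), the attributes above can be inspected directly after fitting:

>>> import numpy as np
>>> from statsmodels.multivariate.pca import PCA
>>> w = np.random.standard_normal((100, 10))
>>> pc = PCA(w, ncomp=3)
>>> pc.factors.shape            # nobs by ncomp array of scores
(100, 3)
>>> pc.transformed_data.shape   # standardized, demeaned data used in the fit
(100, 10)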
Methods
plot_rsquare([ncomp, ax])
Box plots of the individual series R-square against the number of PCs.

plot_scree([ncomp, log_scale, cumulative, ax])
Plot of the ordered eigenvalues.

project([ncomp, transform, unweight])
Project series onto a specific number of factors.
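A short usage sketch for these methods (illustrative only and not part of the original docstring; the plotting calls require matplotlib and the random data and ncomp=3 are arbitrary):

>>> import numpy as np
>>> from statsmodels.multivariate.pca import PCA
>>> w = np.random.standard_normal((100, 10))
>>> pc = PCA(w, ncomp=3)
>>> fig = pc.plot_scree()           # plot of the ordered eigenvalues
>>> fig = pc.plot_rsquare(ncomp=3)  # box plots of per-series R-square
>>> approx = pc.project(ncomp=2)    # projection onto the first two factors
>>> approx.shape
(100, 10)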