The author explores black-box deflation methods for k-PCA algorithms, providing sharper bounds on approximation parameters.
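As background, here is a minimal NumPy sketch of the deflation template such black-box reductions build on, with a generic power-iteration routine standing in for the approximate 1-PCA oracle. The function names and the projection-deflation step are illustrative assumptions, not taken from the paper, which analyzes how the oracle's approximation error propagates across the k deflation steps.

```python
import numpy as np

def approx_top_eigvec(M, iters=200, seed=0):
    # Hypothetical stand-in for the black-box approximate 1-PCA oracle:
    # plain power iteration on a symmetric PSD matrix M.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(M.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return v

def deflation_kpca(M, k, one_pca=approx_top_eigvec):
    # k-to-1-PCA reduction by deflation: call the 1-PCA oracle,
    # project out the recovered direction, and repeat k times.
    M = M.copy()
    d = M.shape[0]
    vecs = []
    for _ in range(k):
        v = one_pca(M)
        vecs.append(v)
        P = np.eye(d) - np.outer(v, v)   # projector orthogonal to v
        M = P @ M @ P                    # projection deflation
    return np.stack(vecs)                # (k, d) approximate top-k directions
```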
Valid inference on the principal subspace and covariance matrix under heteroskedastic noise with missing data.
Constructing confidence regions for PCA in high dimensions with missing data and heteroskedastic noise.
The paper proposes a unified neural model, called σ-PCA, that can learn both linear and nonlinear PCA as single-layer autoencoders. The model allows nonlinear PCA to learn not only the second rotation, which maximizes statistical independence, but also the first rotation, which reduces dimensionality and orders components by variance, thereby eliminating the subspace rotational indeterminacy.
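For context, a minimal NumPy sketch (illustrative, not the paper's σ-PCA model) of the tied-weight single-layer linear autoencoder baseline: trained on reconstruction error alone, it recovers the principal subspace only up to an arbitrary rotation of the components, which is exactly the indeterminacy σ-PCA is designed to remove.

```python
import numpy as np

def linear_autoencoder_pca(X, k, lr=0.05, epochs=2000, seed=0):
    # Tied-weight single-layer linear autoencoder: encode z = W x,
    # decode x_hat = W^T z, trained on mean squared reconstruction error.
    # Assumes X is centered and roughly unit-scaled (step size is illustrative).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    C = (X.T @ X) / n                    # sample covariance of centered data
    W = 0.01 * rng.standard_normal((k, d))
    for _ in range(epochs):
        A = W.T @ W
        # Gradient of 0.5/n * ||X - X W^T W||_F^2 with respect to W.
        grad = W @ (A @ C + C @ A - 2 * C)
        W -= lr * grad
    # At the optimum the rows of W span the top-k principal subspace,
    # but only up to an arbitrary k x k rotation of the components.
    return W
```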