# Principal component analysis

## Laplacian Regularized Low-Rank Representation and Its Applications

We propose a general Laplacian regularized low-rank representation (LRR) framework for data representation, into which a hypergraph Laplacian regularizer can be readily introduced, yielding a Non-negative Sparse Hyper-Laplacian regularized LRR model (NSHLRR).
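The core idea is to augment the LRR objective with a Laplacian smoothness term on the representation matrix. As a minimal sketch (assuming the ordinary graph-Laplacian form of the penalty, not the paper's hypergraph variant; function names are illustrative), the regularizer tr(Z L Zᵀ) can be computed from a kNN affinity graph:

```python
import numpy as np

def knn_laplacian(X, k=5, sigma=1.0):
    # unnormalized graph Laplacian L = D - W from a kNN Gaussian affinity
    n = len(X)
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D2[i])[1:k + 1]  # skip the point itself
        W[i, idx] = np.exp(-D2[i, idx] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)  # symmetrize
    return np.diag(W.sum(1)) - W

def laplacian_penalty(Z, L):
    # tr(Z L Z^T) = 0.5 * sum_ij W_ij ||z_i - z_j||^2: penalizes
    # representation columns that differ across graph edges
    return np.trace(Z @ L @ Z.T)
```

Because L is positive semidefinite, the penalty is non-negative and vanishes exactly when all representation columns agree, which is what drives locally smooth representations.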

## Integrated Low Rank Based Discriminative Feature Learning for Recognition

We present a supervised low-rank-based approach for learning discriminative features.

## A Regularized Approach for Geodesic Based Semi-Supervised Multi-Manifold Learning

We regard the geodesic distance as a kind of kernel that maps data from a linearly inseparable space to a linearly separable distance space. Building on this, we propose a new semi-supervised manifold learning algorithm, the regularized geodesic feature learning algorithm.
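The usual way to approximate geodesic distances on a sampled manifold is to build a kNN graph weighted by Euclidean distances and run shortest paths over it. A minimal sketch under that assumption (the paper's own construction may differ in details such as graph weighting):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import pdist, squareform

def geodesic_distances(X, k=5):
    # pairwise Euclidean distances between all samples
    D = squareform(pdist(X))
    n = D.shape[0]
    # keep edges only to the k nearest neighbours of each point
    G = np.zeros_like(D)
    for i in range(n):
        idx = np.argsort(D[i])[:k + 1]  # includes the point itself
        G[i, idx] = D[i, idx]
    G = np.maximum(G, G.T)  # symmetrize; zeros mean "no edge"
    # geodesic distance = shortest path through the neighbourhood graph
    return shortest_path(G, method='D', directed=False)
```

Each row of the resulting matrix can then serve as a feature vector in the "distance space" the abstract refers to: points far apart along the manifold stay far apart even when their straight-line distance is small.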

## L1-Norm Kernel Discriminant Analysis Via Bayes Error Bound Optimization for Robust Feature Extraction

Based on an L1-norm discriminant criterion, we propose a new linear discriminant analysis method (L1-LDA) for the linear feature extraction problem.
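For context, the classical (L2-norm) Fisher criterion that L1-LDA robustifies has a closed form in the two-class case: the projection direction is w = Sw⁻¹(m₁ − m₂). The sketch below implements only this L2 baseline, not the paper's L1 criterion, which requires iterative optimization; names are illustrative:

```python
import numpy as np

def fisher_direction(X1, X2, reg=1e-6):
    # classical two-class Fisher discriminant: w maximizes
    # |w^T (m1 - m2)|^2 / (w^T Sw w), solved by w = Sw^{-1} (m1 - m2)
    m1, m2 = X1.mean(0), X2.mean(0)
    # pooled within-class scatter matrix
    Sw = np.cov(X1.T, bias=True) * len(X1) + np.cov(X2.T, bias=True) * len(X2)
    Sw += reg * np.eye(Sw.shape[0])  # ridge term for numerical stability
    w = np.linalg.solve(Sw, m1 - m2)
    return w / np.linalg.norm(w)
```

The squared terms in this criterion are what make it sensitive to outliers; replacing them with absolute values, as the paper does, bounds the influence of any single corrupted sample.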

## Fast Algorithms for Recovering a Corrupted Low-Rank Matrix

This paper studies algorithms for recovering a low-rank matrix with a fraction of its entries arbitrarily corrupted. It develops and compares two complementary convex-programming approaches: the first is an accelerated proximal gradient algorithm applied directly to the primal problem, while the second is a gradient algorithm applied to the dual problem.
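The underlying convex program splits the observed matrix D into a low-rank part A and a sparse error part E by minimizing a nuclear-norm plus ℓ1 objective. The sketch below uses a simpler alternating proximal scheme rather than the paper's accelerated primal or dual solvers, and the relaxed objective ½‖D − A − E‖²_F + μ(‖A‖_* + λ‖E‖₁) is an assumption on my part; each block update is the exact proximal map, so the objective decreases monotonically:

```python
import numpy as np

def svt(M, tau):
    # singular value thresholding: proximal map of tau * nuclear norm
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(M, tau):
    # soft thresholding: proximal map of tau * l1 norm
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca_prox(D, lam=None, mu=0.1, n_iter=300):
    # alternating exact minimization of
    # 0.5*||D - A - E||_F^2 + mu*(||A||_* + lam*||E||_1)
    if lam is None:
        lam = 1.0 / np.sqrt(max(D.shape))  # standard RPCA weighting
    A = np.zeros_like(D)
    E = np.zeros_like(D)
    for _ in range(n_iter):
        A = svt(D - E, mu)          # low-rank update
        E = shrink(D - A, mu * lam)  # sparse-error update
    return A, E
```

With a small μ the fixed point reproduces D up to a residual whose singular values are all at most μ, so A + E stays close to D while A remains low rank and E absorbs the gross corruptions.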

## Laplacian PCA and Its Applications

We develop the Laplacian PCA (LPCA) algorithm, which extends PCA to a more general form by locally optimizing the weighted scatter. While retaining the simplicity of PCA, LPCA brings two benefits: strong robustness against noise and weak metric dependence on sample spaces.
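As a reference point for what LPCA generalizes, standard PCA optimizes the global (unweighted) scatter, which reduces to an SVD of the centered data matrix. A minimal sketch of that baseline (the local weighting that defines LPCA is not shown here):

```python
import numpy as np

def pca(X, d):
    # standard PCA: project centered data onto the top-d right singular
    # vectors, i.e. the directions of maximal global scatter
    Xc = X - X.mean(0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:d]            # principal directions (d x features)
    scores = Xc @ components.T     # low-dimensional coordinates
    return scores, components
```

LPCA replaces this single global scatter with locally weighted scatters, which is what gives it the robustness to noise claimed in the abstract while keeping the same eigen-decomposition machinery.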