We propose to use a family of nonconvex surrogates of the L0-norm on the singular values of a matrix to approximate the rank function.
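The abstract does not fix a particular surrogate; as one hedged illustration, the log-sum penalty applied to the singular values is a common nonconvex surrogate that approximates rank more tightly than the convex nuclear norm. The function name `rank_surrogate` and the choice of penalty are illustrative, not the paper's.

```python
import numpy as np

def rank_surrogate(M, eps=1e-2):
    """Illustrative nonconvex surrogate of rank(M) = ||sigma||_0:
    the log-sum penalty sum_i log(1 + sigma_i / eps) on the singular
    values. Unlike the nuclear norm (sum_i sigma_i), it grows slowly
    in the magnitude of large singular values, so it tracks the count
    of nonzero singular values more closely.
    """
    sigma = np.linalg.svd(M, compute_uv=False)
    return float(np.sum(np.log1p(sigma / eps)))

# An exact rank-1 matrix built from an outer product.
u = np.arange(1.0, 6.0).reshape(5, 1)
M = u @ u.T
print(np.linalg.matrix_rank(M), rank_surrogate(M))
```

Note that scaling the matrix inflates the nuclear norm proportionally but changes the log-sum value only logarithmically, which is the sense in which such surrogates better mimic the scale-invariant rank function.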
This paper aims to construct a good graph for discovering the intrinsic data structure under a semi-supervised learning setting.
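The abstract does not specify the graph construction; a minimal sketch of the generic pipeline it sits in is a k-nearest-neighbor affinity graph followed by label propagation in the style of Zhou et al.'s "local and global consistency" (an assumption here, not necessarily this paper's method; `knn_graph` and `propagate_labels` are illustrative names).

```python
import numpy as np

def knn_graph(X, k=3, sigma=1.0):
    """Symmetric k-NN affinity graph with Gaussian edge weights."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Keep only each point's k nearest neighbors, then symmetrize.
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]
    M = np.zeros_like(W)
    M[np.arange(len(X))[:, None], idx] = 1.0
    W *= np.maximum(M, M.T)
    return W

def propagate_labels(W, Y, alpha=0.9):
    """Closed-form label propagation F = (I - alpha*S)^{-1} Y with
    S = D^{-1/2} W D^{-1/2}; rows of Y are one-hot labels, all-zero
    for unlabeled points."""
    d = W.sum(1)
    d[d == 0] = 1.0
    S = W / np.sqrt(d)[:, None] / np.sqrt(d)[None, :]
    return np.linalg.solve(np.eye(len(W)) - alpha * S, Y)

# Two well-separated clusters, one labeled point in each.
X = np.array([[0., 0.], [0., 1.], [1., 0.],
              [10., 10.], [10., 11.], [11., 10.]])
Y = np.zeros((6, 2)); Y[0, 0] = 1.0; Y[3, 1] = 1.0
pred = propagate_labels(knn_graph(X, k=2), Y).argmax(1)
```

The quality of the learned labels hinges entirely on the affinity graph, which is what motivates studying graph construction itself.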
We present a general framework for solving low-rank and/or sparse matrix minimization problems, which may involve multiple nonsmooth terms.
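The framework itself is not spelled out in the abstract; whatever splitting scheme it uses, the standard building blocks for such objectives are the proximal operators of the nonsmooth terms. A hedged sketch of the two most common ones (the function names are illustrative):

```python
import numpy as np

def prox_l1(X, tau):
    """Proximal operator of tau*||X||_1: elementwise soft-thresholding,
    the workhorse for sparse terms."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def prox_nuclear(X, tau):
    """Proximal operator of tau*||X||_*: singular value thresholding,
    the workhorse for low-rank terms."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Splitting methods such as ADMM handle multiple nonsmooth terms by applying each term's proximal operator in turn against a shared quadratic coupling term.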
This paper proposes a new approach that theoretically analyzes compressive sensing directly from the random sampling matrix Phi rather than from a particular recovery algorithm. Taking any one of the source bits, we can construct a tree by parsing the random sampling matrix, with the selected source bit as the root.
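The abstract does not give the parsing procedure; one plausible reading, sketched below under that assumption, is a breadth-first expansion of the bipartite graph defined by a 0/1 sampling matrix, alternating between a bit's measurements (rows containing it) and a measurement's other bits. The name `expansion_tree` and the level representation are illustrative.

```python
import numpy as np

def expansion_tree(Phi, root_bit, depth):
    """BFS tree rooted at one source bit (a column of the 0/1 matrix Phi).

    Levels alternate: a bit's children are the measurements (rows) that
    sample it; a measurement's children are the other bits it samples.
    Each row/bit appears at most once, so the result is a tree.
    """
    seen_bits, seen_rows = {root_bit}, set()
    levels = [[("bit", root_bit)]]
    frontier = levels[0]
    for _ in range(depth):
        nxt = []
        for kind, idx in frontier:
            if kind == "bit":
                for r in np.nonzero(Phi[:, idx])[0]:
                    if int(r) not in seen_rows:
                        seen_rows.add(int(r))
                        nxt.append(("row", int(r)))
            else:
                for c in np.nonzero(Phi[idx, :])[0]:
                    if int(c) not in seen_bits:
                        seen_bits.add(int(c))
                        nxt.append(("bit", int(c)))
        levels.append(nxt)
        frontier = nxt
    return levels

# 2 measurements over 3 source bits; root the tree at bit 0.
Phi = np.array([[1, 1, 0],
                [0, 1, 1]])
levels = expansion_tree(Phi, root_bit=0, depth=2)
```

Analyses of this kind study how fast such trees reach all source bits, which ties recoverability to Phi itself rather than to any one decoder.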
This paper studies algorithms for recovering a low-rank matrix with a fraction of its entries arbitrarily corrupted. It develops and compares two complementary approaches for solving this problem via convex programming: the first is an accelerated proximal gradient algorithm applied directly to the primal problem, while the second is a gradient algorithm applied to the dual problem.
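The convex program in question is typically min ||A||_* + lam*||E||_1 subject to A + E = D. As a hedged sketch, the code below runs plain alternating proximal steps on a penalized form of that program; the paper's accelerated primal method additionally uses Nesterov-style momentum and continuation, which are omitted here for brevity. The name `rpca_penalized` and the parameter values are illustrative.

```python
import numpy as np

def shrink(X, tau):
    """Soft-thresholding: prox of tau*||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: prox of tau*||.||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca_penalized(D, lam, mu, iters=100):
    """Exact alternating minimization of the penalized objective
        ||A||_* + lam*||E||_1 + (1/(2*mu))*||D - A - E||_F^2,
    a simplified, non-accelerated relative of the paper's primal APG
    method. Each step solves its subproblem in closed form, so the
    objective decreases monotonically.
    """
    A, E = np.zeros_like(D), np.zeros_like(D)
    for _ in range(iters):
        A = svt(D - E, mu)           # exact minimization over A
        E = shrink(D - A, lam * mu)  # exact minimization over E
    return A, E

# Low-rank data with one grossly corrupted entry.
D = np.outer(np.arange(1.0, 5.0), np.arange(1.0, 4.0))
D[0, 0] += 10.0
A, E = rpca_penalized(D, lam=0.25, mu=0.1)
```

The dual approach mentioned in the abstract instead runs gradient steps on the dual of the constrained program, which avoids full SVDs of the primal iterates; the trade-off between the two is the subject of the comparison.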