Huan Li

lihuanss AT nuaa.edu.cn

Associate Professor

Nankai University

Biography

I am an associate professor at Nankai University. I received my Ph.D. from Peking University (PKU), where I was supervised by Prof. Zhouchen Lin. Before studying at PKU, I obtained my bachelor's degree from Central South University in 2011.

Interests

  • Optimization
  • Machine learning

Education

  • Ph.D. in Computer Engineering, 2015-2019

    Peking University

  • MSc in Computer Engineering, 2011-2014

    Peking University

  • BSc in Medical Information, 2007-2011

    Central South University

Publications @ZERO Lab

Restarted Nonconvex Accelerated Gradient Descent: No More Polylogarithmic Factor in the O(ε^{-7/4}) Complexity. JMLR, 2023.

This paper studies accelerated gradient methods for nonconvex optimization with Lipschitz continuous gradient and Hessian. We propose …
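
For readers unfamiliar with the idea, the sketch below shows a generic restarted accelerated gradient loop for a smooth nonconvex objective. The restart rule used here (drop the momentum whenever the objective fails to decrease) is a common heuristic and is not necessarily the criterion analyzed in the paper; the objective, step size eta, and momentum beta are purely illustrative.

    import numpy as np

    def restarted_agd(f, grad, x0, eta=0.05, beta=0.9, iters=500):
        """Nesterov-style accelerated gradient descent with a simple restart rule."""
        x_prev, x = x0.copy(), x0.copy()
        for _ in range(iters):
            y = x + beta * (x - x_prev)        # extrapolation (momentum) step
            x_new = y - eta * grad(y)          # gradient step at the extrapolated point
            if f(x_new) > f(x):                # restart heuristic: drop the momentum
                x_new = x - eta * grad(x)      # take a plain gradient step instead
            x_prev, x = x, x_new
        return x

    # Illustrative smooth nonconvex objective.
    f = lambda x: np.sum(x**2) + np.sum(np.cos(3.0 * x))
    g = lambda x: 2.0 * x - 3.0 * np.sin(3.0 * x)
    print(restarted_agd(f, g, np.array([2.0, -1.5])))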

Alternating Direction Method of Multipliers for Machine Learning. Springer Singapore, 2022.

Machine learning heavily relies on optimization algorithms to solve its learning models. Constrained problems constitute a major type …
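
As a concrete illustration of the kind of algorithm the book covers, here is a minimal ADMM sketch for the lasso reformulation min 0.5*||A x - b||^2 + lam*||z||_1 subject to x - z = 0. This is a standard textbook instance, not code from the book; lam, rho, and the problem sizes are illustrative.

    import numpy as np

    def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
        n = A.shape[1]
        x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)   # u is the scaled dual variable
        AtA, Atb = A.T @ A, A.T @ b
        L = np.linalg.cholesky(AtA + rho * np.eye(n))     # factor once, reuse every iteration
        for _ in range(iters):
            # x-update: ridge-type linear system (A^T A + rho I) x = A^T b + rho (z - u)
            x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
            # z-update: soft-thresholding, the proximal operator of (lam/rho)*||.||_1
            v = x + u
            z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
            # dual update accumulates the constraint residual x - z
            u = u + (x - z)
        return z

    np.random.seed(0)
    A = np.random.randn(50, 20)
    b = A @ np.random.randn(20)
    print(admm_lasso(A, b)[:5])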

Variance Reduced EXTRA and DIGing and Their Optimal Acceleration for Strongly Convex Decentralized Optimization. JMLR, 2022.

We study stochastic decentralized optimization for the problem of training machine learning models with large-scale distributed data. …

Decentralized Accelerated Gradient Methods With Increasing Penalty Parameters. IEEE T. Signal Processing, 2020.

In this paper, we study the communication and (sub)gradient computation costs in distributed optimization and give a sharp complexity …

Accelerated First-Order Optimization Algorithms for Machine Learning. P IEEE, 2020.

Numerical optimization serves as one of the pillars of machine learning. To meet the demands of big data applications, lots of efforts …

Accelerated Optimization for Machine Learning: First-Order Algorithms. Springer Singapore, 2020.

This book on optimization includes forewords by Michael I. Jordan, Zongben Xu and Zhi-Quan Luo. Machine learning relies heavily on …

On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent. JMLR, 2020.

Our study aims to give the convergence rate analysis of the primal solutions for the accelerated randomized dual coordinate ascent.

Revisiting EXTRA for Smooth Distributed Optimization. SIOPT, 2020.

EXTRA is a popular method for decentralized distributed optimization and has broad applications. This paper revisits EXTRA. First, we …
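
For background, EXTRA combines a gossip (mixing) step with a correction built from successive gradients. With mixing matrix W and the common choice \tilde{W} = (I + W)/2, its recursion can be written roughly as follows (notation follows the original EXTRA paper, not this paper's refined analysis):

    x^{1} = W x^{0} - \alpha \nabla f(x^{0}),
    x^{k+2} = (I + W)\, x^{k+1} - \tilde{W}\, x^{k} - \alpha \big[ \nabla f(x^{k+1}) - \nabla f(x^{k}) \big].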

Accelerated Alternating Direction Method of Multipliers: An Optimal O(1/K) Nonergodic Analysis. JSC, 2019.

The Alternating Direction Method of Multipliers (ADMM) is widely used for linearly constrained convex problems. It is proven to have an …

Construction of Incoherent Dictionaries via Direct Babel Function Minimization. ACML, 2018.

We propose an augmented Lagrange multiplier based algorithm to solve this nonconvex and nonsmooth problem with the convergence …

Optimization Algorithm Inspired Deep Neural Network Structure Design. ACML, 2018.

In this paper, we propose the hypothesis that the neural network structure design can be inspired by optimization algorithms and a …

Optimized Projections for Compressed Sensing via Direct Mutual Coherence Minimization. SP, 2018.

We propose to find an optimal projection matrix by minimizing the mutual coherence of PD directly to recover the signal from a small …
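
For reference, the quantity being minimized is the mutual coherence of the effective sensing matrix (the product of the projection P and the dictionary D): the largest absolute inner product between two distinct normalized columns. A minimal Python sketch, with purely illustrative matrix sizes:

    import numpy as np

    def mutual_coherence(M):
        Mn = M / np.linalg.norm(M, axis=0, keepdims=True)   # normalize columns
        G = np.abs(Mn.T @ Mn)                               # absolute Gram matrix
        np.fill_diagonal(G, 0.0)                            # ignore self-correlations
        return G.max()

    P = np.random.randn(20, 64)      # projection matrix (illustrative sizes)
    D = np.random.randn(64, 128)     # dictionary
    print(mutual_coherence(P @ D))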

Provable Accelerated Gradient Method for Nonconvex Low Rank Optimization. ML, 2017.

Optimization over low rank matrices has broad applications in machine learning. For large scale problems, an attractive heuristic is to …

Fast Proximal Linearized Alternating Direction Method of Multiplier with Parallel Splitting. AAAI, 2016.

We propose the Fast Proximal Augmented Lagrangian Method (Fast PALM) which achieves the convergence rate O(1/K^2), compared with O(1/K) …

Accelerated Proximal Gradient Methods for Nonconvex Programming. NIPS, 2015.

We extend APG for general nonconvex and nonsmooth programs by introducing a monitor that satisfies the sufficient descent property …
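
A simplified sketch of the idea in Python, for F(x) = f(x) + lam*||x||_1: take the usual accelerated (extrapolated) proximal step, but let a monitor compare it against a plain proximal gradient step from the current iterate and keep whichever decreases F more. The fallback rule and parameters below are illustrative simplifications, not the paper's exact sufficient-descent condition.

    import numpy as np

    def prox_l1(v, t):
        """Soft-thresholding: proximal operator of t*||.||_1."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def monitored_apg(f, grad_f, lam, x0, eta=0.1, iters=300):
        F = lambda w: f(w) + lam * np.sum(np.abs(w))        # full objective
        x_prev, x, t = x0.copy(), x0.copy(), 1.0
        for _ in range(iters):
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x + ((t - 1.0) / t_new) * (x - x_prev)      # extrapolation
            z = prox_l1(y - eta * grad_f(y), eta * lam)     # accelerated candidate
            v = prox_l1(x - eta * grad_f(x), eta * lam)     # safeguard step from x
            x_prev = x
            x = z if F(z) <= F(v) else v                    # monitor keeps the better point
            t = t_new
        return x

    # Illustrative nonconvex smooth part plus an l1 term.
    f = lambda x: np.sum(np.log(1.0 + x**2))
    gf = lambda x: 2.0 * x / (1.0 + x**2)
    print(monitored_apg(f, gf, lam=0.05, x0=np.array([3.0, -2.0, 1.0])))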

Linearized Alternating Direction Method with Parallel Splitting and Adaptive Penalty for Separable Convex Programs in Machine Learning. ML, 2014.

We propose LADM with parallel splitting and adaptive penalty (LADMPSAP) to solve multi-block separable convex programs efficiently. We …