Decentralized Accelerated Gradient Methods With Increasing Penalty Parameters

Abstract

In this paper, we study the communication and (sub)gradient computation costs in distributed optimization and give a sharp complexity analysis for the proposed distributed accelerated gradient methods. We present two algorithms based on the framework of the accelerated penalty method with increasing penalty parameters. Our first algorithm is for smooth distributed optimization and it obtains the near-optimal $O\left(\sqrt{\frac{L}{\epsilon(1-\sigma_2(W))}}\log\frac{1}{\epsilon}\right)$ communication complexity and the optimal $O\left(\sqrt{\frac{L}{\epsilon}}\right)$ gradient computation complexity for $L$-smooth convex problems, where $\sigma_2(W)$ denotes the second largest singular value of the weight matrix $W$ associated with the network and $\epsilon$ is the target accuracy. When the problem is $\mu$-strongly convex and $L$-smooth, our algorithm has the near-optimal $O\left(\sqrt{\frac{L}{\mu(1-\sigma_2(W))}}\log^2\frac{1}{\epsilon}\right)$ complexity for communications and the optimal $O\left(\sqrt{\frac{L}{\mu}}\log\frac{1}{\epsilon}\right)$ complexity for gradient computations. Our communication complexities are only worse by a factor of $\log\frac{1}{\epsilon}$ than the lower bounds for smooth distributed optimization. Our second algorithm is designed for non-smooth distributed optimization; it achieves both the optimal $O\left(\frac{1}{\epsilon\sqrt{1-\sigma_2(W)}}\right)$ communication complexity and the optimal $O\left(\frac{1}{\epsilon^2}\right)$ subgradient computation complexity, which match the communication and subgradient computation complexity lower bounds for non-smooth distributed optimization.
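
To illustrate the general idea of an accelerated penalty framework with an increasing penalty parameter (not the authors' exact algorithm, whose penalty schedule, step sizes, and inner structure differ), the sketch below runs Nesterov-accelerated gradient descent on a quadratically penalized consensus problem over a ring network. The local quadratic objectives, the Metropolis weight matrix, the penalty schedule `beta_k = 10*k`, and the step-size rule are all assumptions made for this toy example.

```python
# Minimal, illustrative sketch of accelerated penalized consensus optimization.
# Not the paper's algorithm; problem data and schedules below are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 10, 5                      # number of agents, variable dimension

# Local smooth convex objectives f_i(x) = 0.5 * ||A_i x - b_i||^2
A = [rng.standard_normal((20, d)) for _ in range(n)]
b = [rng.standard_normal(20) for _ in range(n)]

def grad_local(i, x_i):
    """Gradient of agent i's local objective at its local copy x_i."""
    return A[i].T @ (A[i] @ x_i - b[i])

# Symmetric, doubly stochastic weight matrix W for a ring network (Metropolis weights).
W = np.zeros((n, n))
for i in range(n):
    for j in ((i - 1) % n, (i + 1) % n):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

L_f = max(np.linalg.norm(A[i].T @ A[i], 2) for i in range(n))  # local smoothness bound

X = np.zeros((n, d))              # row i holds agent i's local copy x_i
Y = X.copy()
theta_prev = 1.0

for k in range(1, 2001):
    beta_k = 10.0 * k             # increasing penalty parameter (assumed schedule)
    step = 1.0 / (L_f + beta_k)   # step size for the penalized objective

    # Gradient of the penalized objective: local gradients plus beta_k * (I - W) Y.
    # The (I - W) Y term only mixes each row with its neighbors' rows, i.e. it is
    # realized by one round of communication over the network.
    G = np.vstack([grad_local(i, Y[i]) for i in range(n)]) + beta_k * (Y - W @ Y)

    X_new = Y - step * G
    theta = (1 + np.sqrt(1 + 4 * theta_prev**2)) / 2   # Nesterov momentum update
    Y = X_new + ((theta_prev - 1) / theta) * (X_new - X)
    X, theta_prev = X_new, theta

print("consensus error:", np.linalg.norm(X - X.mean(axis=0)))
```

Growing the penalty couples the two costs in the abstract: each gradient of the penalized objective requires one round of local (sub)gradient evaluations and one round of neighbor communication through $W$, while the increasing penalty drives the local copies toward consensus without ever needing an exact consensus subroutine.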

Publication
IEEE Transactions on Signal Processing