arXiv:2110.05282v4 Announce Type: replace
Abstract: In this paper, we focus on solving the decentralized optimization problem of minimizing the sum of $n$ objective functions over a multi-agent network. The agents are embedded in an undirected graph where they can only send/receive information directly to/from their immediate neighbors. Assuming smooth and strongly convex objective functions, we propose an Optimal Gradient Tracking (OGT) method that simultaneously achieves the optimal gradient computation complexity $O\left(\sqrt{\kappa}\log\frac{1}{\epsilon}\right)$ and the optimal communication complexity $O\left(\sqrt{\frac{\kappa}{\theta}}\log\frac{1}{\epsilon}\right)$, where $\kappa$ and $\frac{1}{\theta}$ denote the condition numbers of the objective functions and the communication graph, respectively. To our knowledge, OGT is the first single-loop decentralized gradient-type method that is optimal in both gradient computation and communication complexity. The development of OGT involves two building blocks that are also of independent interest. The first is a new decentralized gradient tracking method termed "Snapshot" Gradient Tracking (SS-GT), which achieves gradient computation and communication complexities of $O\left(\sqrt{\kappa}\log\frac{1}{\epsilon}\right)$ and $O\left(\frac{\sqrt{\kappa}}{\theta}\log\frac{1}{\epsilon}\right)$, respectively. SS-GT can potentially be extended to more general settings than OGT. The second is a technique termed Loopless Chebyshev Acceleration (LCA), which can be implemented "looplessly" yet achieves an effect similar to adding multiple inner loops of Chebyshev acceleration to the algorithms. Beyond SS-GT, the LCA technique can accelerate many other gradient-tracking-based methods with respect to the graph condition number $\frac{1}{\theta}$.
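
For intuition, here is a minimal sketch of the classical gradient tracking recursion (in the style of DIGing) that methods such as SS-GT and OGT build upon and accelerate; it is not the paper's algorithm, and the function names, step size, and mixing matrix below are illustrative assumptions. Each agent mixes its iterate with its neighbors through a doubly stochastic matrix $W$ while maintaining a tracker $y_i$ of the network-average gradient:

```python
import numpy as np

def gradient_tracking(grads, W, x0, alpha, iters):
    """Classical decentralized gradient tracking (DIGing-style sketch).

    This is the baseline recursion that gradient-tracking methods
    accelerate; it is NOT the paper's OGT or SS-GT algorithm.

    grads : list of n callables, grads[i](x) -> gradient of f_i at x
    W     : (n, n) symmetric doubly stochastic mixing matrix of the graph
    x0    : (n, d) initial iterates, one row per agent
    alpha : constant step size (small enough for linear convergence)
    """
    n = len(grads)
    x = x0.copy()
    # y_i tracks the average of the local gradients across all agents.
    y = np.stack([grads[i](x[i]) for i in range(n)])
    g_prev = y.copy()
    for _ in range(iters):
        x = W @ x - alpha * y                      # gossip step + descent
        g = np.stack([grads[i](x[i]) for i in range(n)])
        y = W @ y + g - g_prev                     # gradient-tracking update
        g_prev = g
    return x  # all rows converge to the common minimizer

# Toy usage: 4 agents on a ring, f_i(x) = ||x - i||^2 / 2, so the
# minimizer of the sum is x* = 1.5.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
W = (np.eye(4) + A) / 3.0                          # doubly stochastic mixing
grads = [lambda x, i=i: x - i for i in range(4)]
x = gradient_tracking(grads, W, np.zeros((4, 1)), alpha=0.1, iters=300)
```

The second building block concerns the communication side. A standard way to improve the dependence on the graph condition number $\frac{1}{\theta}$ is an inner loop of Chebyshev-accelerated gossip, i.e. applying the scaled Chebyshev polynomial $P_K(W) = T_K(W/\rho)/T_K(1/\rho)$ in place of $W$, where $\rho < 1$ bounds the spectrum of $W$ on the disagreement subspace. Since $P_K(1) = 1$, the average is preserved while disagreement decays at a square-root-improved rate; the paper's LCA is designed to obtain a similar effect without the inner loop. A hedged sketch of the classical loopy baseline, under the same illustrative assumptions:

```python
def chebyshev_gossip(x, W, rho, K):
    """Inner-loop Chebyshev acceleration of gossip (classical baseline).

    Returns P_K(W) @ x with P_K(t) = T_K(t / rho) / T_K(1 / rho), using the
    three-term recurrence T_{k+1}(t) = 2 t T_k(t) - T_{k-1}(t).
    For the ring W above, rho = 1/3.
    """
    u_prev, u = x, (W @ x) / rho      # T_0(W/rho) x and T_1(W/rho) x
    a_prev, a = 1.0, 1.0 / rho        # T_0(1/rho) and T_1(1/rho)
    for _ in range(K - 1):
        u_prev, u = u, (2.0 / rho) * (W @ u) - u_prev
        a_prev, a = a, (2.0 / rho) * a - a_prev
    return u / a
```

Running $K \approx 1/\sqrt{\theta}$ such steps per iteration is what trades a $\frac{1}{\theta}$ communication factor for $\frac{1}{\sqrt{\theta}}$, matching the gap between SS-GT's $O\left(\frac{\sqrt{\kappa}}{\theta}\log\frac{1}{\epsilon}\right)$ and OGT's $O\left(\sqrt{\frac{\kappa}{\theta}}\log\frac{1}{\epsilon}\right)$ communication complexities.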
