Parallel training of neural networks at scale is challenging due to significant communication overheads. Recently, deep learning researchers have developed a variety of pruning algorithms that can prune (i.e., set to zero) 80-90% of the parameters in a neural network, yielding sparse subnetworks that match the accuracy of the unpruned parent network. In this work, we propose a novel approach that exploits these sparse subnetworks to optimize memory utilization and communication in two popular algorithms for parallel deep learning, namely data and inter-layer parallelism. We integrate our approach into AxoNN, a highly scalable framework for parallel deep learning that relies on data and inter-layer parallelism, and demonstrate the resulting reductions in communication time and memory utilization. On 512 NVIDIA V100 GPUs, our optimizations reduce the memory consumption of a 2.7 billion parameter model by 74% and the total communication time by 40%, providing an overall speedup of 34% over AxoNN, 32% over DeepSpeed-3D, and 46% over Sputnik, a sparse matrix computation baseline.
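
To give a feel for the data-parallel side of this idea, the sketch below simulates, in plain NumPy, how a fixed pruning mask lets each worker exchange only the surviving gradient entries instead of the full dense tensor. This is a minimal illustration under assumed conditions, not the AxoNN implementation: the helper names (sparse_allreduce, dense_allreduce), the averaging-based reduction, and the 90% sparsity level are chosen here for illustration only.

# Hypothetical sketch: data-parallel gradient aggregation that exchanges only
# the entries kept by a fixed pruning mask. Not the AxoNN implementation.
import numpy as np

def dense_allreduce(grads_per_rank):
    """Baseline: every rank contributes its full dense gradient."""
    return np.mean(grads_per_rank, axis=0)

def sparse_allreduce(grads_per_rank, mask):
    """Exchange only the unpruned entries selected by `mask`.

    Each rank packs grad[mask] into a small contiguous buffer, the buffers
    are reduced (averaged here), and the result is scattered back into the
    dense shape. Buffer length drops from n_params to mask.sum().
    """
    packed = np.stack([g[mask] for g in grads_per_rank])  # shape: (ranks, nnz)
    reduced = packed.mean(axis=0)                         # simulated all-reduce
    out = np.zeros_like(grads_per_rank[0])
    out[mask] = reduced
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_params, n_ranks, sparsity = 1_000_000, 4, 0.9   # assume 90% of weights pruned
    mask = rng.random(n_params) > sparsity            # ~10% of entries survive
    # Pruned entries are zero on every rank, so dense and sparse results agree.
    grads = [rng.standard_normal(n_params) * mask for _ in range(n_ranks)]

    dense = dense_allreduce(grads)
    sparse = sparse_allreduce(grads, mask)
    assert np.allclose(dense, sparse)
    print(f"values exchanged per rank: dense={n_params}, sparse={int(mask.sum())}")

In a real distributed run the packed buffer would be exchanged with a collective such as an all-reduce; because its length is the number of unpruned parameters rather than the full parameter count, communication volume shrinks roughly in proportion to the sparsity, which is the effect the reported 40% reduction in communication time reflects.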
