Pipeline parallelism enables efficient training of Large Language Models
(LLMs) on large-scale distributed accelerator clusters. Yet, pipeline bubbles
during startup and tear-down reduce the utilization of accelerators. Although
efficient pipeline schemes with micro-batching and bidirectional pipelines have
been proposed to maximize utilization, a significant number of bubbles cannot
be filled using synchronous forward and backward passes. To address this
problem, we suggest that extra work be assigned to the bubbles to gain
auxiliary benefits in LLM training. As an example in this direction, we propose
PipeFisher, which assigns the work of K-FAC, a second-order optimization method
based on the Fisher information matrix, to the bubbles to accelerate
convergence. In Phase 1 pretraining of BERT-Base and -Large models, PipeFisher
reduces the (simulated) training time to 50-75% compared to training with a
first-order optimizer by greatly improving the accelerator utilization and
benefiting from the improved convergence by K-FAC.
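
For readers unfamiliar with K-FAC, the sketch below shows the kind of per-layer second-order work that PipeFisher schedules into pipeline bubbles: building the Kronecker factors of the Fisher information matrix for one linear layer and preconditioning its gradient with them. It is a minimal illustration in PyTorch, not the paper's implementation; the function name, shapes, and damping value are assumptions, and PipeFisher's actual contribution, interleaving this work with the forward and backward passes of the pipeline schedule, is not shown here.

# Minimal K-FAC-style preconditioning for a single linear layer (illustrative
# sketch only; names, shapes, and the damping constant are assumptions).
import torch

def kfac_precondition(grad_w, acts, pre_act_grads, damping=1e-3):
    """Return G^{-1} grad_w A^{-1} for one layer.

    grad_w:        (out_dim, in_dim) gradient of the layer weight
    acts:          (batch, in_dim)   layer inputs a
    pre_act_grads: (batch, out_dim)  gradients w.r.t. pre-activations g
    The layer's Fisher block is approximated as A kron G with
    A = E[a a^T] and G = E[g g^T].
    """
    batch = acts.shape[0]
    A = acts.T @ acts / batch                    # input covariance, (in_dim, in_dim)
    G = pre_act_grads.T @ pre_act_grads / batch  # output-grad covariance, (out_dim, out_dim)
    # Tikhonov damping keeps both factors well conditioned and invertible.
    A = A + damping * torch.eye(A.shape[0])
    G = G + damping * torch.eye(G.shape[0])
    X = torch.linalg.solve(G, grad_w)            # G^{-1} grad_w
    return torch.linalg.solve(A.T, X.T).T        # (G^{-1} grad_w) A^{-1}

# Toy usage: a 64 -> 32 linear layer with a batch of 128 samples.
acts = torch.randn(128, 64)
pre_act_grads = torch.randn(128, 32)
grad_w = pre_act_grads.T @ acts / 128
print(kfac_precondition(grad_w, acts, pre_act_grads).shape)  # torch.Size([32, 64])

In PipeFisher, the expensive parts of such an update (curvature statistics and factor inversions) are the "extra work" slotted into the bubbles that synchronous forward and backward passes would otherwise leave idle.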
