Well done. You've clicked the tower. This would actually achieve something if you were logged in. Use the key for that. The name takes you home. This is where all the applicables sit, and you can't apply any changes to my site unless you are logged in.

Our policy is best summarized as "we don't care about _you_, we care about _them_". No emails, so no forgetting your password. You have no rights. It's like you don't even exist. If you publish material, I reserve the right to remove it or use it myself.

Don't impersonate anyone. Don't name someone without their consent. You can lose everything if you cross the line, and no, I won't cancel your automatic payments first, so you'll have to do that the hard way. See how serious this sounds? That's how seriously you're meant to take these rules.


Grow A Dic
Define A Word
Make Space
Set Task
Mark Post
Apply Votestyle
Create Votes
(From: saved spaces)
Exclude Votes
Apply Dic
Exclude Dic


While Self-Supervised Learning has helped exploit the scale of available
unlabeled data, its learning paradigms are still being refined. We present a
new pre-training strategy named ccc-wav2vec 2.0, which
uses clustering and an augmentation-based cross-contrastive loss as its
self-supervised objective. Through the clustering module, we scale down the
influence of those negative examples that are highly similar to the positive.
The cross-contrastive loss is computed between the encoder output of the
original sample and the quantizer output of its augmentation, and vice versa,
bringing robustness to the pre-training strategy. ccc-wav2vec 2.0 achieves up
to 15.6% and 12.7% relative WER improvement over the baseline wav2vec 2.0 on
the test-clean and test-other sets, respectively, of LibriSpeech, without the
use of any language model. The proposed method also achieves up to 14.9%
relative WER improvement over the baseline wav2vec 2.0 when fine-tuned on
Switchboard data. We make all our code publicly available on GitHub.

ID: 129822; Unique Viewers: 0
Voters: 0
Latest Change: May 16, 2023, 7:31 a.m.