Incorporating contrastive learning objectives in sentence representation learning (SRL) has yielded significant improvements on many sentence-level NLP tasks. However, it is not well understood why contrastive learning works for learning sentence-level semantics. In this paper, we aim to help guide future designs of sentence representation learning methods by taking a closer look at contrastive SRL through the lens of isotropy, contextualization and learning dynamics. We interpret its successes through the geometry of the representation shifts and show that contrastive learning brings isotropy and drives high intra-sentence similarity: when in the same sentence, tokens converge to similar positions in the semantic space. We also find that what we formalize as "spurious contextualization" is mitigated for semantically meaningful tokens, while augmented for functional ones. We find that the embedding space is directed towards the origin during training, with more areas now better defined. We ablate these findings by observing the learning dynamics with different training temperatures, batch sizes and pooling methods.
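
The training knobs ablated above (temperature, batch size, pooling) are the parameters of the standard in-batch contrastive (InfoNCE) objective, and the isotropy and intra-sentence-similarity claims can be probed with simple cosine statistics. Below is a minimal PyTorch sketch of both, assuming a SimCSE-style setup with two encoded views per sentence; the function names, the 0.05 temperature default, and the mean-pairwise-cosine definition of intra-sentence similarity are illustrative assumptions, not the paper's released code.

import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.05):
    # z1, z2: (batch, dim) sentence embeddings of two views of the same sentences.
    # Each row's positive is the matching row in the other view; all other rows
    # in the batch act as in-batch negatives.
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / temperature                       # cosine similarities, scaled
    labels = torch.arange(z1.size(0), device=z1.device)    # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

def intra_sentence_similarity(token_embs):
    # token_embs: (num_tokens, dim) contextual token embeddings of one sentence.
    # Returns the mean pairwise cosine similarity among its tokens; the abstract's
    # claim is that this rises under contrastive training.
    t = F.normalize(token_embs, dim=-1)
    sims = t @ t.T
    n = t.size(0)
    return (sims.sum() - sims.diagonal().sum()) / (n * (n - 1))

Varying the temperature, the batch size (the number of rows in z1 and z2), or the pooling used to produce the sentence vectors reproduces the ablation axes named in the abstract.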

ID: 161736; Unique Viewers: 0
Unique Voters: 0
Total Votes: 0
Votes:
Latest Change: May 30, 2023, 7:31 a.m. Changes:
Dictionaries:
Words:
Spaces:
Views: 10
CC:
No Creative Commons license
Comments: