Our research addresses zero-shot text classification in NLP, with a particular emphasis on self-training strategies. We propose a novel self-training strategy that trains on labels rather than text, significantly reducing training time. Specifically, we use categories from Wikipedia as our training set and leverage the pre-trained SBERT model to establish positive correlations between pairs of categories attached to the same text, enabling associative training. For new test datasets, we improve on the original self-training approach by eliminating the need for prior training and test data from each target dataset; instead, we adopt Wikipedia as a unified training set to better approximate the zero-shot scenario. This modification allows rapid fine-tuning and inference across different datasets, greatly reducing the time self-training requires. Our experimental results demonstrate that the method can adapt the model to a target dataset within minutes. Compared to other BERT-based transformer models, our approach significantly reduces the amount of training data by training only on labels rather than on the actual text, and greatly improves training efficiency by using a unified training set. Additionally, our method achieves state-of-the-art results on both the Yahoo Topic and AG News datasets.
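
The label-only training idea maps naturally onto the sentence-transformers library. The sketch below is a minimal illustration under stated assumptions, not the authors' released code: it assumes that category names attached to the same Wikipedia article form positive pairs (the `category_pairs` list is toy data), that a contrastive in-batch-negatives loss stands in for the paper's associative training, and that zero-shot inference assigns each document to the nearest label embedding by cosine similarity. The base checkpoint and hyperparameters are illustrative.

```python
# Minimal sketch: label-only SBERT fine-tuning, then zero-shot inference.
# Assumptions (not from the paper's code): base checkpoint, loss choice,
# and the toy `category_pairs` data.
from sentence_transformers import SentenceTransformer, InputExample, losses, util
from torch.utils.data import DataLoader

# Hypothetical positive pairs: categories co-occurring on one Wikipedia article.
category_pairs = [
    ("Association football", "Sports competitions"),
    ("Machine learning", "Artificial intelligence"),
    ("Monetary policy", "Central banks"),
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed base checkpoint

# Contrastive fine-tuning on label pairs only; no document text is used,
# which is what keeps training fast.
train_examples = [InputExample(texts=[a, b]) for a, b in category_pairs]
loader = DataLoader(train_examples, shuffle=True, batch_size=16)
loss = losses.MultipleNegativesRankingLoss(model)  # in-batch negatives
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)

# Zero-shot inference: embed the target dataset's label names once, then
# assign each document to its nearest label by cosine similarity.
labels = ["Sports", "Business", "Sci/Tech", "World"]  # e.g. AG News labels
label_emb = model.encode(labels, convert_to_tensor=True)

def classify(document: str) -> str:
    doc_emb = model.encode(document, convert_to_tensor=True)
    scores = util.cos_sim(doc_emb, label_emb)  # shape (1, num_labels)
    return labels[int(scores.argmax())]

print(classify("The striker scored twice in the cup final."))  # -> "Sports"
```

A full self-training loop would additionally pseudo-label target documents with the current model and feed confident predictions back into fine-tuning; that loop is omitted here for brevity.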
