While a broad range of techniques have been proposed to tackle distribution
shift, the simple baseline of training on an $\textit{undersampled}$ balanced
dataset often achieves close to state-of-the-art accuracy across several
popular benchmarks. This is rather surprising, since undersampling algorithms
discard excess majority group data. To understand this phenomenon, we ask if
learning is fundamentally constrained by a lack of minority group samples. We
prove that this is indeed the case in the setting of nonparametric binary
classification. Our results show that in the worst case, an algorithm cannot
outperform undersampling unless there is a high degree of overlap between the
train and test distributions (which is unlikely to be the case in real-world
datasets), or if the algorithm leverages additional structure about the
distribution shift. In particular, in the case of label shift we show that
there is always an undersampling algorithm that is minimax optimal. In the case
of group-covariate shift we show that there is an undersampling algorithm that
is minimax optimal when the overlap between the group distributions is small.
We also perform an experimental case study on a label shift dataset and find
that in line with our theory, the test accuracy of robust neural network
classifiers is constrained by the number of minority samples.
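The undersampling baseline the abstract refers to is straightforward to implement. Below is a minimal sketch (not code from the paper) that balances a training set by discarding excess majority-group samples; the array names and the `groups` argument are assumptions for illustration. Under label shift the groups are simply the class labels, while under group-covariate shift they would hold group annotations.

```python
import numpy as np

def undersample_balanced(X, y, groups, seed=0):
    """Return a balanced subset by discarding excess majority-group samples.

    X      : (n, d) feature array
    y      : (n,)   labels
    groups : (n,)   group indices (under label shift, groups == y)
    """
    rng = np.random.default_rng(seed)
    unique_groups, counts = np.unique(groups, return_counts=True)
    n_min = counts.min()  # size of the smallest (minority) group

    keep = []
    for g in unique_groups:
        idx = np.flatnonzero(groups == g)
        # Subsample each group down to the minority-group size.
        keep.append(rng.choice(idx, size=n_min, replace=False))
    keep = np.concatenate(keep)
    rng.shuffle(keep)
    return X[keep], y[keep]

# Example usage under label shift, where the groups are the class labels:
# X_bal, y_bal = undersample_balanced(X_train, y_train, groups=y_train)
```

Any standard classifier trained on `X_bal, y_bal` then plays the role of the undersampling baseline whose minimax optimality the abstract discusses.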