Suppose we are given two datasets: a labeled dataset and an unlabeled dataset,
the latter of which also contains auxiliary features not present in the first.
What is the most principled way to use these datasets together to construct a
predictor?
The answer should depend upon whether these datasets are generated by the
same or different distributions over their mutual feature sets, and how similar
the test distribution will be to either of those distributions. In many
applications, the two datasets will likely follow different distributions, but
both may be close to the test distribution. We introduce the problem of
building a predictor that minimizes the maximum loss over all probability
distributions over the original features, auxiliary features, and binary
labels whose Wasserstein distance is at most $r_1$ from the empirical
distribution of the labeled dataset and at most $r_2$ from that of the
unlabeled dataset. This can be thought of as a generalization of distributionally robust
optimization (DRO), which allows for two data sources, one of which is
unlabeled and may contain auxiliary features.
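To make the objective concrete, the sketch below approximates the inner supremum of a single Wasserstein ball by perturbing each feature vector within an L2 ball of radius $r$ via projected gradient ascent, a common first-order heuristic for Wasserstein DRO. This is an illustrative surrogate, not the paper's formulation: the loss (logistic), the per-point perturbation budget, and all function names are assumptions for demonstration; the paper's two-ball problem would further constrain the adversarial distribution to lie within $r_1$ of the labeled data and $r_2$ of the unlabeled data simultaneously.

```python
import numpy as np

def logistic_loss(w, X, y):
    """Mean logistic loss for binary labels y in {-1, +1}."""
    margins = y * (X @ w)
    return np.mean(np.log1p(np.exp(-margins)))

def worst_case_loss(w, X, y, r, steps=20, lr=0.1):
    """Heuristic upper approximation of the DRO inner sup:
    adversarially perturb each feature vector within an L2 ball
    of radius r around its observed value (a per-point budget,
    used here as a simple surrogate for a Wasserstein-r ball)."""
    Xp = X.copy()
    for _ in range(steps):
        margins = y * (Xp @ w)
        # per-sample gradient of the logistic loss w.r.t. the features
        g = (-y * (1.0 / (1.0 + np.exp(margins))))[:, None] * w[None, :]
        Xp = Xp + lr * g  # ascent step: make the loss larger
        # project each perturbation back onto the radius-r ball
        delta = Xp - X
        norms = np.maximum(np.linalg.norm(delta, axis=1, keepdims=True), 1e-12)
        Xp = X + delta * np.minimum(1.0, r / norms)
    return logistic_loss(w, Xp, y)
```

Minimizing `worst_case_loss` over `w` then yields a robust predictor for one data source; with two sources, one would also need a scheme for the unlabeled dataset's missing labels, which the toy code above does not address.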