Fairness-aware machine learning has garnered significant attention in recent
years because of the extensive use of machine learning in sensitive
applications such as judiciary systems. Various heuristics and optimization
frameworks have been proposed to enforce fairness in classification
\cite{del2020review}, where the latter approaches either provide empirical
results or provide fairness guarantees only for the exact minimizer of the
objective function \cite{celis2019classification}. In modern machine learning,
Stochastic Gradient Descent (SGD) type algorithms are almost always used as
training algorithms, implying that the learned model, and consequently its
fairness properties, are random. Hence, especially for crucial applications,
it is imperative to construct a Confidence Interval (CI) for the fairness of
the learned model. In this work, we provide a CI for test unfairness when a
group-fairness-aware linear binary classifier, specifically one aware of
Disparate Impact (DI) or Disparate Mistreatment (DM), is trained using online
SGD-type algorithms. We show that a Central Limit Theorem holds asymptotically
for the estimated model parameters of both the DI- and DM-aware models. We
provide an online multiplier bootstrap method to estimate the asymptotic
covariance and construct online CIs. To do so, we extend the known theoretical
guarantees on the consistency of the online bootstrap method for unconstrained
SGD to constrained optimization, which could be of independent interest. We
illustrate our results on synthetic and real datasets.
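
The abstract does not spell out the procedure, but the ingredients it names (online SGD under a fairness constraint, a CLT for the estimated parameters, and a multiplier bootstrap for the asymptotic covariance) suggest a recipe of roughly the following shape. The CLT is presumably of the Polyak-Ruppert form $\sqrt{n}(\bar{\theta}_n - \theta^\star) \Rightarrow \mathcal{N}(0, \Sigma)$, and an online multiplier bootstrap runs $B$ perturbed copies of the same recursion, each re-weighting every stochastic gradient by an i.i.d. mean-one, variance-one multiplier. The Python below is a minimal sketch of this idea, not the paper's actual algorithm: it assumes a logistic loss, a toy slab constraint $|a^\top\theta| \le c$ standing in for the DI/DM constraint, Exp(1) multipliers, and a basic-bootstrap CI for a hypothetical scalar unfairness functional `phi`; every name and constant here is illustrative.

```python
import numpy as np

def project_slab(theta, a, c):
    """Euclidean projection onto {theta : |a @ theta| <= c} -- a toy convex
    stand-in for the DI/DM fairness constraint set (illustrative only)."""
    v = a @ theta
    if abs(v) <= c:
        return theta
    # Project onto the nearer boundary hyperplane a @ theta = sign(v) * c.
    return theta - (v - np.sign(v) * c) * a / (a @ a)

def grad_logistic(theta, x, y):
    """Stochastic gradient of the logistic loss at one sample, y in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-(x @ theta)))
    return (p - y) * x

def online_sgd_multiplier_bootstrap(stream, d, a, c, B=200, lr0=0.5, seed=0):
    """One pass of projected SGD plus B multiplier-bootstrap replicates.

    Each replicate re-weights every stochastic gradient with an i.i.d.
    Exp(1) multiplier (mean 1, variance 1) and follows the same projected
    recursion; Polyak-Ruppert averages are maintained online for the main
    iterate and for every replicate."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(d)
    boots = np.zeros((B, d))
    theta_bar, boots_bar = np.zeros(d), np.zeros((B, d))
    for t, (x, y) in enumerate(stream, start=1):
        lr = lr0 * t ** -0.51              # step-size exponent in (1/2, 1)
        theta = project_slab(theta - lr * grad_logistic(theta, x, y), a, c)
        w = rng.exponential(1.0, size=B)   # bootstrap multiplier weights
        for b in range(B):
            g = w[b] * grad_logistic(boots[b], x, y)
            boots[b] = project_slab(boots[b] - lr * g, a, c)
        theta_bar += (theta - theta_bar) / t       # running Polyak averages
        boots_bar += (boots - boots_bar) / t
    return theta_bar, boots_bar

if __name__ == "__main__":
    # Synthetic logistic data; phi is a hypothetical scalar unfairness proxy.
    rng = np.random.default_rng(1)
    d, n = 5, 20000
    theta_star = rng.normal(size=d)
    X = rng.normal(size=(n, d))
    Y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ theta_star)))).astype(float)
    a = np.ones(d) / np.sqrt(d)            # toy constraint direction
    theta_bar, boots_bar = online_sgd_multiplier_bootstrap(zip(X, Y), d, a, c=1.0)
    phi = lambda th: a @ th
    deltas = np.array([phi(b) for b in boots_bar]) - phi(theta_bar)
    lo, hi = np.quantile(deltas, [0.025, 0.975])
    # Basic-bootstrap 95% CI for phi(theta).
    print(f"95% CI: [{phi(theta_bar) - hi:.3f}, {phi(theta_bar) - lo:.3f}]")
```

Note the design point that makes this "online": all $B$ replicates are updated from the same single pass over the data stream, so no samples need to be stored or revisited, and the CI comes from the empirical spread of the replicate averages around the main averaged iterate.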
