Fairness-aware machine learning has garnered significant attention in recent
years because of extensive use of machine learning in sensitive applications
like judiciary systems. Various heuristics and optimization frameworks have
been proposed to enforce fairness in classification \cite{del2020review},
where the latter approaches either provide only empirical results or provide
fairness guarantees only for the exact minimizer of the objective function
\cite{celis2019classification}. In modern machine learning, Stochastic Gradient
Descent (SGD) type algorithms are almost always used as training algorithms
implying that the learned model, and consequently, its fairness properties are
random. Hence, especially for critical applications, it is imperative to
construct a Confidence Interval (CI) for the fairness of the learned model. In
this work, we provide CIs for the test unfairness when a group-fairness-aware,
specifically, Disparate Impact (DI)-aware or Disparate Mistreatment (DM)-aware,
linear binary classifier is trained using online SGD-type algorithms. We show
that asymptotically a Central Limit Theorem holds for the estimated model
parameter of both DI- and DM-aware models. We provide an online multiplier
bootstrap method to estimate the asymptotic covariance and thereby construct
online CIs. To do so, we extend the known consistency guarantees for the
online bootstrap method from unconstrained SGD to constrained optimization,
which could be of independent interest. We illustrate our results
on synthetic and real datasets.
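To make the online multiplier bootstrap concrete, here is a minimal illustrative sketch for the simpler, fairness-unconstrained case (it is not the paper's constrained algorithm): a plain logistic-regression SGD run alongside $B$ perturbed copies whose stochastic gradients are reweighted by i.i.d. $\mathrm{Exp}(1)$ multipliers, with a CI read off from the bootstrap spread around the point estimate. The step-size schedule, the multiplier law, and the use of the last iterate (rather than Polyak averaging) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a linear logistic model with true parameter theta_star.
d, n = 3, 20000
theta_star = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(n, d))
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ theta_star))).astype(float)

B = 200                         # number of bootstrap trajectories
theta = np.zeros(d)             # SGD iterate
theta_boot = np.zeros((B, d))   # multiplier-perturbed iterates

for i in range(n):
    lr = 0.5 / (i + 1) ** 0.6   # Robbins-Monro step size (assumed schedule)
    xi, yi = X[i], y[i]

    # Plain SGD step on the logistic loss.
    grad = (1.0 / (1.0 + np.exp(-xi @ theta)) - yi) * xi
    theta -= lr * grad

    # Multiplier bootstrap: reweight the same stochastic gradient by
    # i.i.d. Exp(1) multipliers (mean 1, variance 1), one per trajectory.
    W = rng.exponential(1.0, size=B)
    g_boot = (1.0 / (1.0 + np.exp(-theta_boot @ xi)) - yi)[:, None] * xi[None, :]
    theta_boot -= lr * W[:, None] * g_boot

# Pivot-style 95% CI per coordinate from the bootstrap spread around theta.
q = np.quantile(theta_boot - theta, [0.025, 0.975], axis=0)
ci_lo, ci_hi = theta - q[1], theta - q[0]
print("per-coordinate 95% CI:", list(zip(ci_lo.round(2), ci_hi.round(2))))
```

In the constrained (DI/DM-aware) setting studied in the paper, the same multiplier perturbation is applied to the projected/constrained SGD updates; this sketch only conveys the reweighting idea.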