Well done. You've clicked the tower. This would actually achieve something if you had logged in first. Use the key for that. The name takes you home. This is where all the applicables sit. And you can't apply any changes to my site unless you are logged in.

Our policy is best summarized as "we don't care about _you_, we care about _them_": no emails are collected, so there is no password recovery if you forget yours. You have no rights. It's like you don't even exist. If you publish material, I reserve the right to remove it or use it myself.

Don't impersonate anyone. Don't name someone without their consent. You can lose everything if you cross the line, and no, I won't cancel your automatic payments first, so you'll have to do it the hard way. See how serious this sounds? That's how seriously you're meant to take these rules.

Probabilistic predictions can be evaluated through comparisons with observed
label frequencies, that is, through the lens of calibration. Recent scholarship
on algorithmic fairness has started to look at a growing variety of
calibration-based objectives under the name of multi-calibration but has still
remained fairly restricted. In this paper, we explore and analyse forms of
evaluation through calibration by making explicit the choices involved in
designing calibration scores. We organise these into three grouping choices and
a choice concerning the agglomeration of group errors. This provides a
framework for comparing previously proposed calibration scores and helps to
formulate novel ones with desirable mathematical properties. In particular, we
explore the possibility of grouping datapoints based on their input features
rather than on predictions and formally demonstrate advantages of such
approaches. We also characterise the space of suitable agglomeration functions
for group errors, generalising previously proposed calibration scores.
Complementary to such population-level scores, we explore calibration scores at
the individual level and analyse their relationship to choices of grouping. We
draw on these insights to introduce and axiomatise fairness deviation measures
for population-level scores. We demonstrate that with appropriate choices of
grouping, these novel global fairness scores can provide notions of (sub-)group
or individual fairness.
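To make the grouping and agglomeration choices described above concrete, here is a minimal sketch, not the paper's exact formulation: it computes a calibration error per group, where the groups may come either from binning the predictions or from input features, and then combines the group errors with different agglomeration functions. The function name `grouped_calibration_score`, its parameters, and the toy data are illustrative assumptions.

```python
# Minimal sketch of a grouped calibration score, illustrating the two
# choices discussed in the abstract: how datapoints are grouped and how
# per-group errors are agglomerated. Names are illustrative only.
import numpy as np

def grouped_calibration_score(y_true, y_prob, groups, agglomerate="mean"):
    """Compute per-group calibration errors and combine them.

    y_true      : observed binary labels (0/1)
    y_prob      : predicted probabilities
    groups      : group ids -- from binning predictions (ECE-style) or
                  from input features (feature-based grouping)
    agglomerate : how to combine group errors ("mean", "max", or "l2")
    """
    y_true, y_prob, groups = map(np.asarray, (y_true, y_prob, groups))

    errors, weights = [], []
    for g in np.unique(groups):
        mask = groups == g
        # Calibration error of group g: gap between mean predicted
        # probability and observed label frequency within the group.
        errors.append(abs(y_prob[mask].mean() - y_true[mask].mean()))
        weights.append(mask.mean())
    errors, weights = np.array(errors), np.array(weights)

    # Different agglomeration functions yield different population-level
    # scores: a weighted average, a worst-group score, or an L2-style score.
    if agglomerate == "mean":
        return float(np.sum(weights * errors))
    if agglomerate == "max":
        return float(errors.max())
    if agglomerate == "l2":
        return float(np.sqrt(np.sum(weights * errors ** 2)))
    raise ValueError(f"unknown agglomeration: {agglomerate}")


# Example: grouping by a binary input feature rather than by prediction bins.
rng = np.random.default_rng(0)
feature = rng.integers(0, 2, size=1000)
y_prob = np.clip(0.3 + 0.4 * feature + rng.normal(0, 0.05, 1000), 0, 1)
y_true = rng.binomial(1, np.clip(y_prob + 0.1 * feature, 0, 1))
print(grouped_calibration_score(y_true, y_prob, groups=feature, agglomerate="max"))
```

In this sketch, the "max" agglomeration corresponds to a worst-group notion, echoing the abstract's point that with appropriate grouping a population-level score can express (sub-)group fairness.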

ID: 129911; Latest Change: May 16, 2023, 7:32 a.m.