Assess and Manage Subgroup Bias
This practice addresses requirements from the EU guidelines for trustworthy ML.
Intent
Motivation
Applicability
Description
Subgroup bias can arise from improperly divided groups, often groups that were defined precisely to avoid group-level bias, or from a lack of data. For example, consider an application in which the data is split by location into New York and Amsterdam. After the split, it may turn out that the New York data is predominantly female while the Amsterdam data is predominantly male. Such a division introduces subgroup bias, which ultimately leads to socially biased models.
To avoid subgroup bias, it is imperative to test, assess, and calibrate models for each subgroup, just as is done for social bias.
Follow the references below to learn more about technical approaches, such as multicalibration and subgroup fairness auditing, that ensure fair predictions for every sub-population that can be identified within a set of groups.
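The sketch below illustrates one way to start such an assessment: computing a calibration gap for every intersection of grouping attributes and flagging subgroups that deviate too far. It is a minimal sketch, assuming a pandas DataFrame with hypothetical columns ("location", "gender", "y_true", "y_score"), synthetic data, and an illustrative tolerance; the multicalibration and subgroup fairness techniques in the references go considerably further.
```python
# Minimal per-subgroup calibration check. Column names, the synthetic data, and
# the 0.05 tolerance are illustrative assumptions, not part of the practice itself.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)

# Synthetic stand-in for real model predictions on held-out data.
n = 4000
df = pd.DataFrame({
    "location": rng.choice(["New York", "Amsterdam"], size=n),
    "gender": rng.choice(["female", "male"], size=n),
    "y_true": rng.integers(0, 2, size=n),
})
df["y_score"] = np.clip(0.6 * df["y_true"] + rng.normal(0.2, 0.25, size=n), 0.0, 1.0)

# Evaluate every intersection of the grouping attributes, not each attribute in
# isolation, so that a bias hidden inside a subgroup (e.g. women in New York)
# becomes visible.
grouped = df.groupby(["location", "gender"])
report = pd.DataFrame({
    "mean_score": grouped["y_score"].mean(),
    "positive_rate": grouped["y_true"].mean(),
    "n_samples": grouped.size(),
})
# Calibration gap: how far the average predicted score is from the observed
# positive rate within each subgroup.
report["calibration_gap"] = (report["mean_score"] - report["positive_rate"]).abs()
print(report)

# Flag subgroups whose gap exceeds the (illustrative) tolerance for follow-up.
print(report[report["calibration_gap"] > 0.05])
```
In practice, flagged subgroups are candidates for additional data collection or for recalibrating the model on those subgroups.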
Adoption
Related
- Test for Social Bias in Training Data
- Prevent Discriminatory Data Attributes Used As Model Features
- Perform Risk Assessments
Read more
- Multicalibration: Calibration for the (Computationally-Identifiable) Masses
- Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness