TY - GEN
T1 - HyperFA∗IR
T2 - 8th Annual ACM Conference on Fairness, Accountability, and Transparency, FAccT 2025
AU - Cartier Van Dissel, Mauritz N.
AU - Martin-Gutierrez, Samuel
AU - Espín-Noboa, Lisette
AU - Jaramillo, Ana María
AU - Karimi, Fariba
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/6/23
Y1 - 2025/6/23
AB - Ranking algorithms play a pivotal role in decision-making processes across diverse domains, from search engines to job applications. When rankings directly impact individuals, ensuring fairness becomes essential, particularly for groups that are marginalised or misrepresented in the data. Most existing group fairness frameworks rely on ensuring proportional representation of protected groups. However, these approaches face limitations in accounting for the stochastic nature of ranking processes or the finite size of candidate pools. To address these limitations, we present hyperFA∗IR, a framework for assessing and enforcing fairness in rankings drawn from a finite set of candidates. It relies on a generative process based on the hypergeometric distribution, which models real-world scenarios by sampling without replacement from fixed group sizes. This approach improves fairness assessment when top-k selections are large relative to the pool or when protected groups are small. We compare our approach to the widely used binomial model, which treats each draw as independent with fixed probability, and demonstrate, both analytically and empirically, that our method more accurately reproduces the statistical properties of sampling from a finite population. To operationalise this framework, we propose a Monte Carlo-based algorithm that efficiently detects unfair rankings by avoiding computationally expensive parameter tuning. Finally, we adapt our generative approach to define affirmative action policies by introducing weights into the sampling process.
KW - algorithmic fairness
KW - bias in computer systems
KW - group fairness
KW - hypergeometric distribution
KW - ranking
KW - top-k selection
UR - https://www.scopus.com/pages/publications/105010820634
U2 - 10.1145/3715275.3732143
DO - 10.1145/3715275.3732143
M3 - Conference contribution
AN - SCOPUS:105010820634
T3 - ACM FAccT 2025 - Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency
SP - 2112
EP - 2126
BT - FAccT '25 - Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency
PB - Association for Computing Machinery
Y2 - 23 June 2025 through 26 June 2025
ER -