A Framework for Fairness: A Systematic Review of Existing Fair AI Solutions

December, 2021

Abstract

In a world of daily scientific inquiry and discovery, the prolific adoption of machine learning across industries comes as little surprise to those familiar with the potential of ML. Nor should the parallel expansion of ethics-focused research that emerged in response to the issues of bias and unfairness stemming from those very same applications. Fairness research, which focuses on techniques to combat algorithmic bias, is now more supported than ever before. A large portion of this research has gone toward producing tools that machine learning practitioners can use to audit their algorithms for bias during development. Nonetheless, these fairness solutions see little application in practice. This systematic review provides an in-depth summary of the algorithmic bias issues that have been defined and the fairness solution space that has been proposed. It also provides a detailed breakdown of the caveats to that solution space that have arisen since its release, along with a taxonomy of needs expressed by machine learning practitioners, fairness researchers, and institutional stakeholders. These needs are organized and addressed to the parties most influential to their implementation: fairness researchers, organizations that produce ML algorithms, and the machine learning practitioners themselves. These findings can be used to bridge the gap between practitioners and fairness experts and to inform the creation of usable fair ML toolkits.
