AI Fairness for People with Disabilities


Pittsburgh, PA - October 27, 2019

Artificial Intelligence (AI) is increasingly used in decision-making that directly affects people’s lives. Much has been written about the potential for AI methods to encode, perpetuate, and even amplify discrimination against marginalized groups in society. Like age, gender, and race, disability status is a protected characteristic, but disability differs from other protected attributes in several crucial ways. First, disabilities manifest, and people adapt to them, in extremely diverse ways. Second, disability information is highly sensitive and often not shared, precisely because of the potential for discrimination; as a result, AI systems may lack the explicit disability information needed to apply established fairness tests and corrections. Third, some disabilities occur at relatively low rates, so affected individuals can appear as data outliers rather than as part of a recognizable subgroup.

This workshop will examine AI fairness, accountability, transparency, and ethics (FATE) in the specific situations of people with disabilities. We invite researchers, disability advocates, policymakers, and practitioners to submit position papers on topics including (but not limited to):

  • Definitions of fairness and how they apply in the disability space
  • Data privacy and control
  • Fairness policies
  • Measuring fairness for disability groups
  • Data collection
  • AI methods that support fairness, transparency, and explainability for disability use cases
  • Methods of handling outlier individuals
  • Ways of approaching fairness for disability groups in AI-based systems
  • Bias and discrimination case studies and experience reports

 

The purpose of this workshop is to build a cross-disciplinary community around AI FATE as it relates to people with disabilities, with the aim of developing new research directions, collaborations, and strategic action plans for increasing impact in research, industry, and policy. Immediate outcomes will be discussed at the workshop and may include organizing a submission to, or a special issue of, the ACM SIGACCESS Newsletter or a similar venue.

Application

Interested attendees should submit a short position paper (2-4 pages) using the ASSETS 2019 paper and poster format.

Please submit your paper (accessible PDF or MS Word) by email, with the subject line “AI Fairness Workshop”, to aiworkshop-assets19@acm.org. Please follow the ASSETS 2019 guidelines on making your paper accessible, and review the guidelines for writing about accessibility.

Papers will be selected for presentation at the workshop based on their contribution and the overall balance of perspectives.

Important Dates

Submission of position paper: July 3rd

Notification of acceptance: July 31st

General registration opens: August 15th

Attendees and costs

There is no cost to attend the workshop, but places are limited. Those selected to present their position papers will be registered for the workshop automatically. Attendees do not need to register for the ASSETS conference.

Organizers

Shari Trewin, IBM Accessibility

Meredith Ringel Morris, Microsoft Research

Shiri Azenkot, Cornell Tech

Stacy Branham, University of California, Irvine

Nicole Bleuel, Google

Phill Jenkins, IBM Accessibility

Jeff Bigham, Apple Accessibility+ML Research

Walter S. Lasecki, University of Michigan
Date: Sunday, October 27, 2019, 7:45am to 7:00pm