Failure Mode and Effect Analysis (FMEA) has been used as an engineering risk assessment tool since 1949. FMEAs are effective in pre-emptively identifying and addressing how a device might fail and are often used in the design of high-risk technologies in industries such as military, automotive, and medical. In this talk, we explore how FMEAs can be used to identify social and ethical failures of Artificially Intelligent Systems (AISs) when they interact with a human partner. I will begin by proposing a process for developing a Social FMEA (So-FMEA) by building on the existing FMEA framework and the concept of Social Failure Modes coined by Millar. Using this proposed process, I will discuss a simple proof-of-concept focusing on failures in human-AI interaction.
This talk is remote. For Zoom information, subscribe to the UChicago HCI Club mailing list.
Shalaleh is a PhD candidate in electrical and computer engineering at McGill University and Mila, where she works on interdisciplinary research challenges with her colleagues at the Responsible Autonomy and Intelligent Systems Ethics (RAISE) Lab. Her current research focuses on how development teams can characterize, identify, and mitigate social and ethical failures of machine learning (ML) systems as early as possible in the ML development process. Moreover, she is interested in investigating how these types of failures reveal themselves in human-ML interactions. Prior to starting her PhD, she co-founded Generation R Consulting, a boutique AI ethics consultancy, and was a full-time design researcher with the Open Roboethics Institute.
She is currently the executive director of the Open Roboethics Institute (ORI), previously the Open Roboethics initiative (ORi), which was established in 2012. ORI is a Canadian not-for-profit organization focusing on education and public engagement initiatives.