Designing for the Last Mile in Machine Learning
Machine learning is now a general-purpose technology. In many domains, we can build models to support important decisions or automate routine tasks. Yet we may fail to reap their benefits due to disuse, or even inflict harm through misuse. In this talk, I will present methodological advances that address these "last mile" challenges. First, I will describe a method to learn simple risk scores that are readily adopted for medical decision support, and discuss its applications to adult ADHD screening and ICU seizure prediction. Next, I will describe how machine learning models may harm individuals in consumer-facing applications by violating their right to autonomy. I will then introduce the notion of "recourse" and formalize methods to prevent such harms without interfering with model development.
If you are affiliated with UChicago CS and would like to attend this talk remotely, contact email@example.com for links.
Host: Nick Feamster
Berk Ustun is a postdoc at the Harvard Center for Research on Computation and Society. His research interests are in machine learning, optimization, and human-centered design. In particular, he develops methods to promote the adoption and responsible use of machine learning in domains such as medicine, consumer finance, and criminal justice. Berk has built machine learning systems that are now used by major healthcare providers for hospital readmissions prediction, ICU seizure prediction, and adult ADHD screening. His work has been covered by various media outlets, including NPR and Wired, and has won major awards, including the INFORMS Innovative Applications in Analytics Award in 2016 and 2019, and the INFORMS Computing Society Best Student Paper Award. Berk holds a PhD in Electrical Engineering and Computer Science from MIT, an MS in Computation for Design and Optimization from MIT, and BS degrees in Operations Research and Economics from UC Berkeley.