Firm Foundations for Private Machine Learning and Statistics
How can researchers use sensitive datasets for machine learning and statistics without compromising the privacy of the individuals who contribute their data? In this talk I will describe my work on the foundations of differential privacy, a rigorous framework for answering this question. In the past decade, differential privacy has gone from a largely theoretical idea to a widely deployed technology, and a theme of the talk will be how new deployments are forcing us to revisit foundational questions about the framework. The talk will cover a range of issues, from the fundamental, such as minimax error rates and the assumptions necessary for private statistical inference, to the applied, such as auditing the privacy properties of algorithms for training neural networks.
Host: Avrim Blum
Jonathan Ullman is an Associate Professor in the Khoury College of Computer Sciences at Northeastern University. Before joining Northeastern, he received his PhD from Harvard in 2013 and was a Junior Fellow in the Simons Society of Fellows starting in 2014. His research centers on privacy for machine learning and statistics, and its surprising connections to topics like statistical validity, robustness, cryptography, and fairness. He has been recognized with an NSF CAREER award and the Ruth and Joel Spira Outstanding Teacher Award.