Stephen Tu (Google) - The foundations of machine learning for feedback control
Recent breakthroughs in machine learning inspire unparalleled optimism about the future capabilities of artificial intelligence. However, despite impressive progress, modern machine learning methods still operate under the fundamental assumption that the data seen at test time are drawn from the same distribution as the training examples. In order to build robust intelligent systems (self-driving vehicles, robotic assistants, smart grids) that safely interact with and control their surrounding environment, one must reason about the feedback effects of models deployed in closed loop.
In this talk, I will discuss my work on developing a principled understanding of learning-based feedback systems, grounded in the context of robotics. First, motivated by the fact that many real-world systems naturally produce sequences of data with long-range dependencies, I will present recent progress on the fundamental problem of learning from temporally correlated data streams. I will show that, in many situations, learning from correlated data can be as statistically efficient as if the data were independent. I will then examine how incremental stability, a core idea in classical control theory, can be used to study feedback-induced distribution shift. In particular, I will characterize how an expert policy's stability properties affect the end-to-end sample complexity of imitation learning. I will conclude by showing how these insights lead to practical algorithms and data-collection strategies for imitation learning.
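For readers unfamiliar with the term, one standard formalization consistent with the abstract (an illustrative sketch, not necessarily the precise definition used in the talk) is incremental exponential stability: a discrete-time system $x_{t+1} = f(x_t)$ is incrementally exponentially stable if there exist constants $C \ge 1$ and $\rho \in (0, 1)$ such that, for any two initial conditions $x_0$ and $x_0'$ with resulting trajectories $x_t$ and $x_t'$,

\[ \|x_t - x_t'\| \le C \, \rho^{t} \, \|x_0 - x_0'\| \quad \text{for all } t \ge 0. \]

Intuitively, trajectories started nearby converge toward one another geometrically, so small deviations introduced by an imperfect learned policy do not compound over time; this is the connection to feedback-induced distribution shift.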
Speaker
Stephen Tu
Stephen Tu is a research scientist at Robotics at Google in New York City. His research focuses on developing a principled understanding of the effects of using machine learning models for feedback control, with a particular emphasis on robotics applications. He received his Ph.D. in EECS from the University of California, Berkeley, under the supervision of Ben Recht.