Chi Jin (UC Berkeley) - Machine Learning: Why Do Simple Algorithms Work So Well?
While state-of-the-art machine learning models are deep, large-scale, sequential, and highly nonconvex, the backbone of modern learning algorithms is a set of simple methods such as stochastic gradient descent or, in the case of reinforcement learning tasks, Q-learning. A basic question endures: why do simple algorithms work so well even in these challenging settings?
This talk focuses on two fundamental problems: (1) in nonconvex optimization, can gradient descent escape saddle points efficiently? (2) in reinforcement learning, is Q-learning sample efficient? We provide the first provably positive answers to both questions. In particular, we show that simple modifications to these classical algorithms guarantee significantly better properties, which helps explain the underlying mechanisms behind their favorable performance in practice.
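For the first question, one simple modification of the kind alluded to is to perturb gradient descent with a small amount of random noise whenever the gradient becomes small, so that the iterate can slide off a saddle point. The sketch below is a minimal Python illustration of that idea only; the hyperparameters (step size, thresholds, perturbation radius) and the toy objective are illustrative assumptions, not values or results from the talk.

```python
import numpy as np

def perturbed_gradient_descent(grad, x0, eta=0.1, g_thresh=1e-3,
                               radius=1e-2, t_noise=10, n_iters=1000,
                               seed=0):
    """Gradient descent with occasional random perturbations.

    When the gradient norm falls below g_thresh (a possible saddle
    point) and enough steps have passed since the last perturbation,
    add noise sampled uniformly from a small ball; otherwise take a
    plain gradient step. All constants here are illustrative.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    last_perturb = -t_noise  # allow a perturbation at the first step
    for t in range(n_iters):
        g = grad(x)
        if np.linalg.norm(g) < g_thresh and t - last_perturb >= t_noise:
            # Sample a point uniformly from a ball of the given radius.
            d = rng.normal(size=x.shape)
            d *= radius * rng.uniform() ** (1.0 / x.size) / np.linalg.norm(d)
            x = x + d
            last_perturb = t
        else:
            x = x - eta * g
    return x

# Toy objective f(x, y) = x^2 + y^4/4 - y^2/2: saddle point at the
# origin, minima near (0, +/-1). Plain GD started at the origin stays
# stuck; the perturbed version escapes toward a minimum.
grad_f = lambda v: np.array([2.0 * v[0], v[1] ** 3 - v[1]])
print(perturbed_gradient_descent(grad_f, x0=[0.0, 0.0]))
```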
Host: Rebecca Willett
Chi Jin
Chi Jin is a Ph.D. candidate in Computer Science at UC Berkeley, advised by Michael I. Jordan. He received a B.S. in Physics from Peking University. His research interests lie in machine learning, statistics, and optimization, with his Ph.D. work primarily focused on nonconvex optimization and reinforcement learning.