Transductive Robust Learning Guarantees
We study the problem of adversarially robust learning in the transductive setting. For classes H of bounded VC dimension, we propose a simple transductive learner that, when presented with a set of labeled training examples and a set of unlabeled test examples (both possibly adversarially perturbed), correctly labels the test examples with a robust error rate that is linear in the VC dimension and adaptive to the complexity of the perturbation set. This result provides an exponential improvement in the dependence on VC dimension over the best known upper bound on the robust error in the inductive setting, at the expense of competing with a more restrictive notion of optimal robust error.
Joint work with Steve Hanneke and Nathan Srebro (https://arxiv.org/abs/2110.10602).
Presence at TTIC requires being fully vaccinated for COVID-19 or having a TTIC or UChicago-approved exemption. Masks are required in all common areas. Full visitor guidance available at ttic.edu/visitors.
For Zoom information, contact Denise Howard, email@example.com
Host: Machine Learning Seminar Series
Omar Montasser is a fifth-year PhD student at TTI-Chicago advised by Nathan Srebro. His main research interest is the theory of machine learning. Recently, his research has focused on understanding and characterizing adversarially robust learning, and on designing algorithms with provable robustness guarantees in different settings. His work has been recognized with a best student paper award at COLT 2019.