Michael Maire (UChicago) - Emergent Neural Capabilities from Architectural Foundations
I will describe our recent work on endowing neural networks with new high-level capabilities that emerge as a consequence of principled modifications to their architecture. One such effort extends convolutional networks to encompass long-term, large-scale memory subsystems. Distinct from strategies that connect neural networks to external memory banks via intricately crafted controllers and hand-designed attention mechanisms, our memory is internal, distributed, co-located with computation, and implicitly addressed, while being drastically simpler than prior methods. Another effort accelerates training with an approach that unifies network pruning and architecture search by augmenting a network with learned parameters governing its sparsification. I will discuss the potential for further advances arising from the combination of these techniques.
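To make the two architectural ideas concrete, below is a minimal PyTorch-style sketch. It is an illustrative assumption, not the talk's actual formulation: the additive memory read, the exponential memory write, the sigmoid channel gating, and all names (MemoryConv, GatedConv, sparsity_penalty, decay) are hypothetical stand-ins for the described concepts.

```python
import torch
import torch.nn as nn

class MemoryConv(nn.Module):
    """Convolutional block with an internal, implicitly addressed memory:
    a persistent feature map, co-located with the layer's computation,
    is blended into each forward pass. Illustrative sketch only."""
    def __init__(self, channels, height, width, decay=0.9):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.decay = decay
        # Memory lives inside the layer, not in an external bank
        # managed by a hand-designed controller.
        self.register_buffer("memory", torch.zeros(1, channels, height, width))

    def forward(self, x):
        # Implicit read: memory is blended additively, with no explicit addressing.
        out = torch.relu(self.conv(x) + self.memory)
        # Implicit write: exponential update; detach() keeps this sketch simple
        # by not backpropagating through the memory's history.
        self.memory = (self.decay * self.memory
                       + (1 - self.decay) * out.mean(dim=0, keepdim=True).detach())
        return out

class GatedConv(nn.Module):
    """Convolution whose output channels pass through learned gates; a
    sparsity penalty on the gates lets training itself decide which
    channels to prune, unifying pruning with architecture search."""
    def __init__(self, in_ch, out_ch, kernel=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel, padding=kernel // 2)
        self.gate_logits = nn.Parameter(torch.zeros(out_ch))  # one gate per channel

    def forward(self, x):
        gates = torch.sigmoid(self.gate_logits)        # soft mask in [0, 1]
        return self.conv(x) * gates.view(1, -1, 1, 1)  # scale each output channel

def sparsity_penalty(model, weight=1e-3):
    """Add to the task loss; pushing gates toward zero sparsifies the network,
    so the effective architecture is discovered during training."""
    return weight * sum(torch.sigmoid(m.gate_logits).sum()
                        for m in model.modules() if isinstance(m, GatedConv))
```

A gate driven to zero is equivalent to removing its channel, so the trained network's width at each layer falls out of optimization rather than being fixed in advance; this is one simple way the pruning/architecture-search unification described above could look in code.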
Also available to view via livestream.
Michael Maire
Michael Maire is an assistant professor in the Department of Computer Science at the University of Chicago. He was previously a research assistant professor at the Toyota Technological Institute at Chicago (TTIC), where he maintains a courtesy appointment. Prior to TTIC, he was a postdoctoral scholar at the California Institute of Technology. He received a PhD in computer science from the University of California, Berkeley in 2009. His research interests span computer vision, with an emphasis on perceptual organization and object recognition, and deep learning, with a focus on neural network architectures and optimization.