Exploiting Computational Scale for Richer Model-Based Inference
Understanding the deluge of scientific data acquired from next-generation technologies, from astronomy to neuroscience, requires advances in translating our existing knowledge into useful models. Here I show how recent advances in scalable computing, from “serverless” cloud offerings to deep function approximation, can let us capture and exploit this prior knowledge. Examples include models derived from human intuition (for neural connectomics), carefully engineered physical systems (for imaging through scattering media), and even direct simulation (for superresolution microscopy). By expanding the space of models we can work with, we can avoid common data science pitfalls while making computing at scale accessible to the entire scientific community.
Host: Sanjay Krishnan
I am an associate professor in the Department of Computer Science here at the University of Chicago. My research interests include biological signal acquisition, inverse problems, machine learning, heliophysics, neuroscience, and other exciting ways of exploiting scalable computation to understand the world. Previously I was at the Berkeley Center for Computational Imaging and RISELab at UC Berkeley EECS, working with Ben Recht.