Scientific Computing Seminar (SCS)

SCS: Leo Zepeda-Nunez (Google / U. Wisconsin Madison)

3rd Floor Classroom/3-Flatiron Institute (162 5th Avenue)

Description

Leo will tell us about recent work in deep learning for inverse scattering.

Title: Advances in latent representation learning for dynamical systems and inverse problems.

Abstract: Computing latent representations of physical processes plays an important role in scientific machine learning, which often leverages such representations to bypass otherwise expensive computations. Unfortunately, such representations are seldom unique, so machine-learning approaches usually require domain-specific regularization in the form of inductive biases.

In this talk we present applications of latent representations in two different contexts: inverse scattering in the super-resolution regime and data-driven discovery of dynamical systems. For each application, we discuss the inductive biases necessary for the task at hand and their rationale, and we provide numerical evidence of their effectiveness.

For the first application, we present a surrogate model for approximating the inverse scattering map. We leverage the multi-level structure of wide-band signals, together with the butterfly factorization, to design a neural network architecture whose weights are the latent representation of the inverse scattering map. The network efficiently bypasses the expensive optimization loop of traditional inversion pipelines, while super-resolving scatterers with sub-Nyquist features, provided that they are parametrized by a possibly unknown, relatively low-dimensional manifold.
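
For a concrete picture of the structure such an architecture exploits, the sketch below applies a butterfly-factorized linear operator: log2(N) sparse stages of 2x2 mixing blocks, so a matrix-vector product costs O(N log N) rather than O(N^2). The FFT-style stage layout and the 2x2 block parametrization are illustrative assumptions, not the exact factorization used in the talk.

    import numpy as np

    def butterfly_apply(factors, x):
        # Apply a butterfly-factorized operator in O(N log N).
        # factors[s] holds n/2 learnable 2x2 blocks for stage s, which mixes
        # entry pairs at stride 2**s (FFT-style pattern; illustrative layout).
        n = x.shape[0]
        y = np.array(x, dtype=float)
        for s, blocks in enumerate(factors):
            stride = 2 ** s
            out = np.empty_like(y)
            b = 0
            for start in range(0, n, 2 * stride):
                for j in range(stride):
                    i0, i1 = start + j, start + j + stride
                    B = blocks[b]
                    out[i0] = B[0, 0] * y[i0] + B[0, 1] * y[i1]
                    out[i1] = B[1, 0] * y[i0] + B[1, 1] * y[i1]
                    b += 1
            y = out
        return y

    n = 8
    rng = np.random.default_rng(0)
    # O(n log n) parameters total, versus n**2 for a dense layer.
    factors = [rng.normal(size=(n // 2, 2, 2)) for _ in range(int(np.log2(n)))]
    print(butterfly_apply(factors, rng.normal(size=n)))

With n/2 blocks of 4 entries per stage, the operator has (n/2) * 4 * log2(n) parameters, which is the sense in which the factorization compresses a dense wide-band operator.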

For the second application, we present a data-driven, space-time continuous framework for learning surrogate models of complex physical systems described by advection-dominated PDEs. In particular, we construct hypernetwork-based latent dynamical models directly on the parameter space of a compact representation network. We leverage the expressive power of the network and a specially designed consistency-inducing regularization to obtain latent trajectories that are both low-dimensional and smooth, which, in turn, render our surrogate models highly efficient at inference time.
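
As a minimal numerical sketch of the hypernetwork construction: a low-dimensional latent state z(t) evolves smoothly and is decoded into the weights of a small coordinate network, giving a field u(x, t) that can be queried at any point in space and time. Here a linear hypernetwork H, linear latent dynamics A, and a forward-Euler step stand in for the learned components; all are illustrative assumptions rather than the talk's actual models.

    import numpy as np

    rng = np.random.default_rng(0)
    d_z, d_h = 4, 16                                  # latent dim, hidden width
    shapes = [((d_h, 1), (d_h,)), ((1, d_h), (1,))]   # tiny coordinate MLP
    n_w = sum(int(np.prod(w)) + int(np.prod(b)) for w, b in shapes)
    H = rng.normal(size=(n_w, d_z)) / np.sqrt(d_z)    # hypernetwork (illustrative)
    A = 0.1 * rng.normal(size=(d_z, d_z))             # latent dynamics (illustrative)

    def weights_from_latent(z):
        # Hypernetwork: map latent code z(t) to the coordinate network's weights.
        flat, i, params = H @ z, 0, []
        for wshape, bshape in shapes:
            wn, bn = int(np.prod(wshape)), int(np.prod(bshape))
            params.append((flat[i:i + wn].reshape(wshape), flat[i + wn:i + wn + bn]))
            i += wn + bn
        return params

    def u(x, z):
        # Space-time continuous surrogate: the field at point x, at the time
        # implicitly encoded by the current latent state z(t).
        (W1, b1), (W2, b2) = weights_from_latent(z)
        return (W2 @ np.tanh(W1 @ np.atleast_1d(x) + b1) + b2)[0]

    # A smooth, low-dimensional latent trajectory: z' = A z, forward Euler.
    z, dt = rng.normal(size=d_z), 0.1
    for step in range(1, 6):
        z = z + dt * (A @ z)
        print(f"t={step * dt:.1f}  u(0.5, t)={u(0.5, z):+.4f}")

Because the quantity being evolved is the latent trajectory rather than the network weights themselves, inference reduces to integrating a low-dimensional ODE plus cheap decoder evaluations, which is consistent with the efficiency claim above.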

See https://arxiv.org/abs/2212.06068