Flatiron Internal Conference: Flatiron-wide Algorithms and Mathematics

Timezone: America/New_York
Venue: Ingrid Daubechies Auditorium (2nd floor), 162 5th Avenue, New York, NY 10010
Organizer: Alex Barnett

Description

FWAM

Flatiron-wide Algorithms and Mathematics (FWAM) is a 2.5-day internal conference that introduces and surveys a range of numerical algorithms and tools essential to research done at Flatiron and beyond. We also aim to form research connections across (and within) the centers, and to showcase some of the research that makes use of these methods. Topics were chosen to be crucial to two or more centers. There are five half-day topics; each begins with at least one accessible, practical introductory lecture, followed by short talks that teach sub-topics or present applications to research.


Organization team:

Admin: Marian Jakubiak
SCC: Andras Pataki, Pat Gunn

CCA: Gabriella Contardo, Keaton Burns, Dan Foreman-Mackey

CCB: Mike Shelley, Mariano Gabitto

CCM: Manas Rachh, Alex Barnett

CCQ: Olivier Parcollet, Giuseppe Carleo

Wrangler-in-chief: Alex Barnett


Session chairs:

Wed am: Alex Barnett / Wed pm: Dan Foreman-Mackey

Thurs am: Olivier Parcollet / Thurs pm: Mike Shelley

Fri am: Gabriella Contardo


ZOOM DETAILS IF JOINING REMOTELY:

Join from PC, Mac, Linux, iOS or Android: https://simonsfoundation.zoom.us/j/536451221

Or Telephone:
    Dial (for higher quality, dial a number based on your current location):
        US: +1 646 558 8656  or +1 669 900 6833
    Meeting ID: 536 451 221
    International numbers available: https://zoom.us/u/ahu1dujLA

Wednesday

    • 08:15 09:00
      Breakfast 45m
    • 09:00 09:10
      Welcome
    • 09:10 10:10
      Optimization Introductory Lecture 1
      • 09:10
        Optimization landscapes - a gentle multi-disciplinary introduction to optimization 1h

        In this lecture, I will give an introduction to the field of continuous optimization. I will emphasize instances of optimization problems that appear in biology and physics through the concept of optimization landscapes. I will review sampling-based approaches as well as gradient-based methods, focusing on concepts rather than derivations of specific algorithms. The lecture is intended to set the stage for the later, focused talks, and will provide links to other topics covered in the FWAM conference.

        Speaker: Christian L. Mueller (CCM / LMU)
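A minimal sketch of the gradient-based approach the lecture introduces: plain gradient descent on a simple quadratic landscape. The landscape, step size, and iteration count here are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def f(p):
    # A toy quadratic "landscape" with a unique minimum at (1, 2).
    x, y = p
    return (x - 1.0) ** 2 + 10.0 * (y - 2.0) ** 2

def grad_f(p):
    # Analytic gradient of f.
    x, y = p
    return np.array([2.0 * (x - 1.0), 20.0 * (y - 2.0)])

def gradient_descent(p0, step=0.04, n_iter=200):
    # Repeatedly step downhill along the negative gradient.
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        p = p - step * grad_f(p)
    return p

p_star = gradient_descent([0.0, 0.0])   # converges to (1, 2)
```

Note that the step size must respect the landscape's curvature: for this quadratic, steps larger than 0.1 along the stiff y-direction would diverge.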
    • 10:10 10:20
      Break
    • 10:20 11:20
      Optimization Introductory Lecture 2
    • 11:20 11:40
      Break
    • 11:40 12:30
      Optimization Short Talks
      • 11:40
        A practical introduction to adjoint methods 25m
        Speaker: Leslie Greengard (CCM / NYU)
      • 12:05
        Research Applications of Optimization: Data-Driven Spectroscopy 25m
        Speaker: Megan Bedell (CCA)
    • 12:30 14:00
      Lunch 1h 30m
    • 14:00 15:00
      Function Approximation and Differential Equations Introductory Lecture 1
      • 14:00
        Introduction to interpolation, integration and spectral methods 1h

        I overview key concepts and practical methods for efficient and accurate numerical function approximation, integration and differentiation. This is the basis for spectral and other ODE/PDE solvers coming up in the next talk. I will teach concepts such as convergence rate, local/global, adaptivity, rounding error, polynomial and Fourier bases. The focus is on 1D, with pointers to higher-dimensional methods and codes.

        Lecture notes (see Lecture I) and codes for demo figures at:
        https://github.com/ahbarnett/fwam-numpde

        Speaker: Alex Barnett (CCM)
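As a concrete illustration of convergence rate, one of the concepts this lecture covers, the sketch below compares the composite trapezoid rule (algebraic convergence) against Gauss-Legendre quadrature (spectral convergence for smooth integrands). The test integrand and node counts are illustrative assumptions, not from the lecture notes.

```python
import numpy as np

f = lambda x: np.exp(np.cos(x))   # a smooth (analytic) test integrand
a, b = 0.0, 1.0

def trapezoid(n):
    # Composite trapezoid rule on n panels: error is O(1/n^2).
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def gauss(n):
    # n-node Gauss-Legendre rule: error decays spectrally for smooth f.
    xg, wg = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * xg + 0.5 * (a + b)
    return 0.5 * (b - a) * np.sum(wg * f(x))

I_ref = gauss(50)                  # reference value from a very fine Gauss rule
err_trap = abs(trapezoid(16) - I_ref)
err_gauss = abs(gauss(16) - I_ref)
```

With 16 nodes each, the Gauss rule is already accurate to near machine precision, while the trapezoid rule still carries an O(h^2) error.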
    • 15:00 15:10
      Break
    • 15:10 15:40
      Function Approximation and Differential Equations Introductory Lecture 2
      • 15:10
        Overview of various methods to solve differential equations 30m

        We overview various numerical methods to solve ODEs and PDEs.

        For source of notes see: https://github.com/ahbarnett/fwam-numpde

        Speaker: Keaton Burns (MIT / CCA)
    • 15:40 16:00
      Break
    • 16:00 17:15
      Function Approximation and Differential Equations Short Talks
      • 16:00
        PDEs: The long and the short. 25m
        Speaker: Michael Shelley (CCB)
      • 16:25
        Introduction to Integral Equation Methods 25m
        Speaker: Jun Wang (CCM)
      • 16:50
        Wavelets 25m
        Speaker: Joakim Andén (CCM)
    • 17:15 18:15
      Reception 1h
Thursday

    • 08:15 09:00
      Breakfast 45m
    • 09:00 09:45
      Sampling Introductory Lecture 1
      • 09:00
        Introduction to Markov chain Monte Carlo 45m
        Speaker: Dan Foreman-Mackey (CCA)
    • 09:45 09:55
      Break
    • 09:55 10:40
      Sampling Introductory Lecture 2
      • 09:55
        Scalable Bayesian Inference. 45m

        In this short tutorial, I will review variational inference (VI), a method to approximate posterior probability distributions through optimization. VI became popular as it provides faster convergence than more traditional sampling methods.

        This tutorial aims to provide both an introduction and an overview of recent developments. First, I will provide a review of variational inference. Second, I describe some popular advancements such as stochastic variational inference, and variational autoencoders. During the talk, I will establish some connections with mathematical problems in different centers at Flatiron.

        Speaker: Mariano Gabitto (CCB)
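A minimal sketch of the core idea that VI turns inference into optimization: fit a 1-D Gaussian q = N(mu, sigma) to a known Gaussian target by gradient descent on the closed-form KL divergence. Real VI maximizes a sampled ELBO for intractable posteriors; the closed-form KL, the target, and the step sizes below are simplifying assumptions for illustration only.

```python
import numpy as np

# Target "posterior": a 1-D Gaussian N(mu_p, sigma_p).
mu_p, sigma_p = 3.0, 2.0

def grad_kl(mu, log_sigma):
    # Gradient of the closed-form KL( N(mu, sigma) || N(mu_p, sigma_p) )
    # with respect to mu and log(sigma).
    sigma = np.exp(log_sigma)
    d_mu = (mu - mu_p) / sigma_p**2
    d_log_sigma = sigma**2 / sigma_p**2 - 1.0
    return d_mu, d_log_sigma

# Optimize the variational parameters by plain gradient descent.
mu, log_sigma = 0.0, 0.0
for _ in range(500):
    d_mu, d_ls = grad_kl(mu, log_sigma)
    mu -= 0.1 * d_mu
    log_sigma -= 0.1 * d_ls
```

The optimum recovers the target parameters (mu = 3, sigma = 2), since here the variational family contains the target exactly.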
    • 10:40 11:00
      Break
    • 11:00 11:50
      Sampling Short Talk
      • 11:00
        The quantum-ness in quantum Monte Carlo: mathematical and algorithmic implications 40m
        Speaker: Prof. Shiwei Zhang (CCQ)
    • 11:50 12:10
      Sampling Short Talk
      • 11:50
        Hierarchical Modeling and Stellar Velocities 20m

        An introduction to Bayesian hierarchical modeling, with an example from my own research modeling repeated velocity measurements of distant stars in the Milky Way.

        Speaker: Emily Cunningham (CCA)
    • 12:10 14:00
      Lunch 1h 50m
    • 14:00 14:45
      Deep Learning Introductory Lecture 1
      • 14:00
        Introduction to Deep Learning 45m

        A "Deep Learning 101" covering machine/deep-learning principles, neural networks, back-propagation, convolutional nets, and representation learning.

        Speaker: Gabriella Contardo (CCA)
    • 14:45 14:55
      Break
    • 14:55 15:40
      Deep Learning Introductory Lecture 2
      • 14:55
        Introduction to Deep Learning 45m

        We will introduce state-of-the-art deep learning methods and showcase some of their applications to astrophysical challenges.

        Speaker: Shirley Ho (CCA)
    • 15:40 16:00
      Break
    • 16:00 17:15
      Deep Learning Short Talks
      • 16:00
        Uncertainty Estimation with Neural Networks 25m
        Speaker: Laurence Levasseur (CCA / U. de Montreal)
      • 16:25
        Deep Generative Modeling for Statistical and Quantum Physics 25m
        Speaker: Giuseppe Carleo (CCQ)
      • 16:50
        Biological neural network algorithms 25m

        Although traditional artificial neural networks were inspired by the brain, they resemble biological neural networks only superficially. Successful machine learning algorithms like backpropagation violate fundamental biophysical observations, suggesting that the brain employs other algorithms to analyze the high-dimensional datasets streamed by our sensory organs. We have been developing neuroscience-based machine learning by deriving algorithms and neural networks from objective functions based on the principle of similarity preservation. Similarity-based neural networks rely exclusively on biologically plausible local learning rules and solve important unsupervised learning tasks such as dimensionality reduction, clustering and manifold learning. In addition to modeling biological networks, similarity-based algorithms are competitive for Big Data applications. For further information please see http://www.offconvex.org/2018/12/03/MityaNN2/

        Speaker: Mitya Chklovskii (CCB)
    • 17:15 18:15
      Reception 1h
Friday

    • 08:30 09:15
      Breakfast 45m
    • 09:15 10:30
      Dimension Reduction and Factorization Short Talks
      • 09:15
        Randomized linear algebra and matrix approximation 25m

        The goal of this talk is to show how probabilistic methods can be used to accelerate standard matrix factorizations (e.g. SVD) with provable characteristics in terms of speed and accuracy.

        Speaker: Eftychios Pnevmatikakis (CCM)
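The core idea, in the spirit of the Halko-Martinsson-Tropp randomized range finder, can be sketched in a few lines of NumPy; the oversampling parameter and synthetic test matrix below are illustrative choices, not from the talk.

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, seed=0):
    # Randomized range finder + small dense SVD.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # 1. Sample the range of A with a Gaussian test matrix.
    Omega = rng.standard_normal((n, rank + n_oversample))
    Q, _ = np.linalg.qr(A @ Omega)   # orthonormal basis for the sampled range
    # 2. Project A onto the basis and factorize the small matrix.
    B = Q.T @ A                      # (rank + n_oversample) x n, cheap to SVD
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank]

# Demo on a synthetic matrix of exact rank 5.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
U, s, Vt = randomized_svd(A, rank=5)
rel_err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

Because the test matrix almost surely captures the 5-dimensional range, the reconstruction is accurate to machine precision while only factorizing a 15 x 100 matrix.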
      • 09:40
        Spectral Clustering and Dimensionality Reduction 25m
        Speaker: Marina Spivak (CCM)
      • 10:05
        Clustering in low dimensions 25m

        I will focus on clustering data points in low dimensions (mostly 2d) and provide an overview of some popular clustering algorithms.

        The accompanying live notebook is linked from my homepage: https://users.flatironinstitute.org/~magland

        Speaker: Jeremy Magland (CCM)
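A self-contained sketch of one popular algorithm in this family, Lloyd's k-means, on two synthetic 2-D blobs. The data, the farthest-point initialization, and the parameters are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    # Farthest-point initialization: deterministic, spread-out seed centers.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    # Lloyd's iterations: assign points to nearest center, recompute means.
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Demo: two well-separated 2-D blobs of 50 points each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
               rng.normal(5.0, 0.3, (50, 2))])
labels, centers = kmeans(X, k=2)
```

With plain random initialization, k-means can converge to poor local optima; in practice one uses several restarts or a seeding scheme such as k-means++.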
    • 10:30 10:50
      Break
    • 10:50 11:35
      Dimension Reduction and Factorization Introductory Lecture
      • 10:50
        Fast algorithms for hierarchically structured matrices 45m

        In this talk, I will discuss what hierarchically structured matrices are, where they occur in practice, and present algorithms for factorizing these structured matrices. I will demonstrate how the factorization enables subsequent matrix operations (applying the matrix, computing its inverse, and its determinant) in linear CPU time.

        Speaker: Manas Rachh (CCM)
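The key fact exploited by hierarchically structured matrices is that blocks coupling well-separated point clusters are numerically low-rank. A minimal check of this property, where the kernel, geometry, and tolerance are illustrative assumptions rather than examples from the talk:

```python
import numpy as np

# Sources and targets in two well-separated 1-D clusters.
x = np.linspace(0.0, 1.0, 200)             # source points
y = np.linspace(3.0, 4.0, 200)             # target points, far from sources
K = 1.0 / np.abs(y[:, None] - x[None, :])  # 200 x 200 kernel interaction block

# Singular values of the separated block decay rapidly, so it compresses.
s = np.linalg.svd(K, compute_uv=False)
numerical_rank = int(np.sum(s / s[0] > 1e-10))
```

Despite being a dense 200 x 200 block, its numerical rank at 1e-10 relative tolerance is only O(10); hierarchical factorizations apply such compression recursively across scales to reach linear complexity.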
    • 11:35 12:25
      Dimension Reduction and Factorization Short Talks
      • 11:35
        The why and how of nonnegative matrix factorization 25m

        Nonnegative matrix factorization (NMF) has become a widely used tool for the analysis of high-dimensional data as it automatically extracts sparse and meaningful features from a set of nonnegative data vectors. I first illustrate this property of NMF on some applications. Then I address the problem of solving NMF, which is NP-hard in general, and review some standard NMF algorithms. Finally, I briefly describe an online NMF algorithm, which scales up gracefully to large data sets.

        Speaker: Johannes Friedrich (CCB)
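A sketch of one standard NMF algorithm of the kind the talk reviews, the Lee-Seung multiplicative updates for the Frobenius objective. The synthetic data, rank, and iteration count are illustrative assumptions, not from the talk.

```python
import numpy as np

def nmf(X, r, n_iter=500, seed=0):
    # Lee-Seung multiplicative updates for  min ||X - W H||_F^2,  W, H >= 0.
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + 0.1   # positive random initialization
    H = rng.random((r, n)) + 0.1
    eps = 1e-12                    # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Demo: synthetic data with an exact nonnegative rank-3 factorization.
rng = np.random.default_rng(1)
X = rng.random((60, 3)) @ rng.random((3, 40))
W, H = nmf(X, r=3)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

Because the updates only multiply entries by nonnegative ratios, nonnegativity of W and H is preserved automatically at every iteration.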
      • 12:00
        Introduction to Tensor Network Methods 20m

        Tensor network methods are a family of variational algorithms used to simulate many-body quantum systems in a variety of situations. With some brief motivation from physics, I'll explain why anyone would want to use these methods, why they are so effective for certain classes of problems, and some extensions to other fields like machine learning.

        Speaker: Katharine Hyatt (CCQ)
    • 12:25 12:30
      No lunch will be served: please order from Seamless 5m