Flatiron-wide Algorithms and Mathematics (FWAM) is a 2.5-day internal conference whose goal is to overview and introduce a range of numerical algorithms and tools that are essential to research done at Flatiron and beyond. We also aim to form research connections across (and within) the centers, and to showcase some of the research that makes use of these methods. Topics were chosen to be crucial to two or more centers. There are five half-day topics; each begins with at least one accessible, practical, introductory lecture, followed by short talks that may teach sub-topics or present applications to research.
Organization team:
Admin: Marian Jakubiak
SCC: Andras Pataki, Pat Gunn
CCA: Gabriella Contardo, Keaton Burns, Dan Foreman-Mackey
CCB: Mike Shelley, Mariano Gabitto
CCM: Manas Rachh, Alex Barnett
CCQ: Olivier Parcollet, Giuseppe Carleo
Wrangler-in-chief: Alex Barnett
Session chairs:
Wed am: Alex Barnett / Wed pm: Dan Foreman-Mackey
Thurs am: Olivier Parcollet / Thurs pm: Mike Shelley
Fri am: Gabriella Contardo
ZOOM DETAILS IF JOINING REMOTELY:
Join from PC, Mac, Linux, iOS or Android: https://simonsfoundation.zoom.us/j/536451221
Or Telephone:
Dial (for higher quality, dial a number based on your current location):
US: +1 646 558 8656 or +1 669 900 6833
Meeting ID: 536 451 221
International numbers available: https://zoom.us/u/ahu1dujLA
In this lecture, I will give an introduction to the field of continuous optimization. I will emphasize instances of optimization problems that appear in biology and physics through the concept of optimization landscapes. I will review sampling-based approaches as well as gradient-based methods, focusing on concepts rather than derivations of specific algorithms. The lecture is intended to set the stage for the later, more focused talks, and will provide links to other topics covered in the FWAM conference.
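As a minimal, hedged illustration of the gradient-based methods mentioned above (the quadratic landscape, step size, and iteration count below are my own illustrative choices, not from the lecture):

```python
import numpy as np

# Toy 2D "landscape": an anisotropic quadratic bowl (illustrative choice).
def f(x):
    return 0.5 * (x[0]**2 + 10.0 * x[1]**2)

def grad_f(x):
    return np.array([x[0], 10.0 * x[1]])

x = np.array([2.0, 1.0])       # starting point
step = 0.05                    # fixed step size (learning rate)
for k in range(200):
    x = x - step * grad_f(x)   # plain gradient-descent update
print("minimizer estimate:", x, " f(x) =", f(x))
```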
A 1-hour version of the MLSS Buenos Aires tutorial notes, focusing on parts 1 and 4, available at the links below.
I overview key concepts and practical methods for efficient and accurate numerical function approximation, integration, and differentiation. This is the basis for the spectral and other ODE/PDE solvers coming up in the next talk. I will teach concepts such as convergence rate, local vs. global methods, adaptivity, rounding error, and polynomial and Fourier bases. The focus is on 1D, with pointers to higher-dimensional methods and codes.
Lecture notes (see Lecture I) and codes for demo figures at:
https://github.com/ahbarnett/fwam-numpde
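As a rough sketch of the convergence-rate concept discussed in this lecture (a toy comparison of my own, not taken from the linked notes): the composite trapezoid rule converges algebraically for a smooth integrand, while Gauss-Legendre quadrature converges spectrally.

```python
import numpy as np

f = lambda x: np.exp(np.cos(3 * x))        # smooth test integrand (illustrative)
a, b = 0.0, 2.0

# Reference value from a very fine Gauss-Legendre rule.
xg, wg = np.polynomial.legendre.leggauss(200)
I_ref = np.sum(wg * f(0.5*(b - a)*xg + 0.5*(a + b))) * 0.5*(b - a)

for n in [4, 8, 16, 32]:
    # Composite trapezoid rule on n subintervals: algebraic O(1/n^2) convergence.
    xt = np.linspace(a, b, n + 1)
    ft = f(xt)
    I_trap = (b - a) / n * (0.5*ft[0] + ft[1:-1].sum() + 0.5*ft[-1])
    # n-point Gauss-Legendre: spectral (exponential) convergence for smooth f.
    xg, wg = np.polynomial.legendre.leggauss(n)
    I_gauss = np.sum(wg * f(0.5*(b - a)*xg + 0.5*(a + b))) * 0.5*(b - a)
    print(n, abs(I_trap - I_ref), abs(I_gauss - I_ref))
```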
We overview various numerical methods to solve ODEs and PDEs.
For source of notes see: https://github.com/ahbarnett/fwam-numpde
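A minimal method-of-lines sketch in the spirit of this overview (my own toy example, not from the linked notes): the 1D heat equation u_t = u_xx with homogeneous Dirichlet boundaries, discretized by second-order finite differences in space and forward Euler in time.

```python
import numpy as np

# 1D heat equation u_t = u_xx on [0,1] with u(0,t) = u(1,t) = 0 (illustrative setup).
n = 50                                   # number of interior grid points
h = 1.0 / (n + 1)                        # grid spacing
x = np.linspace(h, 1.0 - h, n)
u = np.sin(np.pi * x)                    # initial condition; exact solution decays as exp(-pi^2 t)

dt = 0.4 * h**2                          # forward Euler is stable only for dt <= h^2 / 2
t, T = 0.0, 0.1
while t < T:
    lap = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)
    lap[0]  = u[1] - 2.0 * u[0]          # fix wrap-around: Dirichlet boundary values are zero
    lap[-1] = u[-2] - 2.0 * u[-1]
    u = u + dt * lap / h**2              # explicit (forward Euler) time step
    t += dt

err = np.max(np.abs(u - np.exp(-np.pi**2 * t) * np.sin(np.pi * x)))
print("max error vs. exact solution:", err)
```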
In this short tutorial, I will review variational inference (VI), a method for approximating posterior probability distributions through optimization. VI became popular because it is typically much faster than more traditional sampling methods.
This tutorial aims to provide both an introduction and an overview of recent developments. First, I will review variational inference. Second, I will describe some popular advancements such as stochastic variational inference and variational autoencoders. During the talk, I will establish some connections with mathematical problems in different centers at Flatiron.
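A minimal sketch of the core VI idea (my own toy example; the target density, learning rate, and sample counts are illustrative): fit a Gaussian q(z) to an unnormalized target by stochastic gradient ascent on the ELBO, using the reparameterization trick.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unnormalized target: log p(z) = -(z - 3)^2 / (2 * 0.5^2) + const  (illustrative).
target_mu, target_sd = 3.0, 0.5
def dlogp(z):                           # gradient of the target log-density
    return -(z - target_mu) / target_sd**2

# Variational family q(z) = N(mu, sigma^2); maximize the ELBO over (mu, log_sigma).
mu, log_sigma = 0.0, 0.0
lr, n_samples = 0.05, 64
for step in range(2000):
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal(n_samples)
    z = mu + sigma * eps                # reparameterized samples from q
    g = dlogp(z)
    grad_mu = g.mean()                                  # d ELBO / d mu
    grad_log_sigma = (g * sigma * eps).mean() + 1.0     # + d entropy / d log_sigma
    mu += lr * grad_mu
    log_sigma += lr * grad_log_sigma

print("fitted q: mu =", mu, " sigma =", np.exp(log_sigma))   # should approach (3, 0.5)
```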
An introduction to Bayesian hierarchical modeling, with an example from my own research modeling repeated velocity measurements of distant stars in the Milky Way.
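A minimal sketch of the generative structure behind such a hierarchical model (a toy version with made-up parameter values, not the speaker's actual analysis): each star's true velocity is drawn from a population distribution, each star is measured several times with noise, and the per-star estimates are shrunk toward the population mean. In a full hierarchical analysis the population parameters would be inferred jointly rather than fixed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Population-level ("hyper") parameters: mean and spread of true velocities (illustrative).
pop_mean, pop_sd = 20.0, 5.0        # km/s
noise_sd = 2.0                      # per-measurement noise

n_stars, n_obs = 100, 4
v_true = rng.normal(pop_mean, pop_sd, size=n_stars)                    # latent per-star velocities
v_obs = rng.normal(v_true[:, None], noise_sd, size=(n_stars, n_obs))   # repeated measurements

# Partial pooling: the posterior mean of each star's velocity shrinks its
# sample mean toward the population mean, weighted by the two variances.
star_mean = v_obs.mean(axis=1)
w = pop_sd**2 / (pop_sd**2 + noise_sd**2 / n_obs)
v_shrunk = w * star_mean + (1 - w) * pop_mean
print("typical shrinkage toward the population mean:",
      np.mean(np.abs(v_shrunk - pop_mean)) / np.mean(np.abs(star_mean - pop_mean)))
```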
A "Deep Learning 101" to get familiar with machine/deep learning principles: neural networks, back-propagation, convolutional nets, and representation learning.
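A minimal hand-written back-propagation sketch for a tiny two-layer network (an illustrative toy of my own, not the tutorial's material), trained on the XOR problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer network trained by gradient descent with hand-written backprop on XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass: gradients of the mean cross-entropy loss
    dlogits = (p - y) / len(X)
    dW2 = h.T @ dlogits;  db2 = dlogits.sum(0)
    dh = dlogits @ W2.T * (1.0 - h**2)     # tanh'(a) = 1 - tanh(a)^2
    dW1 = X.T @ dh;       db1 = dh.sum(0)
    # gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("predictions:", p.ravel().round(3))   # should approach [0, 1, 1, 0]
```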
We will introduce state-of-the-art deep learning methods and showcase some of their applications to astrophysical challenges.
Although traditional artificial neural networks were inspired by the brain, they resemble biological neural networks only superficially. Successful machine learning algorithms like backpropagation violate fundamental biophysical observations, suggesting that the brain employs other algorithms to analyze high-dimensional datasets streamed by our sensory organs. We have been developing neuroscience-based machine learning by deriving algorithms and neural networks from objective functions based on the principle of similarity preservation. Similarity-based neural networks rely exclusively on biologically plausible local learning rules and solve important unsupervised learning tasks such as dimensionality reduction, clustering, and manifold learning. In addition to modeling biological networks, similarity-based algorithms are competitive for Big Data applications. For further information please see http://www.offconvex.org/2018/12/03/MityaNN2/
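For flavor, here is a much simpler classical example of a biologically plausible local learning rule, Oja's rule, which extracts the top principal component of streamed data. This is not the similarity-matching networks described above, just an illustrative relative; the data and learning rate are my own toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data with one dominant direction (illustrative): 2D points stretched along [1, 1].
C = np.array([[3.0, 2.5], [2.5, 3.0]])
X = rng.multivariate_normal([0.0, 0.0], C, size=5000)

# Oja's rule: a local, Hebbian-style update whose weight vector converges to
# the top principal component of the input stream.
w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)       # Hebbian term minus a local decay term

w_pca = np.linalg.eigh(C)[1][:, -1]  # exact top eigenvector of the covariance
print("alignment with top PC:", abs(w @ w_pca) / np.linalg.norm(w))   # close to 1
```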
The goal of this talk is to show how probabilistic methods can be used to accelerate standard matrix factorizations (e.g. the SVD) with provable guarantees on speed and accuracy.
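A minimal sketch of one such probabilistic method, the basic randomized SVD (a random range finder followed by a small deterministic SVD); the test matrix, rank, and oversampling below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_svd(A, k, oversample=10):
    """Basic randomized SVD: sketch the range of A, then do a small SVD."""
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))   # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                     # orthonormal basis for the approximate range
    B = Q.T @ A                                        # small (k + p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k]

# Test on a matrix with rapidly decaying singular values (illustrative).
m, n, k = 500, 400, 10
U = np.linalg.qr(rng.standard_normal((m, n)))[0]
V = np.linalg.qr(rng.standard_normal((n, n)))[0]
s_true = 2.0 ** -np.arange(n)
A = (U * s_true) @ V.T
Uk, sk, Vtk = randomized_svd(A, k)
print("relative error of rank-k approximation:",
      np.linalg.norm(A - (Uk * sk) @ Vtk) / np.linalg.norm(A))
```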
I will focus on clustering data points in low dimensions (mostly 2d) and provide an overview of some popular clustering algorithms.
The accompanying live notebook is linked from my homepage: https://users.flatironinstitute.org/~magland
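A minimal sketch of one standard clustering algorithm that such an overview typically includes, Lloyd's k-means, on synthetic 2D blobs (illustrative data, not from the linked notebook):

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, n_iter=100):
    """Plain Lloyd's algorithm: alternate nearest-center assignment and mean update."""
    centers = X[rng.choice(len(X), k, replace=False)]      # random initial centers
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)                           # assign to nearest center
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Three well-separated 2D blobs (illustrative data).
X = np.vstack([rng.normal(c, 0.3, (200, 2)) for c in [(0, 0), (3, 0), (0, 3)]])
labels, centers = kmeans(X, 3)
print("recovered centers:\n", centers.round(2))
```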
In this talk, I will discuss what hierarchically structured matrices are, where they occur in practice, and present algorithms for factorizing these structured matrices. I will demonstrate how the factorization enables subsequent matrix operations (applying the matrix, computing its inverse, and its determinant) in linear CPU time.
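A tiny numerical demo of why such factorizations are possible (my own toy example, not from the talk): off-diagonal blocks of a smooth kernel matrix between well-separated point clusters are numerically low-rank, which is exactly what hierarchical formats exploit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated 1D point clusters and a smooth kernel (illustrative).
x = rng.uniform(0.0, 1.0, 500)                # source cluster
y = rng.uniform(5.0, 6.0, 500)                # well-separated target cluster
K = 1.0 / np.abs(y[:, None] - x[None, :])     # 500 x 500 off-diagonal kernel block

s = np.linalg.svd(K, compute_uv=False)
rank = np.sum(s > 1e-10 * s[0])
print("numerical rank of the off-diagonal block:", rank)   # small (a few tens at most)
```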
Nonnegative matrix factorization (NMF) has become a widely used tool for the analysis of high-dimensional data as it automatically extracts sparse and meaningful features from a set of nonnegative data vectors. I first illustrate this property of NMF on some applications. Then I address the problem of solving NMF, which is NP-hard in general, and review some standard NMF algorithms. Finally, I briefly describe an online NMF algorithm, which scales up gracefully to large data sets.
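A minimal sketch of one of the standard NMF algorithms referred to above, the Lee-Seung multiplicative updates (the synthetic data and iteration count are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(X, r, n_iter=500, eps=1e-9):
    """Lee-Seung multiplicative updates for min ||X - W H||_F^2 with W, H >= 0."""
    m, n = X.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update H, keeping nonnegativity
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update W, keeping nonnegativity
    return W, H

# Synthetic nonnegative data with (approximately) rank-5 structure.
X = rng.random((100, 5)) @ rng.random((5, 80))
W, H = nmf(X, 5)
print("relative fit error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```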
Tensor network methods are a family of variational algorithms used to simulate many-body quantum systems in a variety of situations. With some brief motivation from physics, I'll explain why anyone would want to use these methods, why they are so effective for certain classes of problems, and some extensions to other fields like machine learning.
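A minimal sketch of the simplest tensor network, a matrix product state (MPS): factor a small n-qubit state vector into an MPS by sweeping truncated SVDs, then contract it back to check the error. The GHZ state and bond-dimension cap below are illustrative choices of my own.

```python
import numpy as np

def to_mps(psi, n, chi_max):
    """Factor an n-qubit state vector into an MPS by left-to-right truncated SVDs."""
    tensors = []
    rest = psi.reshape(1, -1)
    bond = 1
    for site in range(n - 1):
        rest = rest.reshape(bond * 2, -1)           # (left bond, this spin) x (remaining spins)
        U, s, Vt = np.linalg.svd(rest, full_matrices=False)
        chi = min(chi_max, np.sum(s > 1e-12))       # truncate the bond dimension
        tensors.append(U[:, :chi].reshape(bond, 2, chi))
        rest = s[:chi, None] * Vt[:chi]             # carry singular values to the right
        bond = chi
    tensors.append(rest.reshape(bond, 2, 1))
    return tensors

def from_mps(tensors):
    """Contract the MPS back into a full state vector (for checking only)."""
    psi = tensors[0]
    for T in tensors[1:]:
        psi = np.tensordot(psi, T, axes=([psi.ndim - 1], [0]))
    return psi.reshape(-1)

n = 10
# GHZ state: highly entangled in a simple way, so a bond dimension of 2 suffices.
psi = np.zeros(2**n); psi[0] = psi[-1] = 1.0 / np.sqrt(2.0)
mps = to_mps(psi, n, chi_max=2)
print("reconstruction error:", np.linalg.norm(from_mps(mps) - psi))
print("bond dimensions:", [T.shape[2] for T in mps[:-1]])
```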