In this lecture, I will give an introduction to the field of continuous optimization. I will emphasize instances of optimization problems that appear in biology and physics through the concept of optimization landscapes. I will review sampling-based approaches as well as gradient-based methods and focus on concepts rather than derivations of specific algorithms. The lecture is intended to set...
A 1-hour version of the MLSS Buenos Aires tutorial notes, focusing on parts 1 and 4, available at the links below.
I give an overview of key concepts and practical methods for efficient and accurate numerical function approximation, integration, and differentiation. This is the basis for spectral and other ODE/PDE solvers coming up in the next talk. I will cover concepts such as convergence rate, local versus global approximation, adaptivity, rounding error, and polynomial and Fourier bases. The focus is on 1D, with pointers to...
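As a small illustration of the kind of convergence behavior discussed (my own sketch, not from the talk): interpolating Runge's function at Chebyshev points using the numerically stable barycentric formula, the maximum error decays geometrically as the number of points grows.

```python
import numpy as np

def cheb_interp(fvals, xx):
    """Evaluate the polynomial interpolant through Chebyshev points of the
    second kind via the barycentric formula (stable, unlike a Vandermonde fit)."""
    n = len(fvals) - 1
    xk = np.cos(np.pi * np.arange(n + 1) / n)   # Chebyshev points on [-1, 1]
    w = (-1.0) ** np.arange(n + 1)              # barycentric weights
    w[0] *= 0.5
    w[-1] *= 0.5
    num = np.zeros_like(xx)
    den = np.zeros_like(xx)
    exact = np.full(xx.shape, np.nan)           # handle xx landing exactly on a node
    for k in range(n + 1):
        diff = xx - xk[k]
        hit = diff == 0.0
        exact[hit] = fvals[k]
        diff[hit] = 1.0                         # dummy value; overwritten below
        num += w[k] * fvals[k] / diff
        den += w[k] / diff
    out = num / den
    out[~np.isnan(exact)] = exact[~np.isnan(exact)]
    return out

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)         # Runge's function
xx = np.linspace(-1.0, 1.0, 3001)
errs = {}
for n in (10, 20, 40, 80):
    xk = np.cos(np.pi * np.arange(n + 1) / n)
    errs[n] = np.max(np.abs(cheb_interp(f(xk), xx) - f(xx)))
```

Doubling `n` roughly squares the accuracy here, the signature of spectral (geometric) convergence for functions analytic in a neighborhood of the interval.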
We overview various numerical methods to solve ODEs and PDEs.
For source of notes see: https://github.com/ahbarnett/fwam-numpde
In this short tutorial, I will review variational inference (VI), a method for approximating posterior probability distributions through optimization. VI became popular because it typically scales to larger problems and runs faster than more traditional sampling methods such as Markov chain Monte Carlo, at the cost of an approximation error.
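To make the "inference as optimization" idea concrete, here is a minimal sketch (my own toy example; the target distribution and parameter grids are chosen purely for illustration): we fit a Gaussian q(θ) = N(m, s²) to an unnormalized target by maximizing the evidence lower bound (ELBO), evaluated by simple quadrature, over a grid of variational parameters (m, s).

```python
import numpy as np

# Toy unnormalized target: a Gaussian "posterior" with mean 2 and std 0.5.
log_p_tilde = lambda th: -0.5 * (th - 2.0) ** 2 / 0.5**2

theta = np.linspace(-3.0, 7.0, 4001)            # quadrature grid
dtheta = theta[1] - theta[0]

def elbo(m, s):
    """ELBO = E_q[log p~(theta) - log q(theta)], by Riemann-sum quadrature."""
    log_q = -0.5 * (theta - m) ** 2 / s**2 - np.log(s * np.sqrt(2.0 * np.pi))
    q = np.exp(log_q)
    return np.sum(q * (log_p_tilde(theta) - log_q)) * dtheta

# Brute-force search over (m, s); real VI uses gradient-based optimizers.
ms = np.linspace(0.0, 4.0, 81)
ss = np.linspace(0.1, 1.5, 57)
best_elbo, best_m, best_s = max((elbo(m, s), m, s) for m in ms for s in ss)
```

Because the toy target is itself Gaussian, the ELBO-maximizing q recovers its mean and standard deviation; for non-Gaussian targets the same optimization yields the closest Gaussian in the KL sense.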
This tutorial aims to provide both an introduction and an overview of recent developments. First, I will provide a review of variational inference....
An introduction to Bayesian hierarchical modeling, with an example from my own research modeling repeated velocity measurements of distant stars in the Milky Way.
A Deep Learning 101 to get familiar with machine/deep learning principles, neural networks, back-propagation, convolutional nets, and representation learning.
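As a minimal, hedged illustration of back-propagation (my own sketch, not course material): a two-layer network trained on XOR with the gradients written out by hand, layer by layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR data: the classic problem a single linear layer cannot fit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Two-layer network: 2 -> 8 (tanh) -> 1 (sigmoid).
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return h, out

_, out0 = forward(X)
loss0 = np.mean((out0 - y) ** 2)                # loss before training

lr = 1.0
for _ in range(3000):
    h, out = forward(X)
    # Backward pass: apply the chain rule from the output back to the input.
    d_out = 2.0 * (out - y) / len(X)            # d(MSE)/d(out)
    d_z2 = d_out * out * (1.0 - out)            # through the sigmoid
    dW2 = h.T @ d_z2
    db2 = d_z2.sum(axis=0)
    d_h = d_z2 @ W2.T
    d_z1 = d_h * (1.0 - h**2)                   # through the tanh
    dW1 = X.T @ d_z1
    db1 = d_z1.sum(axis=0)
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= lr * g                             # in-place gradient-descent step

_, out1 = forward(X)
loss1 = np.mean((out1 - y) ** 2)                # loss after training
```

The same chain-rule bookkeeping, automated and applied to millions of parameters, is what deep learning frameworks do under the hood.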
We will introduce state-of-the-art deep learning methods and showcase some of their applications to astrophysical challenges.
Although traditional artificial neural networks were inspired by the brain, they resemble biological neural networks only superficially. Successful machine learning algorithms like backpropagation violate fundamental biophysical observations, suggesting that the brain employs other algorithms to analyze the high-dimensional datasets streamed by our sensory organs. We have been developing...
The goal of this talk is to show how probabilistic methods can be used to accelerate standard matrix factorizations (e.g. the SVD) with provable guarantees on speed and accuracy.
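A minimal sketch of the flavor of method in question (a randomized range-finder in the style of Halko, Martinsson, and Tropp; the parameter names and test matrix are mine): sample the range of A with a random matrix, orthonormalize, and compute an exact SVD of the resulting small matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_svd(A, k, p=10, q=2):
    """Rank-k SVD via randomized range-finding.
    p: oversampling; q: power iterations (sharpen the captured range)."""
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))     # random test matrix
    Y = A @ Omega                               # sample the range of A
    for _ in range(q):
        Q, _ = np.linalg.qr(Y)                  # re-orthonormalize for stability
        Y = A @ (A.T @ Q)
    Q, _ = np.linalg.qr(Y)                      # orthonormal basis for range(A)
    B = Q.T @ A                                 # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

# Exactly rank-40 test matrix: the sketch should recover it to high accuracy.
A = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 300))
U, s, Vt = randomized_svd(A, 40)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

The expensive dense SVD is applied only to the small matrix B, which is where the speedup over a full SVD of A comes from.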
I will focus on clustering data points in low dimensions (mostly 2d) and provide an overview of some popular clustering algorithms.
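To fix ideas, here is a minimal sketch of Lloyd's algorithm (k-means), one of the standard methods in this family, run on synthetic 2D blobs (my own example; farthest-point seeding is used only to keep the demo deterministic):

```python
import numpy as np

def farthest_point_init(X, k):
    """Deterministic seeding: start at X[0], greedily add the farthest point."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min(np.stack([np.linalg.norm(X - c, axis=1) for c in centers]),
                   axis=0)
        centers.append(X[d.argmax()])
    return np.array(centers)

def kmeans(X, k, iters=50):
    """Lloyd's algorithm: alternate nearest-center assignment and mean update."""
    centers = farthest_point_init(X, k)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers, labels

rng = np.random.default_rng(1)
true = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
X = np.vstack([rng.normal(c, 0.2, size=(100, 2)) for c in true])
centers, labels = kmeans(X, 3)
```

On well-separated blobs like these, k-means recovers the cluster centers; the talk will also cover methods that handle the non-convex and varying-density cases where it fails.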
The accompanying live notebook is linked from my homepage: https://users.flatironinstitute.org/~magland
In this talk, I will discuss what hierarchically structured matrices are, where they occur in practice, and present algorithms for factorizing these structured matrices. I will demonstrate how the factorization enables subsequent matrix operations (applying the matrix, computing its inverse, and its determinant) in linear CPU time.
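A quick numerical illustration of the structure being exploited (my own sketch, not from the talk): for a smooth kernel, the matrix block coupling two well-separated point clusters is numerically low-rank, and hierarchical factorizations compress exactly these off-diagonal blocks.

```python
import numpy as np

# Two well-separated 1D point clusters.
x = np.linspace(0.0, 1.0, 400)                  # sources in [0, 1]
y = np.linspace(2.0, 3.0, 400)                  # targets in [2, 3]

# Off-diagonal block of the 1/|y - x| kernel matrix (400 x 400 entries).
B = 1.0 / np.abs(y[:, None] - x[None, :])

# Numerical rank at relative tolerance 1e-10: far smaller than 400.
s = np.linalg.svd(B, compute_uv=False)
numerical_rank = int(np.sum(s > 1e-10 * s[0]))
```

Storing and applying each such block through its low-rank factors, recursively over a tree of cluster pairs, is what brings the matrix operations down to linear time.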
Nonnegative matrix factorization (NMF) has become a widely used tool for the analysis of high-dimensional data as it automatically extracts sparse and meaningful features from a set of nonnegative data vectors. I first illustrate this property of NMF on some applications. Then I address the problem of solving NMF, which is NP-hard in general, and review some standard NMF algorithms. Finally, I...
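For orientation, a minimal sketch of the classic Lee–Seung multiplicative updates, one standard NMF algorithm (the sizes and synthetic data here are my own choices):

```python
import numpy as np

rng = np.random.default_rng(2)

def nmf(V, r, iters=500, eps=1e-9):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F.
    Nonnegativity of W and H is preserved because each update multiplies
    by a ratio of nonnegative quantities (eps guards against division by 0)."""
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Synthetic data with an exact nonnegative rank-5 factorization.
V = rng.random((50, 5)) @ rng.random((5, 40))
W, H = nmf(V, 5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

These updates monotonically decrease the objective but, NMF being NP-hard in general, only converge to a local optimum; initialization and regularization matter in practice.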
Tensor network methods are a family of variational algorithms used to simulate many-body quantum systems in a variety of situations. With some brief motivation from physics, I'll explain why anyone would want to use these methods, why they are so effective for certain classes of problems, and some extensions to other fields like machine learning.