Simons Foundation

The Next Great Scientific Theory is Hiding Inside a Neural Network

Gerald D. Fischbach Auditorium/2-GDFA (160 5th Avenue)
Description

Contact: plund@simonsfoundation.org, lectures@simonsfoundation.org

Registration link: https://www.eventbrite.com/e/the-next-great-scientific-theory-is-hiding-inside-a-neural-network-tickets-868355955037

Machine learning methods such as neural networks are quickly finding uses in everything from text generation to construction cranes. Excitingly, those same tools also promise a new paradigm for scientific discovery.

In this Presidential Lecture, Miles Cranmer will outline an innovative approach that leverages neural networks in the scientific process. Rather than modeling data directly, the approach interprets neural networks trained on the data. Through training, the neural networks can capture the physics underlying the system being studied. By extracting what the neural networks have learned, scientists can improve their theories. He will also discuss the Polymathic AI initiative, a collaboration between researchers at the Flatiron Institute and scientists around the world. Polymathic AI is designed to spur scientific discovery using technology similar to that powering ChatGPT. With Polymathic AI, scientists will be able to model a broad range of physical systems across different scales.

About the Speaker:
Cranmer is an assistant professor in data-intensive science at the University of Cambridge with joint appointments in the Department of Applied Mathematics and Theoretical Physics and the Institute of Astronomy. He completed his Ph.D. at Princeton University. His research focuses on accelerating scientific discovery by developing and applying novel methods at the intersection of machine learning and physics. Cranmer has created a suite of standard software libraries for ‘symbolic regression’ that have been used in numerous scientific discoveries. His work covers various areas of deep learning, including physics-motivated architectures such as Lagrangian neural networks.

SCHEDULE
Doors open: 5:30 p.m. (No entrance before 5:30 p.m.)
Lecture: 6:00 p.m.–7:00 p.m. (Admittance closes at 6:20 p.m.)
Inquiries: lectures@simonsfoundation.org