Presenter: Stefano Zampini (King Abdullah University of Science and Technology)
Title: On the use of "conventional" unconstrained minimization solvers for training regression problems in Scientific Machine Learning
Abstract: In recent years, we have witnessed the emergence of scientific machine learning as a data-driven tool for analyzing, by means of deep-learning techniques, data produced by computational science and engineering applications. At the core of these methods is the supervised training algorithm used to learn the neural-network realization, a highly non-convex optimization problem that is usually solved with stochastic gradient methods. However, unlike common deep-learning practice, scientific machine-learning training problems feature a much larger volume of smooth data and better characterizations of the empirical risk functions, which make them well suited for conventional solvers for unconstrained optimization.
In this talk, we empirically demonstrate the superior efficacy of a trust-region method, based on the Gauss-Newton approximation of the Hessian, in reducing the generalization errors arising from regression tasks when learning surrogate models for a wide range of scientific machine-learning techniques and test cases. All the conventional solvers tested, including L-BFGS and inexact Newton with line search, compare favorably, in terms of either cost or accuracy, with the adaptive first-order methods used to validate the surrogate models.
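
Background note (not part of the original abstract): for readers unfamiliar with the method named above, the following is a minimal sketch of the standard Gauss-Newton trust-region subproblem for a least-squares empirical risk; the notation (residuals r, Jacobian J, radius Delta) is the textbook formulation, not a description of the speaker's specific implementation.

\[
L(\theta) = \tfrac{1}{2}\,\|r(\theta)\|^2, \qquad
J(\theta) = \frac{\partial r}{\partial \theta}, \qquad
\nabla L(\theta) = J(\theta)^\top r(\theta),
\]
\[
\nabla^2 L(\theta) = J^\top J + \sum_i r_i(\theta)\,\nabla^2 r_i(\theta)
\;\approx\; J^\top J
\quad \text{(Gauss-Newton approximation)},
\]
\[
\min_{p}\;\; \nabla L(\theta)^\top p + \tfrac{1}{2}\, p^\top J^\top J\, p
\quad \text{subject to} \quad \|p\| \le \Delta,
\]

where the trust-region radius \(\Delta\) is adapted at each iteration from the ratio of actual to predicted reduction in \(L\). Dropping the second-order residual term keeps the model Hessian positive semidefinite, which is what makes the approach attractive for the non-convex regression losses discussed in the talk.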