Title: A greedy and local perspective on neural network training and parallelization
Abstract: In this talk, Michael will show that meaningful object recognition accuracy can be achieved without end-to-end backpropagation, using only the progressive linearization of object categories as the training objective. The approach, greedy and local optimization, leads directly to an application in parallelizing the training of deep networks.
He will present both of these ideas, based on the papers https://arxiv.org/abs/1812.11446, https://arxiv.org/abs/1901.08164, and https://arxiv.org/abs/2106.06401, as well as other recent developments in local parallel training.
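To give a flavor of the idea, here is a minimal sketch of greedy, local layer-wise training in PyTorch. It is an illustration in the spirit of the talk, not the authors' exact method: each block is updated with its own auxiliary classification loss, and detaching activations between blocks prevents any end-to-end backpropagation. All class and function names (LocalBlock, local_training_step) and the toy architecture are hypothetical choices for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalBlock(nn.Module):
    """One trainable block with its own auxiliary linear classifier (hypothetical example)."""
    def __init__(self, in_ch, out_ch, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # Auxiliary head used only to compute this block's local objective.
        self.aux_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(out_ch, num_classes)
        )

    def forward(self, x):
        h = self.features(x)
        return h, self.aux_head(h)


def local_training_step(blocks, optimizers, x, y):
    """One greedy, local update: each block minimizes its own classification
    loss; detach() stops gradients from flowing to earlier blocks, so there
    is no end-to-end backpropagation."""
    for block, opt in zip(blocks, optimizers):
        h, logits = block(x)
        loss = F.cross_entropy(logits, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        x = h.detach()  # the next block treats this activation as a constant input
    return loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    blocks = nn.ModuleList([
        LocalBlock(3, 32, num_classes=10),
        LocalBlock(32, 64, num_classes=10),
        LocalBlock(64, 128, num_classes=10),
    ])
    optimizers = [torch.optim.SGD(b.parameters(), lr=0.1) for b in blocks]
    x = torch.randn(8, 3, 32, 32)       # dummy image batch
    y = torch.randint(0, 10, (8,))      # dummy labels
    print(local_training_step(blocks, optimizers, x, y))
```

Because each block's update depends only on its own (detached) input and local loss, the blocks can in principle be trained concurrently on different devices, which is the parallelization angle mentioned in the abstract.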
If you would like to attend, please email crampersad@flatironinstitute.org for the Zoom link.