Title: Distributed and Parallel Sparse Linear Algebra and Learning
Exascale machines are now available, built on several different arithmetics (from 64-bit down to 16- and 32-bit formats, including mixed-precision versions and some that are no longer IEEE compliant) and using different architectures. Brain-scale applications manipulate huge graphs or irregular meshes that lead to very sparse nonsymmetric linear algebra problems. Moreover, some new processors have networks-on-chip that interconnect subsets of cores which no longer share memory, combined with distributed vector facilities: distributed and parallel computing are therefore now often required at the processor level, and new programming paradigms must be proposed.
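As one concrete illustration of the mixed-precision arithmetic mentioned above, the sketch below shows classical iterative refinement: the solve is performed in float32 while residuals are accumulated in float64. This is a generic textbook technique given only for illustration, not a method from the talk; the function name and parameters are hypothetical.

```python
import numpy as np

def mixed_precision_solve(A, b, refinements=3):
    """Solve Ax = b by iterative refinement (illustrative sketch).

    The inner solves run in float32; residuals are computed and
    corrections accumulated in float64. Production codes would reuse
    a single low-precision factorization instead of re-solving.
    """
    A32 = A.astype(np.float32)
    # Initial solution in low precision, promoted to float64.
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(refinements):
        r = b - A @ x                                   # residual in float64
        d = np.linalg.solve(A32, r.astype(np.float32))  # correction in float32
        x += d.astype(np.float64)                       # update in float64
    return x
```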
In this talk, after a short description of these recent developments and their important impact on our results, particularly on parallel and distributed iterative methods, we present results obtained mainly on the supercomputer that still ranks #1 on the HPCG list. We study sparse matrix computations and applications on problems that are representative of the strong interactions between machine learning and linear algebra: the PageRank method and dimensionality reduction. We show how to take advantage of these interactions and commonalities to propose new approaches to problem solving in either domain. We then present an innovative machine learning approach based on the Unite and Conquer methods, which we introduced to solve certain linear algebra problems, along with experimental results demonstrating the interest of the approach for efficient data analysis in applications such as clustering and anomaly detection.
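To make the linear-algebra side of this interaction concrete, here is a minimal sketch of PageRank as a sparse power iteration using scipy.sparse. It is the generic textbook formulation, given for illustration only and not the speaker's specific method; the function name and parameter values are assumptions.

```python
import numpy as np
import scipy.sparse as sp

def pagerank(A, alpha=0.85, tol=1e-8, max_iter=200):
    """Power iteration for PageRank (illustrative sketch).

    A: sparse n x n adjacency matrix with A[i, j] = 1 if page j links
    to page i. alpha is the damping factor.
    """
    n = A.shape[0]
    out_deg = np.asarray(A.sum(axis=0)).ravel()   # out-degree of each page
    dangling = out_deg == 0                       # pages with no outlinks
    out_deg[dangling] = 1.0                       # avoid division by zero
    P = A @ sp.diags(1.0 / out_deg)               # column-stochastic transitions
    x = np.full(n, 1.0 / n)                       # uniform starting vector
    for _ in range(max_iter):
        # Damped step; dangling mass is redistributed uniformly.
        x_new = alpha * (P @ x) + alpha * x[dangling].sum() / n + (1 - alpha) / n
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

# Example: 3-page graph with links 0 -> 1, 1 -> 2, 2 -> 0 and 2 -> 1.
A = sp.csr_matrix(np.array([[0, 0, 1],
                            [1, 0, 1],
                            [0, 1, 0]], dtype=float))
print(pagerank(A))
```

The iteration is dominated by one sparse matrix-vector product per step, which is precisely the kernel whose parallel and distributed behavior the talk examines.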
We conclude by proposing some research perspectives and potential collaborations.
If you would like to attend, please email crampersad@flatironinstitute.org for the Zoom details.