BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CERN//INDICO//EN
BEGIN:VEVENT
SUMMARY:FI Computational Methods and Data Science Journal Club: Larry Saul
(CCM)
DTSTART:20221213T200000Z
DTEND:20221213T220000Z
DTSTAMP:20240619T204400Z
UID:indico-event-3442@indico.flatironinstitute.org
CONTACT:ccaadmin@flatironinstitute.org
DESCRIPTION:Rescheduled from November 8th\n\nFI Computational Methods
  and Data Science Journal Club\nFlatiron Institute\, 162 5th
  Avenue\n\nSpeaker: Larry Saul (CCM)\nTitle: A geometrical connection
  between sparse and low-rank matrices and its uses for machine
  learning\n\nAbstract: Many problems in high-dimensional data analysis
  can be formulated as a search for structure in large matrices. One
  important type of structure is sparsity\; for example\, when a matrix
  is sparse\, with a large number of zero elements\, it can be stored in
  a highly compressed format. Another type of structure is linear
  dependence\; when a matrix is low-rank\, it can be expressed as the
  product of two smaller matrices. It is well known that neither of
  these structures implies the other. But can one find more subtle
  connections by looking beyond the canonical decompositions of linear
  algebra?\n\nIn this talk\, I will consider when a sparse nonnegative
  matrix can be recovered from a real-valued matrix of significantly
  lower rank. Of particular interest is the setting where the positive
  elements of the sparse matrix encode the similarities of nearby points
  on a low-dimensional manifold. The recovery can then be posed as a
  problem in manifold learning\, namely\, how to learn a
  similarity-preserving mapping of high-dimensional inputs into a
  lower-dimensional space. I will describe an algorithm for this problem
  based on a generalized low-rank decomposition of sparse matrices. This
  decomposition has the interesting property that it can be encoded by a
  neural network with one layer of rectified linear units\; since the
  algorithm discovers this encoding\, it can also be viewed as a
  layerwise primitive for deep learning. Finally\, I will apply the
  algorithm to data sets where vector magnitudes and small cosine
  distances have interpretable meanings (e.g.\, the brightness of an
  image\, the similarity to other words). On these data sets\, the
  algorithm is able to discover much lower-dimensional representations
  that preserve these meanings.\n\nBio: Lawrence Saul is a Senior
  Research Scientist in the Center for Computational Mathematics (CCM)
  at the Flatiron Institute. He joined CCM in July 2022 as a group
  leader in machine learning\; previously\, he was a Professor and Vice
  Chair in the Department of Computer Science and Engineering at UC San
  Diego.\n\nAttendee Instructions:\nFI employees are welcome to attend
  in person. Please email ccaadmin@flatironinstitute.org for the Zoom
  link if you wish to attend remotely.\nVisitors (without an FI badge)
  should email ccaadmin@flatironinstitute.org 24 hours in advance to be
  registered to the building or to obtain Zoom
  information.\n\nAdditional Information:\nCOVID Policy: By entering our
  buildings\, all staff\, vendors\, and guests implicitly attest to
  being symptom- and COVID-free. Vaccination status will no longer be
  validated as a condition of entry. However\, all staff and affiliates
  are strongly encouraged to remain up to date with their vaccination
  boosters\, according to their individual eligibility.\nAge
  Restriction: All employees\, visitors\, event attendees\, and vendors
  are required to be above the age of eighteen for entry into our
  building(s). Photo ID with birthdate will be required by security upon
  arrival at our building. Nursing mothers should reach out to an admin
  to arrange an exception.\n\nhttps://indico.flatironinstitute.org/event/3442/
LOCATION:5th Floor Classroom/5-Flatiron Institute (162 5th Avenue)
URL:https://indico.flatironinstitute.org/event/3442/
END:VEVENT
END:VCALENDAR