Guest Speaker: Ravid Shwartz-Ziv (NYU)
Title: Information theory and self-supervised learning
Abstract: In recent years, information theory has become an important tool for optimizing and understanding deep neural networks. Many of these works build on the "information bottleneck principle" - the idea of compressing the input representation while retaining the information needed for high task performance. However, once we move beyond the supervised setting, and in particular to self-supervised learning, it becomes unclear which information the representation should compress. I will discuss recent attempts to analyze self-supervised learning from an information-theoretic perspective, examine their shortcomings, and present a unified framework that synthesizes these methods into a single coherent view. Using this framework, I will review recent SSL methods and point out research opportunities and challenges that remain to be addressed.
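As background for the abstract, the standard information bottleneck objective (the well-known general formulation, not a result specific to this talk) trades off compressing the input X against preserving information about the target Y in the representation Z:

```latex
\min_{p(z \mid x)} \; \mathcal{L}_{\mathrm{IB}} \;=\; I(X; Z) \;-\; \beta \, I(Z; Y)
```

Here I(·;·) denotes mutual information and β > 0 controls the compression-prediction trade-off. The question the abstract raises is what plays the role of Y in self-supervised learning, where no labels are available.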
Papers for review:
An Information-Theoretic Framework for Multi-view Learning
Learning deep representations by mutual information estimation and maximization
Self-supervised Learning from a Multi-view Perspective