This comprehensive course offers a deep dive into self-supervised learning (SSL), a rapidly growing area of artificial intelligence and machine learning. It is designed to give students a solid understanding of the principles, algorithms, and applications of SSL. In self-supervised learning, a system learns to predict one part of the input data from other parts of the same data. This stands in contrast to supervised learning, which relies on manually annotated labels.

Over the course of the semester, we will explore key concepts and methods in self-supervised learning. Topics include contrastive learning, clustering, representation learning, data augmentation strategies, and techniques for learning from unlabeled or partially labeled data. The course begins with a review of basic concepts in machine learning and probability theory, followed by an introduction to self-supervised learning and its advantages over traditional supervised methods. We will then delve into specific approaches, including autoregressive models, self-supervised neural networks, and transformer-based models.
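To give a flavor of the contrastive learning topic above, the sketch below implements a minimal version of the NT-Xent (normalized temperature-scaled cross-entropy) loss popularized by methods such as SimCLR, using plain NumPy. This is an illustrative sketch, not course material: the function name, the batch size, and the temperature value are illustrative choices, and a real implementation would use an autodiff framework so the loss can be backpropagated through an encoder.

```python
import numpy as np

def nt_xent_loss(z1, z2, tau=0.5):
    """NT-Xent contrastive loss (illustrative sketch).

    z1, z2: (N, d) arrays of embeddings of two augmented views of the
    same N examples; row i of z1 and row i of z2 form a positive pair,
    and all other rows in the batch serve as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / tau                               # cosine similarity / temperature
    n = z1.shape[0]
    # Mask self-similarity so an example is never its own negative.
    np.fill_diagonal(sim, -np.inf)
    # Row i's positive partner is row i + n (and vice versa).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy of each row's positive against all other rows.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

The loss is small when the two views of each example are close in embedding space relative to all other examples in the batch, which is exactly the behavior contrastive pretraining tries to induce without any labels.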

Students will also learn about practical applications of self-supervised learning, including but not limited to computer vision, natural language processing, and robotics. The course emphasizes both theory and practice, with assignments that provide hands-on experience implementing self-supervised learning algorithms.

Prerequisites: Knowledge of basic probability, linear algebra, calculus, and familiarity with a programming language, preferably Python. Prior experience with basic concepts in machine learning is also recommended.