Course Information

About

Deep unsupervised learning has emerged as a promising alternative to supervised approaches to representation learning, for both practical and theoretical reasons. To begin with, unlabeled data is much cheaper to obtain than labeled data; more importantly, as humans, we do not need millions of labeled examples to learn.

This class will provide an in-depth and comprehensive overview of the fundamental concepts and recent advances in deep unsupervised learning. The first part of the course focuses on deep generative models, including autoregressive models, normalizing flow models, variational autoencoders, generative adversarial networks, and their extensions with discrete latent variables. The second part covers self-supervised learning, including the pretraining of large language models. The class is modeled largely after the Deep Unsupervised Learning course at Berkeley. It is taught by Aykut Erdem; the teaching assistant is Emre Can Açıkgöz.


Instruction style: During the semester, students are responsible for studying and keeping up with the course material outside of class time. This may involve reading particular book chapters, papers, or blog posts, and watching video lectures.

Time and Location

Lectures: Mondays and Wednesdays, 16:00-17:10 (SNA 104)

PS Hour: Tuesdays, 17:30-18:40 (SNA A44)

Office Hours: Mondays, 17:30-18:30 (Aykut)

Communication

The course webpage will be updated regularly throughout the semester with lecture notes, presentations, assignments, and important deadlines. All other course-related communication will be carried out through Blackboard.

Prerequisites

COMP547 is intended for graduate students enrolled in Computer Science MS and PhD programs. Senior undergraduate students and all non-COMP graduate students need the instructor's permission to register for the class. The prerequisites for this course are:

  • Programming (you should be a proficient programmer to complete the assignments and to implement your course project)
  • Calculus (differentiation, chain rule) and Linear Algebra (vectors, matrices, eigenvalues/vectors) (MATH107)
  • Basic Probability and Statistics (random variables, expectations, multivariate Gaussians, Bayes rule, conditional probabilities) (ENGR200)
  • Machine Learning or Deep Learning (you can still survive this course without having taken a machine learning course before, but prior exposure is highly recommended) (ENGR421, COMP541)
  • Optimization (cost functions, taking gradients, regularization)

Course Requirements and Grading

Grading will be based on:

  • Assignments (30%; 3 assignments x 10% each)
  • Midterm Exam (10%)
  • Course Project (presentations and reports) (36%)
  • Paper Presentations (14%)
  • Paper Reviews (3.5%)
  • Class Participation (6.5%)

Reference Books

  • Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning, MIT Press, 2016 (draft available online)

Schedule

Week       Topic                                                        Assignments
Feb 12-14  Introduction to the course (Survey);
           Neural Building Blocks I: Spatial Processing with CNNs
Feb 19-21  Neural Building Blocks II: Sequential Processing with RNNs;
           Neural Building Blocks III: Attention and Transformers
Feb 26-28  Autoregressive Models                                        Assg 1 out
Mar 4-6    Normalizing Flow Models
Mar 11-13  Variational Autoencoders                                     Assg 1 due
Mar 18-20  Generative Adversarial Networks                              Project proposal due, Assg 2 out
Mar 25-27  Denoising Diffusion Models
Apr 3      Strengths and Weaknesses of Current Generative Models
Apr 8-10   No classes (Ramadan Holiday)
Apr 15-17  No classes (Spring Break)
Apr 22-24  Self-Supervised Learning                                     Assg 2 due, Assg 3 out
Apr 29     Project Progress Presentations                               Project progress reports due
May 6-8    Pre-training Language Models                                 Midterm Exam
May 13-15  Multimodal Pre-training                                      Assg 3 due
May 20-22  Final Project Presentations                                  Final project reports due
Detailed Syllabus and Lectures