CS 330: Deep Multi-Task and Meta Learning

Fall 2020, Class: Mon, Wed 1:00-2:20pm


While deep learning has achieved remarkable success in supervised and reinforcement learning problems, such as image classification, speech recognition, and game playing, these models are, to a large degree, specialized for the single task they are trained for. This course will cover the setting where there are multiple tasks to be solved, and study how the structure arising from multiple tasks can be leveraged to learn more efficiently or effectively. This includes:

  • goal-conditioned reinforcement learning techniques that leverage the structure of the provided goal space to learn many tasks significantly faster
  • meta-learning methods that aim to learn efficient learning algorithms that can learn new tasks quickly
  • curriculum and lifelong learning, where the problem requires learning a sequence of tasks, leveraging their shared structure to enable knowledge transfer

This is a graduate-level course. By the end of the course, students will be able to understand and implement state-of-the-art multi-task learning and meta-learning algorithms and will be ready to conduct research on these topics.


The course will include live lectures over Zoom, three homework assignments, a fourth optional homework assignment, and a course project. The lectures will discuss the fundamentals of topics required for understanding and designing multi-task and meta-learning algorithms in both supervised learning and reinforcement learning domains. The assignments will focus on coding problems that emphasize these fundamentals. Finally, students will present a short spotlight of their project proposal and, at the end of the quarter, their completed projects.


CS 229 or an equivalent introductory machine learning course is required. CS 221 or an equivalent introductory artificial intelligence course is recommended but not required.

Lecture Videos:

If you are looking for publicly available lecture videos from the Fall 2019 offering, they are here. Other materials from the Fall 2019 offering are here. Lecture videos from this Fall 2020 offering will be processed and made publicly available after the course. For students enrolled in the course, recorded lecture videos will be posted to Canvas after each lecture.


Prof. Chelsea Finn

OH: Mon 2:30-3:30 pm
Dr. Karol Hausman

Rafael Rafailov

Teaching Assistant
OH: Sun 1-2:30 pm
Dilip Arumugam

Teaching Assistant
OH: Fri 10-11:30 am
Mason Swofford

Teaching Assistant
OH: Thur 12-1:30 pm
Albert Tung

Teaching Assistant
OH: Tue 4-5:30 pm
Karen Yang

Teaching Assistant
OH: Wed 4:30-6 pm
Nikita Demir

Teaching Assistant
OH: Mon 6:30-8 pm
Suraj Nair

Teaching Assistant
OH: Thur 7-8:30 pm


Date Lecture Deadlines Optional reading
Week 1
Mon, Sep 14
Lecture Course introduction
Week 1
Wed, Sep 16
Lecture Supervised multi-task learning, transfer learning
Week 1
Thu, Sep 17
TA Session TensorFlow tutorial
Week 2
Mon, Sep 21
Lecture Meta-learning problem statement, black-box meta-learning Homework 1 out [PDF][Colab Notebook]
Week 2
Wed, Sep 23
Lecture Optimization-based meta-learning
Week 3
Mon, Sep 28
Guest Lecture Automatic differentiation (Matthew Johnson, Google Brain) [Class Colab][Additional Colab]
Week 3
Wed, Sep 30
Lecture Few-shot learning via metric learning Due Homework 1
Week 4
Mon, Oct 5
Lecture Advanced meta-learning topics Homework 2 out [PDF][Colab Notebook]
Week 4
Wed, Oct 7
Lecture Bayesian meta-learning
Week 5
Mon, Oct 12
Lecture Reinforcement learning primer, multi-task RL, goal-conditioned RL (Karol Hausman)
Week 5
Wed, Oct 14
Presentations Project Proposal Spotlight Presentations Due Project proposal
Week 5
Fri, Oct 16
Due Homework 2
Homework 3 out
[PDF][Colab Notebook]
Week 6
Mon, Oct 19
Lecture Model-based RL for multi-task learning
Week 6
Wed, Oct 21
Lecture Meta-RL: Adaptable models and policies
Week 7
Mon, Oct 26
Lecture Meta-RL: Learning to explore Due Homework 3
Optional Homework 4 out
[PDF][Colab Notebook]
Week 7
Wed, Oct 28
Lecture A graphical model perspective on multi-task and meta-RL (Karol Hausman)
Week 7
Thu, Oct 29
TA Session PyTorch tutorial
Week 8
Mon, Nov 2
Lecture Hierarchical RL and skill discovery (Karol Hausman) Due Project milestone
Week 8
Wed, Nov 4
Lecture Lifelong learning: problem statements, forward & backward transfer (Karol Hausman)
Week 9
Mon, Nov 9
Guest Lecture Meta-learning & cognitive science (Jane Wang, DeepMind) Due Optional Homework 4
Lecture is at 9 am PST
Week 9
Wed, Nov 11
Lecture Frontiers and open problems
Week 10
Mon, Nov 16
Presentations Final project presentations Due Final presentations slides
Week 10
Wed, Nov 18
Presentations Final project presentations
Week 10
Fri, Nov 20
Due Final project report

Grading and Course Policies

Homeworks (15% each): There are three homework assignments, each worth 15% of the grade. Assignments will require training neural networks in TensorFlow in a Colab notebook. There is also a fourth, optional homework assignment whose score will replace either one prior homework grade or part of the project grade, whichever results in the better overall grade. All assignments are due on Gradescope at 11:59 pm Pacific Time on the respective due date.

Project (55%): You will complete a research-level project on a topic of your choice. You may form groups of 1-3 students, and you are encouraged to start early! Further guidelines on the project will be posted shortly.

Late Days: You have 6 total late days across homeworks and project-related assignment submissions. You may use a maximum of 2 late days for any single assignment.

Honor Code: You are free to form study groups and discuss homeworks. However, you must write up homeworks and code from scratch independently. When debugging code together, you are only allowed to look at the input-output behavior of each other's programs and not the code itself.

Note on Financial Aid

All students should retain receipts for books and other course-related expenses, as these may be qualified educational expenses for tax purposes. If you are an undergraduate receiving financial aid, you may be eligible for additional financial aid for required books and course materials if these expenses exceed the aid amount in your award letter. For more information, review your award letter or visit the Student Budget website.

All Projects

  • CAML: Catastrophically-Aware Meta-Learning
    Woodrow Wang, Tristan Gosakti, Jonathan Gomes-Selman
  • A Meta Learning Approach to Discerning Causal Graph Structure
    Dominik Damjakob, Justin Wong
  • Embedding Physics in Meta-Learning for a Dynamical System
    Somrita Banerjee
  • Semi-Supervised Task Construction for Meta-Learning
    Arkanath Pathak
  • A Meta Learning Approach to Novel Image Captioning
    Eli Pugh
  • Benchmarking Hierarchical Task Sampling for Few-Shot Multi-Label Domain Adaptation
    Trenton Chang, Joshua Chang
  • protANIL: a Fast and Simple Meta-Learning Algorithm
    Alexander Arzhanov
  • Few-Shot Learning for Lesion Classification
    Surya Narayanan, Oussama Fadil, Sandra Ha
  • Meta-Regularized Deep Learning for Financial Forecasting
    Will Geoghegan
  • Studying and Improving Extrapolation and Generalization of Non-Parametric and Optimization-Based Meta-Learners
    Axel Gross-Klussmann
  • Meta-Learning for Instance Segmentation on Satellite Imagery
    Andrew Mendez, George Sarmonikas
  • MuML: Musical Meta-Learning
    Omer Gul, Collin Schlager, Graham Todd
  • Meta-Regularization by Enforcing Mutual-Exclusiveness
    Edwin Pan, Pankaj Rajak, Shubham Shrivastava
  • Unsupervised Face Recognition via Meta-Learning
    Zhejian Peng, Dian Huang
  • Meta-Learning for Sequence Models
    Ethan A. Chi, Shikhar Murty, Sidd Karamcheti
  • Multi-Task Training on X-Ray Images
    Henry Wang
  • Clustering Mixed Task Distributions Using Meta-Learners
    Suraj Menon, Wei Da
  • Meta-Learning as a Fast Adaptation Approach for Automated Neonatal Seizure Detection in Electroencephalography
  • Domain-Adapted Zero-Shot Text Classification Using Label Encoding
  • Meta Learning in Cardiac Magnetic Resonance Imaging
  • Meta Learning an Implicit Function Space
  • Guarantees of Online Meta-Learning for Safe RL
  • Learning to Learn to Live: Meta-Reinforcement Learning for Text Based Games
  • Tackling Memorization in Meta-Learning by Maximizing Mutual Information
  • K-pop Music Classification Using Black-box Meta-Learning
  • Model-Agnostic Meta-Learning for Multilingual Text-to-Speech
  • Speed-up Meta-Learning with Multivariable Causal Effects
  • Meta-RL for Autonomous Driving
  • Transfer-Based Curricula for Multiple Tasks
  • DREAM-GANs: Learning DREAM Exploration Policy using GANs
  • Navigating Diverse Dynamics with Adaptable Model-Based Reinforcement Learning
  • Maximal Principal Strain of Brain Prediction Models Based on Meta-Learning Approaches
  • Combining Neural Data from Different Sessions: A Meta-Learning Approach
  • Non-Parametric Meta-Learning for Out-of-Distribution Tasks
  • Meta-Learning for Opponent Adaptation in Competitive Games
  • Exploring Cross-Embodiment Imitation with Meta-Learning
  • Studying Extrapolation with Optimization-Based and Non-Parametric Few-Shot Learners
  • Off-Policy Meta-Learning Shared Hierarchies
  • Semi-Supervised Learning: Extensions of MAML and ProtoNet
  • Few-Shot Land Cover Semantic Segmentation
  • One Shot Logo Detection Using Multiscale Conditioning
  • Sequential Few-Shot Learning
  • Extending Unsupervised Meta-Learning with Latent Space Interpolation in GANs to Semi-Supervised Meta-Learning
  • Meta-Learning with Autonomous Sub-Class Inference
  • Gradient Surgery for Meta-Learning
  • Diversity-Sensitive Regularization for Meta-Learning
  • Meta-Learning for Spatio-Temporal Poverty Prediction from Text
  • Semi-Supervised Meta-Learning in NLP
  • Semi-Supervised Task Construction for Meta-Training
  • Predicting Model Confidence in Multi-Task Setting
  • Diversity-Sensitive Meta-Learning
  • Learning Morphology-Robust Policies Using Deep Meta-Reinforcement Learning
  • Meta-Learning Algorithms for Multi-Class Text Classification
  • Forecasting Object Pushing with the Meta-Extended Kalman Filter
  • Goal-Conditioned Learning Using Linked Episodic Memory
  • Weakness is Power: Weak Supervision for Meta-Learning Few-Shot Classification
  • Applying MAML and LEO to the MIT Incidents Dataset
  • Few-Shot Meta-Denoiser for Monte Carlo Path Traced Images
  • Semi-Supervised Meta-Learning
  • Effects of Regularization on Convex Last Layer Meta-Learning
  • Continual Proto-MAML
  • Goal Image Conditioned RL with Multiple Goals
  • Semi-Supervised Approaches to Meta-Learning
  • Multiscale Learning For Multiplexed Image Classification
  • TAML: Task-Agnostic Meta Learning For Medical Imaging
  • Few-Shot Time-Series Forecasting with Known Information Using Black-Box Optimization
  • On Learning Domain-Invariant Representations for Unsupervised Domain Adaptation Under a Multitask Representation Learning Setting
  • Meta Summarizer
  • Meta Learning for Rare Art Restoration
  • Meta Learning to Augment
  • Semi-Supervised Meta Learning for Few-Shot Text Classification
  • Evolution of MAMLs: Enhancing Gradient Based Meta-Learning
  • Exploring Intra-Task Relations to Improve Meta-Learning Algorithms
  • MoDUs: Robust Unsupervised Meta Learning with Gaussian Mixture Models
  • Training Web-Based Agents Via Multi-Task Reinforcement Learning
  • RL-BERT: Automatically Optimized Compressed BERT Using Reinforcement-Based Distillation Agent
  • Posterior Goal Sampling for Hierarchical Reinforcement Learning
  • Meta Learning for Local Ancestry Inference
  • Pre-Training a Hierarchical RL Controller for Diverse Skill Control
  • The Memorization Problem: Controlling Information Flow in Supervised Meta-Learning Using Dropout
  • Meta-Learning with Knowledge Graph for Question Answering
  • Learning the Channel Within: Meta-Learning Polar-Coded MIMO Signals
  • Weakness Recognition for Black-Box Systems
  • Maximizing Mutual Information to Address the Memorization Issue
  • Softer Parameter Sharing: Adversarial Regularization for Representation Learning in Machine Translation
  • Bayesian Meta-Learning Through Variational Gaussian Processes
  • Unsupervised Meta Learning for One Shot Title Compression in Voice Commerce
  • Meta Learning for User Cold-Start Recommendation with Rich Features
  • MetaRL as Cooperative Multi-Agent RL with Loosely Decoupled Exploration
  • Reducing Training Data Requirements for Video Human Action Recognition
  • Leveraging a Self-Supervised Distance Function to Mitigate Over-Specification Problems in Goal-Conditioned RL
  • Multi-Task Learning for Natural Language Understanding with Active Task Prioritization Using Task-Specific Model Uncertainty
  • A Semi-Supervised Approach to Adversarial Meta-Learning
  • Multi-Task Reinforcement Learning Without Reward Engineering
  • Generalized Image ML Denoiser via Focused Meta-Learning
  • BIGRL: Background Invariant Goal-Conditioned RL
  • Inferring Weak Gravitational Lensing from Galaxy Catalogs with Bayesian Graph Convolutional Meta Learners
  • Meta-Learning Neural Implicit Representations for Image Representation
  • Embedding Physics in Meta Learning for a Dynamical System: Analysis of a Cart-Pole System
  • Memory-Efficient Optimization-Based Meta-Learning
  • Encoding Hierarchy for Multi-Task Language Understanding
  • Meta-Learning for Spectrum Generalization in Modeling Electromagnetic Simulations
  • Dynamically Programmed Prototypical Networks as a Strong Baseline for Federated Reconnaissance
  • Combining Model-Based and Model-Free Meta Reinforcement Learning
  • MoBIL: Model Based Meta-Imitation Learning

    © Chelsea Finn 2020