CS 330: Deep Multi-Task and Meta Learning

Fall 2019. Class: Mon & Wed, 1:30-2:50 pm, Bishop Auditorium


Description:

While deep learning has achieved remarkable success in supervised and reinforcement learning problems, such as image classification, speech recognition, and game playing, these models are, to a large degree, specialized for the single task they are trained for. This course will cover the setting where there are multiple tasks to be solved, and study how the structure arising from multiple tasks can be leveraged to learn more efficiently or effectively. This includes:

  • goal-conditioned reinforcement learning techniques that leverage the structure of the provided goal space to learn many tasks significantly faster
  • meta-learning methods that aim to learn efficient learning algorithms that can learn new tasks quickly (see the brief sketch after this list)
  • curriculum and lifelong learning, where the problem requires learning a sequence of tasks, leveraging their shared structure to enable knowledge transfer
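
To make the meta-learning setting above concrete, here is a minimal sketch of the standard few-shot setup on toy sinusoid regression, written in JAX. It is an illustration only, not course material: the task distribution, network sizes, learning rates, and step counts are arbitrary choices made for the example.

```python
# Minimal, illustrative sketch of optimization-based meta-learning (MAML-style)
# on toy sinusoid regression. All hyperparameters here are assumptions for illustration.
import jax
import jax.numpy as jnp

def init_params(key, sizes=(1, 40, 40, 1)):
    """Randomly initialize a small MLP as a list of (W, b) pairs."""
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((0.1 * jax.random.normal(sub, (m, n)), jnp.zeros(n)))
    return params

def mlp(params, x):
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def mse(params, x, y):
    return jnp.mean((mlp(params, x) - y) ** 2)

def sample_task(key, k_shot=10):
    """Each 'task' is a sinusoid with its own amplitude and phase;
    return a support set (for adaptation) and a query set (for evaluation)."""
    ka, kp, kx = jax.random.split(key, 3)
    amp = jax.random.uniform(ka, (), minval=0.1, maxval=5.0)
    phase = jax.random.uniform(kp, (), minval=0.0, maxval=jnp.pi)
    x = jax.random.uniform(kx, (2 * k_shot, 1), minval=-5.0, maxval=5.0)
    y = amp * jnp.sin(x + phase)
    return (x[:k_shot], y[:k_shot]), (x[k_shot:], y[k_shot:])

def inner_adapt(params, support, inner_lr=0.01, steps=1):
    """Adapt the shared initialization to one task with a few gradient steps."""
    x_s, y_s = support
    for _ in range(steps):
        grads = jax.grad(mse)(params, x_s, y_s)
        params = [(W - inner_lr * gW, b - inner_lr * gb)
                  for (W, b), (gW, gb) in zip(params, grads)]
    return params

def meta_loss(params, task):
    """Meta-objective: loss on the query set after adapting on the support set."""
    support, query = task
    adapted = inner_adapt(params, support)
    return mse(adapted, *query)

key = jax.random.PRNGKey(0)
params = init_params(key)
outer_lr = 1e-3
for step in range(1000):
    key, sub = jax.random.split(key)
    task = sample_task(sub)
    # The outer gradient differentiates through the inner adaptation step.
    grads = jax.grad(meta_loss)(params, task)
    params = [(W - outer_lr * gW, b - outer_lr * gb)
              for (W, b), (gW, gb) in zip(params, grads)]
```

The key design choice, which the course develops in detail, is that the outer update optimizes post-adaptation performance, so the learned initialization is explicitly trained to be quickly adaptable to new tasks.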

This is a graduate-level course. By the end of the course, students will be able to understand and implement state-of-the-art multi-task learning and meta-learning algorithms and will be ready to conduct research on these topics.

Format:

The course is a combination of lectures and reading sessions. The lectures will cover the fundamentals needed to understand and design multi-task and meta-learning algorithms. During the reading sessions, students will present and discuss recent contributions and applications in this area. There will be three homework assignments. Throughout the course, each student will also work on a related research project that they will present at the end of the term.

Prerequisites:

CS 229 or an equivalent introductory machine learning course is required. CS 221 or an equivalent introductory artificial intelligence course is recommended but not required.

Enrollment:

Please fill out this enrollment form if you are interested in this course. See the form for more information on enrollment.


Staff

Prof. Chelsea Finn
Instructor
OH: Weds 3-4 pm
Location: Gates 219
Webpage

Suraj Nair
Teaching Assistant
OH: Thurs 7-8 pm
Location: Gates 167
Webpage

Tianhe (Kevin) Yu
Teaching Assistant
OH: Mon 10:30-11:30 am
Location: Gates B21
Webpage

Abhishek Sinha
Teaching Assistant
OH: Tue 4:30-5:30 pm
Location: Gates 259
Webpage

Tim Liu
Teaching Assistant
OH: Fri 4:30-5:30 pm
Location: Gates 358
Webpage


Tentative Timeline

Week | Date | Lecture / Session | Handouts / Deadlines / Notes
Week 1 | Mon, Sep 23 | Lecture: Course introduction, problem definitions, applications
Week 1 | Wed, Sep 25 | Lecture: Supervised multi-task learning, black-box meta-learning | Homework 1 out: HW1 [pdf][zip]
Week 1 | Thu, Sep 26 | TA Session: TensorFlow tutorial | TF notebook
Week 2 | Mon, Sep 30 | Lecture: Optimization-based meta-learning
Week 2 | Wed, Oct 02 | Reading: Applications in imitation learning, vision, language, generative models | Presentation slides [P1][P2][P3][P4]
Week 3 | Mon, Oct 07 | Lecture: Few-shot learning via metric learning | Final Project Guidelines
Week 3 | Wed, Oct 09 | Reading: Hybrid meta-learning approaches | Homework 1 due; Homework 2 out: HW2 [pdf][zip]; Presentation slides [P1][P2][P3][P4]
Week 4 | Mon, Oct 14 | Lecture: Bayesian meta-learning
Week 4 | Wed, Oct 16 | Reading: Meta-learning for active learning, weakly-supervised learning, unsupervised learning | Presentation slides [P1][P2][P3][P4]
Week 5 | Mon, Oct 21 | Lecture: Reinforcement learning primer, multi-task RL, goal-conditioned RL
Week 5 | Wed, Oct 23 | Reading: Auxiliary objectives, state representation learning | Homework 2 due; Homework 3 out: HW3 [pdf][zip]; Presentation slides [P1][P2][P3][P4]
Week 6 | Mon, Oct 28 | Reading: Hierarchical RL, curriculum generation | Presentation slides [P1][P2][P3][P4]
Week 6 | Wed, Oct 30 | Guest Lecture: Meta-RL, learning to explore (Kate Rakelly, UC Berkeley) | Project proposal due
Week 7 | Mon, Nov 04 | Reading: Meta-RL and emergent phenomena | Presentation slides [P1][P2][P3][P4]
Week 7 | Wed, Nov 06 | Lecture: Model-based RL for multi-task learning, meta model-based RL | Homework 3 due
Week 8 | Mon, Nov 11 | Lecture: Lifelong learning: problem statement, forward & backward transfer
Week 8 | Wed, Nov 13 | Reading: Miscellaneous multi-task/meta-RL topics | Project milestone due
Week 9 | Mon, Nov 18 | Guest Lecture: TBD (Jeff Clune, University of Wyoming / Uber)
Week 9 | Wed, Nov 20 | Guest Lecture: Information-theoretic exploration (Sergey Levine, UC Berkeley)
Week 10 | Mon, Nov 25 | Thanksgiving Break (no class)
Week 10 | Wed, Nov 27 | Thanksgiving Break (no class)
Week 11 | Mon, Dec 02 | Lecture: Frontiers: Memorization, unsupervised meta-learning, open problems
Week 11 | Tue, Dec 03 | Poster Presentation: 1:30-3:30 pm @ Packard Atrium
Week 13 | Mon, Dec 16 | No class | Final Project Report due (11:59 pm PT)



Note on Financial Aid


All students should retain receipts for books and other course-related expenses, as these may be qualified educational expenses for tax purposes. If you are an undergraduate receiving financial aid, you may be eligible for additional financial aid for required books and course materials if these expenses exceed the aid amount in your award letter. For more information, review your award letter or visit the Student Budget website.




    © Chelsea Finn 2019