UVA Deep Learning Course


MSc in Artificial Intelligence at the University of Amsterdam.


About


Deep learning is primarily the study of multi-layered neural networks, spanning a wide range of model architectures. This course, taught in the MSc program in Artificial Intelligence of the University of Amsterdam, covers the theory of modern, multi-layered neural networks trained on big data. The course is taught by Assistant Professor Pascal Mettes, with Head Teaching Assistants Melika Davood Zadeh, Mohammadreza Salehidehnavi, and Danilo de Goede. The teaching assistants are Matey Krastev, Nesta Midavaine, Konrad Szewczyk, Luan Fletcher, Max van Spengler, Wenzhe Yin, Samuele Papa, Marina Orozco González, Gowreesh Mago, Swasti Mishra, Antonios Tragoudaras, Floris Six Dijkstra, Ruthu Hulikal Rooparaghunath, and Ana Manzano Rodriguez.




Lectures


Week 1

This lecture introduces the structure of the course and gives a short overview of the history and motivation of deep learning.

Documents:

No documents.

Lecture recordings:

No recordings.

This tutorial introduces the practical sessions and the TA team. Afterwards, we will discuss the PyTorch machine learning framework and introduce the basic concepts of tensors, computation graphs, and GPU computation.
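To give a feel for what a computation graph is before the PyTorch session, here is a toy sketch in plain Python (this is not the PyTorch API; the `Value` class and its attributes are illustrative): each operation records its inputs and a local backward rule, and `backward` walks the graph applying the chain rule.

```python
# Toy illustration of the computation-graph idea behind autograd.
# All names here are illustrative, not part of PyTorch.
class Value:
    def __init__(self, data, parents=(), grad_fn=None):
        self.data = data          # forward value
        self.grad = 0.0           # accumulated gradient
        self.parents = parents    # incoming edges of the graph
        self.grad_fn = grad_fn    # local backward rule

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def grad_fn(upstream):
            self.grad += upstream * other.data   # d(xy)/dx = y
            other.grad += upstream * self.data   # d(xy)/dy = x
        out.grad_fn = grad_fn
        return out

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def grad_fn(upstream):
            self.grad += upstream                # d(x+y)/dx = 1
            other.grad += upstream
        out.grad_fn = grad_fn
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule.
        order, seen = [], set()
        def visit(v):
            if id(v) not in seen:
                seen.add(id(v))
                for p in v.parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            if v.grad_fn is not None:
                v.grad_fn(v.grad)

x, y = Value(2.0), Value(3.0)
z = x * y + x          # z = xy + x
z.backward()
print(x.grad, y.grad)  # dz/dx = y + 1 = 4, dz/dy = x = 2
```

PyTorch tensors with `requires_grad=True` do the same bookkeeping, only over multidimensional arrays and on the GPU.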

Documents:

This lecture covers the basics of forward and backward propagation in neural networks.
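The core mechanics can be sketched for a single sigmoid neuron with a squared-error loss (a minimal illustration, not course material; all names and values below are made up). The backward pass is just the chain rule, and a finite-difference check confirms the analytic gradient:

```python
import math

# Forward and backward propagation for one sigmoid neuron, w*x + b.
def forward(w, b, x, target):
    z = w * x + b                      # linear pre-activation
    a = 1.0 / (1.0 + math.exp(-z))     # sigmoid activation
    loss = 0.5 * (a - target) ** 2     # squared-error loss
    return z, a, loss

def backward(w, b, x, target):
    _, a, _ = forward(w, b, x, target)
    dloss_da = a - target              # d(0.5(a-t)^2)/da
    da_dz = a * (1.0 - a)              # sigmoid derivative
    dz = dloss_da * da_dz              # chain rule to the pre-activation
    return dz * x, dz                  # dloss/dw, dloss/db

# Check the analytic gradient against a central finite difference.
w, b, x, t = 0.5, -0.2, 1.5, 1.0
dw, db = backward(w, b, x, t)
eps = 1e-6
num_dw = (forward(w + eps, b, x, t)[2] - forward(w - eps, b, x, t)[2]) / (2 * eps)
print(abs(dw - num_dw) < 1e-6)  # the two gradients agree
```

Backpropagation in a full network repeats exactly this pattern, layer by layer, reusing each layer's upstream gradient.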

Documents:

No documents.

Lecture recordings:

No recordings.

Week 2

This lecture covers the first part of deep learning optimization techniques.

Documents:

No documents.

Lecture recordings:

No recordings.

In this tutorial, we will discuss the role of activation functions in a neural network, and take a closer look at the optimization issues a poorly designed activation function can cause.
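One such issue can be shown in a few lines (an illustrative sketch, not the tutorial's code): the sigmoid's gradient collapses toward zero for large pre-activations, starving earlier layers of learning signal, while ReLU keeps a unit gradient on its active side.

```python
import math

# Gradient of sigmoid vs. ReLU at increasing pre-activation magnitudes.
def sigmoid_grad(z):
    s = 1.0 / (1.0 + math.exp(-z))
    return s * (1.0 - s)          # peaks at 0.25, vanishes for large |z|

def relu_grad(z):
    return 1.0 if z > 0 else 0.0  # constant 1 on the active side

for z in (0.0, 5.0, 10.0):
    print(z, sigmoid_grad(z), relu_grad(z))
```

Chained across many layers, these sub-0.25 sigmoid gradients multiply into the vanishing-gradient problem that the tutorial examines.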

Documents:

This lecture covers the second part of deep learning optimization techniques.

Documents:

No documents.

Lecture recordings:

No recordings.

Week 3

This lecture covers the first part of Convolutional Neural Networks.

Documents:

No documents.

Lecture recordings:

No recordings.

In this tutorial, we will discuss the importance of proper parameter initialization in deep neural networks, and how we can find a suitable one for our network. In addition, we will review the optimizers SGD and Adam, and compare them on complex loss surfaces.
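The two update rules being compared can be sketched for a single scalar parameter (a toy illustration with standard default hyperparameters, not the tutorial's PyTorch code):

```python
import math

# Plain SGD: step against the raw gradient.
def sgd_step(w, grad, lr=0.1):
    return w - lr * grad

# Adam: exponential moving averages of the gradient and its square,
# with bias correction, normalize the step size per parameter.
def adam_step(w, grad, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * grad        # first moment
    state["v"] = b2 * state["v"] + (1 - b2) * grad ** 2   # second moment
    m_hat = state["m"] / (1 - b1 ** state["t"])           # bias correction
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return w - lr * m_hat / (math.sqrt(v_hat) + eps)

# Minimize f(w) = w^2 (gradient 2w) with both optimizers.
w_sgd, w_adam, state = 2.0, 2.0, {"t": 0, "m": 0.0, "v": 0.0}
for _ in range(100):
    w_sgd = sgd_step(w_sgd, 2 * w_sgd)
    w_adam = adam_step(w_adam, 2 * w_adam, state, lr=0.1)
print(w_sgd, w_adam)  # both approach the minimum at 0
```

On this toy quadratic both converge; the differences the tutorial studies only appear on complex loss surfaces with curvature that varies across parameters.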

Documents:

This lecture covers the second part of Convolutional Neural Networks.

Documents:

No documents.

Lecture recordings:

No recordings.

Week 4

This lecture covers attention mechanisms in deep learning.

Documents:

No documents.

Lecture recordings:

No recordings.

In this tutorial, we will implement three popular, modern ConvNet architectures: GoogLeNet, ResNet, and DenseNet. We will compare them on the CIFAR10 dataset and discuss the advantages that made them popular and successful across many tasks.
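The key idea behind ResNet can be stated in one line (a conceptual sketch in plain Python; the tutorial builds the real convolutional versions in PyTorch): a block outputs F(x) + x, so a block whose learned branch F outputs zeros is simply the identity, which makes very deep networks easier to optimize.

```python
# Residual connection: the block's output is its learned branch plus
# the unchanged input. `f` stands in for the convolutional branch.
def residual_block(x, f):
    return [fi + xi for fi, xi in zip(f(x), x)]

zero_fn = lambda x: [0.0] * len(x)   # stand-in for an untrained branch
x = [1.0, -2.0, 3.0]
print(residual_block(x, zero_fn))    # identity: [1.0, -2.0, 3.0]
```

DenseNet pushes the same intuition further by concatenating, rather than adding, each layer's input to its output.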

Documents:

This lecture covers Graph Neural Networks.

Documents:

No documents.

Lecture recordings:

No recordings.

Week 5

This lecture covers self-supervised learning and vision-language learning.

Documents:

No documents.

Lecture recordings:

No recordings.

In this tutorial, we will discuss the Transformer, a breakthrough architecture in deep learning. We will start from the basics of attention and multi-head attention, and build our own Transformer.
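The heart of the architecture is scaled dot-product attention, sketched here in plain Python over lists (the tutorial does this with batched PyTorch matrix operations; the tiny vectors below are illustrative): each query is compared with every key, the scores are turned into weights with a softmax, and the output is the weighted average of the values.

```python
import math

def softmax(xs):
    m = max(xs)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    d_k = len(keys[0])
    out = []
    for q in queries:
        # similarity of this query with every key, scaled by sqrt(d_k)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in keys]
        weights = softmax(scores)
        # weighted average of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
out = attention(q, k, v)
print(out)  # attends more to the first key/value pair
```

Multi-head attention simply runs several such attentions in parallel on learned projections of the inputs and concatenates the results.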

Documents:

This lecture covers auto-encoding and generation techniques in deep learning.

Documents:

No documents.

Lecture recordings:

No recordings.

Week 6

This lecture covers various unusual aspects and phenomena in deep learning.

Documents:

No documents.

Lecture recordings:

No recordings.

In this tutorial, we will discuss the implementation of Graph Neural Networks. In the first part of the tutorial, we will implement the GCN and GAT layers ourselves. In the second part, we will use PyTorch Geometric to look at node-level, edge-level, and graph-level tasks.
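The message-passing step of a GCN layer can be sketched in plain Python (a simplified illustration: it uses mean aggregation with self-loops rather than the symmetric degree normalization of the original GCN, and the toy graph and weights are made up): each node aggregates its neighbours' features, then applies a shared linear map and a nonlinearity.

```python
# One simplified GCN-style layer over an adjacency matrix.
def gcn_layer(adj, features, weight):
    n = len(adj)
    out = []
    for i in range(n):
        neigh = [j for j in range(n) if adj[i][j] or i == j]  # self-loop
        # mean-aggregate neighbour features (a simple normalization choice)
        agg = [sum(features[j][d] for j in neigh) / len(neigh)
               for d in range(len(features[0]))]
        # shared linear transform followed by ReLU
        h = [max(0.0, sum(a * weight[d][o] for d, a in enumerate(agg)))
             for o in range(len(weight[0]))]
        out.append(h)
    return out

adj = [[0, 1], [1, 0]]             # two connected nodes
feats = [[1.0, 0.0], [0.0, 1.0]]
w = [[1.0], [1.0]]                 # maps 2 input features to 1 output
print(gcn_layer(adj, feats, w))
```

GAT replaces the fixed aggregation weights with learned attention scores over the neighbours, which is the second layer implemented in the tutorial.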

Documents:

This lecture covers deep learning techniques for non-Euclidean data.

Documents:

No documents.

Lecture recordings:

No recordings.

Week 7

This lecture covers deep learning techniques for video processing.

Documents:

No documents.

Lecture recordings:

No recordings.

In this tutorial, we will explore self-supervised contrastive learning using the SimCLR framework.
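SimCLR trains an encoder with the NT-Xent (normalized temperature-scaled cross-entropy) loss: two augmented views of the same image should have similar embeddings, while views of different images should not. A toy sketch for a batch of two images (the embeddings and temperature below are illustrative, and real SimCLR applies this to learned encoder outputs):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nt_xent(views, pair_of, temperature=0.5):
    # views[i] and views[pair_of[i]] are two augmentations of one image.
    losses = []
    for i, v in enumerate(views):
        # similarities to every other view, exponentiated with temperature
        sims = [math.exp(cosine(v, u) / temperature)
                for j, u in enumerate(views) if j != i]
        pos = math.exp(cosine(v, views[pair_of[i]]) / temperature)
        losses.append(-math.log(pos / sum(sims)))   # cross-entropy form
    return sum(losses) / len(losses)

# Two images, two views each; views of the same image point the same way.
views = [[1.0, 0.0], [1.0, 0.1], [0.0, 1.0], [0.1, 1.0]]
pair_of = [1, 0, 3, 2]
loss = nt_xent(views, pair_of)
print(loss)  # low loss: the positive pairs are already aligned
```

Minimizing this loss pulls each positive pair together and pushes all other pairs apart, which is what the augmentation pipeline in the tutorial exploits.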

Documents:

Question and Answer session for the course.

Documents:

No documents.

Lecture recordings:

No recordings.

If you are interested in older versions of the lectures, you can find them below.

UVADLC Apr 2019 UVADLC Nov 2019 UVADLC Nov 2020 UVADLC Nov 2021 UVADLC Nov 2022 UVADLC Nov 2023

Contact us!


If you have any questions or recommendations for the website or the course, you can always drop us a line! Knowledge should be free, so feel free to use any of the material provided here (but please be kind enough to cite us). If you are a course instructor and would like the solutions, please send us an email.