Section outline
The module presents the fundamental concepts, challenges, architectures and methodologies of deep learning. We introduce the learning of neural representations from vectorial, sequential and image data, covering both supervised and unsupervised learning and touching on various forms of weak supervision. Models covered include deep autoencoders, convolutional neural networks, long short-term memory, gated recurrent units, advanced recurrent architectures, sequence-to-sequence models, neural attention, Transformers and neural Turing machines. The methodological lectures are complemented by introductory seminars on Keras/TensorFlow and PyTorch.
Lecture 20 (03/04/2025): Deep Autoencoders
Topics: sparse, denoising and contractive autoencoders; deep RBM.
References: [SD]. Coverage of the Prince book for this lecture is inadequate, but you can use the lecture slides and complement them with the additional material if necessary.
Additional readings:
[15] DBN: the paper that started deep learning
[16] Deep Boltzmann machines paper
[17] Review paper on deep generative models
[18] Long review paper on autoencoders from the perspective of representation learning
[19] Paper discussing regularized autoencoders as approximations of the likelihood gradient
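As a pointer to what this entry covers, here is a minimal denoising autoencoder sketch in Keras, one of the frameworks used in the course seminars. The layer sizes, noise level and placeholder data are illustrative assumptions, not the course material:

```python
# Minimal denoising autoencoder sketch (sizes and noise level are
# hypothetical choices for illustration, not the course's seminar code).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,))           # e.g. flattened 28x28 images
noisy = layers.GaussianNoise(0.3)(inputs)    # corrupt the input (training only)
code = layers.Dense(64, activation="relu")(noisy)            # bottleneck code
decoded = layers.Dense(784, activation="sigmoid")(code)      # reconstruction

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Train to reconstruct the clean input from its corrupted version.
x = np.random.rand(256, 784).astype("float32")  # placeholder data
autoencoder.fit(x, x, epochs=1, batch_size=32, verbose=0)
```

A sparse autoencoder follows the same pattern but keeps the input clean and instead adds an activity regularizer (e.g. keras.regularizers.l1) on the code layer.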
Lecture 21 (08/04/2025, 11-13): Convolutional Neural Networks I
Topics: introduction to the deep learning module; introduction to CNNs; basic CNN elements (a minimal sketch follows the Lecture 22 entry below).
References: [SD] Chapter 10
Lecture 22 (09/04/2025, 16-18): Convolutional Neural Networks II
Topics: CNN architectures for image recognition; convolution visualization; advanced topics (deconvolution, dense nets); applications and code.
References: [SD] Chapter 10
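To make the basic CNN elements of these two lectures concrete, a minimal Keras sketch stacking convolution, pooling and a dense classification head; the input shape, layer counts and number of classes are illustrative assumptions:

```python
# Minimal CNN sketch: convolution + nonlinearity, spatial pooling,
# and a dense classifier head (shapes assumed for illustration).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),     # e.g. small RGB images
    layers.Conv2D(32, kernel_size=3, activation="relu", padding="same"),
    layers.MaxPooling2D(pool_size=2),   # spatial downsampling
    layers.Conv2D(64, kernel_size=3, activation="relu", padding="same"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),  # 10-way classifier head
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```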
Lecture 23 (10/04/2025, 14-16): Gated Recurrent Networks I
Topics: deep learning for sequence processing; gradient issues (see the clipping sketch below).
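The standard mitigation for the exploding-gradient side of these issues is gradient clipping; a brief Keras sketch with an assumed model and clipping threshold (vanishing gradients are instead addressed architecturally, by the gated cells of Lecture 24):

```python
# Plain RNN over variable-length sequences of 8-dim feature vectors
# (shapes and threshold assumed for illustration).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(None, 8)),   # variable-length sequences
    layers.SimpleRNN(32),           # vanilla recurrence, prone to gradient issues
    layers.Dense(1),
])
# clipnorm rescales any gradient tensor whose norm exceeds 1.0,
# bounding the size of each update step.
model.compile(optimizer=keras.optimizers.Adam(clipnorm=1.0), loss="mse")
```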
Lecture 24 (11/04/2025, 14-16, room D3, recovery lecture): Gated Recurrent Networks II
Topics: long short-term memory; gated recurrent units; generative use of RNNs.
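A minimal sketch of the two gated cells covered here, again in Keras; the sequence-classification task, feature dimension and class count are assumptions for illustration:

```python
# The two gated recurrent cells of this lecture, behind a shared
# sequence-classification interface (task and sizes are hypothetical).
from tensorflow import keras
from tensorflow.keras import layers

def gated_classifier(cell="lstm", n_classes=5):
    Recurrent = layers.LSTM if cell == "lstm" else layers.GRU
    return keras.Sequential([
        keras.Input(shape=(None, 16)),  # variable-length, 16-dim steps
        Recurrent(64),                  # gates control what is written to
                                        # and read from the cell memory
        layers.Dense(n_classes, activation="softmax"),
    ])

lstm_model = gated_classifier("lstm")
gru_model = gated_classifier("gru")
```

For the generative use mentioned in the entry, the recurrent layer would instead return per-step outputs (return_sequences=True), each feeding a softmax that predicts the next element of the sequence.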