Section outline

  • Weekly Schedule

    The course is held in the second term. The schedule for the 2025/26 academic year is provided in the table below.

    The first lecture of the course will be on February 18th, 2026, at 16:00. The course will be held in person, with lecture videos recorded and made available afterwards to course students (with no guarantee of quality or completeness).

    Day        Time         Room
    Tuesday    11.15-13.00  C1 - Polo Fibonacci
    Wednesday  16.15-18.00  C1 - Polo Fibonacci
    Thursday   11.15-13.00  C1 - Polo Fibonacci


    Course Prerequisites

    Course prerequisites include knowledge of machine learning fundamentals (the "Machine Learning" course) and of elements of probability and statistics, calculus, and optimization algorithms (the "Computational mathematics for learning and data analysis" course). Previous programming experience in Python is a plus for the practical lectures.

    Course Overview

    The course introduces students to the analysis and design of deep and generative learning models and discusses how to build advanced applications on top of these techniques. Particular focus is given to methodological aspects and to foundational knowledge of modern neural networks and machine learning. The course is targeted at students pursuing specializations in Artificial Intelligence and Machine Learning, but it is also of interest to mathematicians, physicists, data scientists, information retrieval specialists, roboticists, and those with a bioinformatics curriculum.

    The course is organized into five parts. The first part introduces basic concepts and foundations of probabilistic models and causality; the second deals with the formalization of learning in the probabilistic paradigm and with models and methods leveraging approximate inference in learning. The third and fourth parts delve into deep learning and deep generative learning models, respectively. The final part of the course presents selected recent works, advanced models, and applications in modern machine learning.

    Presentation of the theoretical models and associated algorithms will be complemented by introductory classes on the most popular software libraries used to implement them.

    The official language of the course is English: all materials, references and books are in English. Lecture slides will be made available here, together with suggested readings.

    Topics covered: graphical models (Bayesian networks, Markov random fields); causality; Expectation Maximization; approximate inference and learning (variational, sampling); Bayesian models; fundamentals of deep learning (CNNs, gated recurrent networks, attention, transformers); propagation issues in neural networks; generative deep learning (autoencoders, VAEs, GANs, diffusion models, normalizing flows, score-based models); deep graph networks; principles of reinforcement learning and deep reinforcement learning; ML and deep learning libraries.

    Textbooks and Teaching Materials

    Much of the course content will be available through lecture slides and associated bibliographic references. Slides will be complemented by course notes.

    We will use two main textbooks: one covering foundational knowledge of probabilistic models, the other more oriented towards deep learning models.

    Note that both books have an electronic version freely available online.

    [BRML] David Barber, Bayesian Reasoning and Machine Learning, Cambridge University Press (PDF)

    [SD] Simon J.D. Prince, Understanding Deep Learning, MIT Press (2023) (book online and additional materials)