Section outline
-
Code: 0075A, Credits (ECTS): 9, Semester: 2, Official Language: English
Instructor: Davide Bacciu - Co-Instructor: Riccardo Massidda
Contact: Instructor's email at UNIPI
Office: Room 331O, Dipartimento di Informatica, Largo B. Pontecorvo 3, Pisa
Office Hours: (email to arrange meeting)
-
Weekly Schedule
The course is held in the second term. The schedule for A.A. 2025/26 is provided in the table below.
The first lecture of the course will be ON FEBRUARY 18th 2026 at h. 16.00. The course will be in person, with lecture videos being recorded and made available a posteriori to course students (with no guarantee of quality or completeness).
Day        Time         Room
Tuesday    11.15-13.00  C1 - Polo Fibonacci
Wednesday  16.15-18.00  C1 - Polo Fibonacci
Thursday   11.15-13.00  C1 - Polo Fibonacci

Course Prerequisites
Course prerequisites include knowledge of machine learning fundamentals ("Machine Learning" course); knowledge of elements of probability and statistics, calculus and optimization algorithms ("Computational mathematics for learning and data analysis" course). Previous programming experience with Python is a plus for the practical lectures.
Course Overview
The course introduces students to the analysis and design of deep and generative learning models and discusses how to realize advanced applications that exploit modern machine learning techniques. Particular focus will be given to methodological aspects and foundational knowledge of modern neural networks and machine learning. The course is targeted at students pursuing specializations in Artificial Intelligence and Machine Learning, but it is also of interest to mathematicians, physicists, data scientists, information retrieval specialists, roboticists, and those with a bioinformatics curriculum.
The course is articulated in five parts. The first part introduces basic concepts and foundations of probabilistic models and causality; the second deals with the formalization of learning in the probabilistic paradigm and with models and methods leveraging approximate inference. The third and fourth parts delve into deep learning and deep generative learning models, respectively. The final part of the course presents selected recent works, advanced models, and applications in modern machine learning.
Presentation of the theoretical models and associated algorithms will be complemented by introductory classes on the most popular software libraries used to implement them.
The official language of the course is English: all materials, references and books are in English. Lecture slides will be made available here, together with suggested readings.
Topics covered - graphical models (Bayesian networks, Markov Random Fields), causality, Expectation-Maximization, approximate inference and learning (variational, sampling), Bayesian models, fundamentals of deep learning (CNNs, gated recurrent networks, attention, transformers), propagation issues in neural networks, generative deep learning (autoencoders, VAEs, GANs, diffusion models, normalizing flows, score-based models), deep graph networks, principles of reinforcement learning and deep reinforcement learning, ML and deep learning libraries.
Textbooks and Teaching Materials
Much of the course content will be available through lecture slides and associated bibliographic references. Slides will be complemented by course notes.
We will use two main textbooks: one covering foundational knowledge of probabilistic models, the other more oriented towards deep learning models.
Note that both books have an electronic version freely available online.
[BRML] David Barber, Bayesian Reasoning and Machine Learning, Cambridge University Press (PDF)
[SD] Simon J.D. Prince, Understanding Deep Learning, MIT Press (2023) (book online and additional materials)
-
Introduction to the course philosophy, its learning goals and expected outcomes. We will give a prospective overview of the overall structure of the course and the interrelations between its parts. Exam modalities and schedule are also discussed.
Lecture 1 - 18/02/2026 (16-18): Introduction to the course
Motivations and aim; course housekeeping (exams, timetable, materials); introduction to generative and deep learning.
-
The module introduces probabilistic and causal models. We will refresh useful knowledge from probability and statistics, and introduce fundamental concepts for working with probabilistic models, including conditional independence, d-separation, and causality. We will discuss the graphical models formalism to represent probabilistic relationships in directed/undirected models.
Lecture 2 - 19/02/2026 (11-13): Probability and statistics refresher
Basic concepts of probability and statistics; random variables and probability distributions; Bayes rule, marginalization, families of distributions and their properties; inference in probabilistic models (see the illustrative sketch after this lecture list).

Lecture 3 - 24/02/2026 (11-13): Graphical models: representation
Bayesian networks; representing joint distributions; conditional independence.
Lecture by Riccardo Massidda

Lecture 4 - 25/02/2026 (16-18): Graphical models: Markov properties
d-separation; Markov properties; faithfulness; Markov models.
Lecture by Riccardo Massidda

Lecture 5 - 26/02/2026 (11-13): Graphical Causal Models
Causation and correlation; causal Bayesian networks; structural causal models; causal inference.
Lecture by Riccardo Massidda
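To give a concrete flavor of these topics, below is a minimal Python sketch of a toy three-variable Bayesian network. It is purely illustrative: the network structure, variable names, and probability values are hypothetical and not part of the course material. The joint distribution factorizes along the graph as \( P(R, S, W) = P(R)\,P(S \mid R)\,P(W \mid R, S) \), and a posterior query is answered via Bayes rule and marginalization.

```python
# Toy Bayesian network (hypothetical values): Rain -> Sprinkler, Rain -> Wet,
# Sprinkler -> Wet. The joint factorizes as P(R, S, W) = P(R) P(S|R) P(W|R,S).

P_R = {True: 0.2, False: 0.8}            # P(Rain)
P_S_given_R = {True: 0.01, False: 0.4}   # P(Sprinkler = on | Rain)
P_W_given_RS = {                         # P(Wet = true | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.80,
    (False, True): 0.90, (False, False): 0.0,
}

def joint(r: bool, s: bool, w: bool) -> float:
    """P(R=r, S=s, W=w) computed from the factorized form."""
    p_s = P_S_given_R[r] if s else 1.0 - P_S_given_R[r]
    p_w = P_W_given_RS[(r, s)] if w else 1.0 - P_W_given_RS[(r, s)]
    return P_R[r] * p_s * p_w

# Posterior P(Rain | Wet = true): Bayes rule with marginalization over S.
evidence = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
posterior = sum(joint(True, s, True) for s in (True, False)) / evidence
print(f"P(Rain | Wet) = {posterior:.3f}")  # ~0.358 with these toy numbers
```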
-
Course grading will preferentially follow a modality comprising in-itinere assignments and a final oral exam. Passing the in-itinere assignments waives the final project.
Midterms are only available to students regularly attending the course: mechanisms to verify attendance will be in place. Students who do not regularly attend the course can use the traditional exam modality.
Midterm Assignments
Midterms consist of interim coding assignments involving a quick-and-dirty (but working) implementation of models introduced during the lectures (e.g., as a Colab notebook), with and without the use of supporting deep learning libraries.
There will be three such midterms, to be developed individually, roughly aligned with the conclusion of the major modules of the course (expect a midterm roughly every four weeks).
There will also be a final assignment (midterm n. 4), consisting of a presentation of a recent research paper on topics/models related to the course content. This final assignment will be carried out in groups.
Coding midterms will be automatically tested for correctness but not scored. During the final assignment the instructors will ask questions to assess knowledge of the paper: again, no score is given, only pass/fail.
Oral Exam
The oral examination will test knowledge of the course contents (models, algorithms and applications).
Exam Grading (with Midterms)
The final grade is given by the oral exam grade. The midterms only waive the final project and do not contribute to the grade; in other words, a midterm can only be passed or failed. You need to pass all midterms in order to successfully waive the final project.
Traditional Exam Modality (No Midterms / Non attending students)
Working students, those not attending lectures, those who have failed midterms, or those who simply do not wish to take them can complete the course by delivering a final project and an oral exam. Final project topics will be released in the final weeks of the course: contact the instructor by email to arrange the choice of topic once these are published.
The final project consists in preparing a report on a topic relevant to the course content, or in realizing software implementing a non-trivial learning model and/or an AI-based application relevant to the course. The content of the final project will be discussed in front of the instructor, and anybody interested, during the oral examination. Students are expected to prepare slides for a 15-minute presentation summarizing the ideas, models and results in the report. The exposition should demonstrate a solid understanding of the main ideas in the report.
Grade for this exam modality is determined as
\( G = 0.5 \cdot (G_P + G_O) \)
where \( G_P \in [1,30] \) is the project grade and \( G_O \in [1,32] \) is the oral grade.
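For example, purely hypothetical grades \( G_P = 28 \) and \( G_O = 30 \) would yield \( G = 0.5 \cdot (28 + 30) = 29 \).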