-
Code: 760AA, Credits (ECTS): 9, Semester: 2, Official Language: English
Instructor: Davide Bacciu
Contact: email - phone 050 2212749
Office: Room 331, Dipartimento di Informatica, Largo B. Pontecorvo 3, Pisa
Office Hours: (email to arrange meeting)
-
-
-
The course is held in the second term. The schedule for A.Y. 2024/25 is provided in the table below.
The first lecture of the course will be ON FEBRUARY 18th 2025, h. 11.00. The course will be hybrid, both in person and online on the dedicated MS Team.
Recordings of the lectures will be made available to the students following the course.
Day and time:
- Tuesday 11.15-13.00 (Room C1)
- Wednesday 16.15-18.00 (Room E)
- Thursday 14.15-16.00 (Room E)

Objectives
Course Prerequisites
Course prerequisites include knowledge of machine learning fundamentals (e.g. as covered in the ML course). Knowledge of elements of probability and statistics, calculus and optimization algorithms is also expected. Previous programming experience with Python is a plus for the practical lectures.
Course Overview
The course introduces students to the analysis and design of advanced machine learning and deep learning models for modern pattern recognition problems and discusses how to realize advanced applications exploiting computational intelligence techniques.
The course is articulated in five parts. The first part introduces basic concepts and algorithms concerning traditional pattern recognition, in particular as it pertains to sequence and image analysis. The next two parts introduce advanced models from two major learning paradigms, namely deep neural networks and probabilistic models, and their use in pattern recognition applications. The fourth part covers generative deep learning and the intersection between probabilistic and neural models. The final part of the course presents selected recent works, advanced models and applications of learning models.
Presentation of the theoretical models and associated algorithms will be complemented by introductory classes on the most popular software libraries used to implement them.
The course hosts guest seminars by national and international researchers working in the field, as well as by companies engaged in the development of advanced applications using machine learning models.
The official language of the course is English: all materials, references and books are in English. Lecture slides will be made available here, together with suggested readings.
Topics covered
Bayesian learning; graphical models; learning with sampling and variational approximations; fundamentals of deep learning (CNNs, AE, DBN, GRNs); deep learning for machine vision and signal processing; advanced deep learning models (transformers, foundational models, NTMs); generative deep learning (VAE, GANs, diffusion models, score-based models); deep graph networks; reinforcement learning and deep reinforcement learning; signal processing and time-series analysis; image processing, filters and visual feature detectors; pattern recognition applications (machine vision, bio-informatics, robotics, medical imaging, etc.); introduction to programming libraries and frameworks.
Textbooks and Teaching Materials
We will use two main textbooks, one covering the parts about generative and probabilistic models, and the other covering the deep learning modules. Both books have an electronic version freely available online.
BOOKS
[BRML] David Barber, Bayesian Reasoning and Machine Learning, Cambridge University Press (PDF)
[SD] Simon J.D. Prince, Understanding Deep Learning, MIT Press (2023) (PDF)
-
Introduction to the course philosophy, its learning goals and expected outcomes. We will give a prospective overview of the overall structure of the course and the interrelations between its parts. Exam modalities and schedule are also discussed (for both M.Sc. and Ph.D. students).
1. 18/02/2025 (11-13) - Introduction to the course
Motivations and aim; course housekeeping (exams, timetable, materials); introduction to modern pattern recognition applications.
The module will provide a brief introduction to classical pattern recognition for signals/timeseries and for images. We will cover approaches working in the spatial (temporal) and frequency (spectral) domains, presenting methods to represent temporal and visual information in static descriptors, as well as approaches to identify relevant patterns in the data (feature detectors). Methodologies covered include correlation analysis, Fourier analysis, wavelets, intensity gradient-based descriptors and detectors, and normalized cut segmentation.
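As a small taste of the spectral analysis covered in this module, here is a minimal sketch (plain NumPy, with a synthetic signal assumed for illustration) of estimating the dominant frequency of a time series with the FFT:

```python
import numpy as np

# Synthetic example: a 5 Hz sinusoid sampled at 100 Hz with additive noise
fs = 100.0                                   # sampling frequency (Hz)
t = np.arange(0, 10, 1.0 / fs)               # 10 seconds of samples
x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)

# Spectral analysis: magnitude spectrum via the real FFT
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)

dominant = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC component
print(f"Dominant frequency: {dominant:.1f} Hz")  # expected to be close to 5 Hz
```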
2. 19/02/2025 (16-18) - Signal processing
Timeseries; time domain analysis (statistics, correlation); spectral analysis; Fourier analysis.

3. 20/02/2025 (14-16) - Image Processing I
Spatial feature descriptors (color histograms, SIFT); spectral analysis.
Additional readings:
[1] Survey on visual descriptors
Software:
- A tweakable and fast implementation of SIFT in C (on top of OpenCV); see also the Python sketch after this list.

4. 25/02/2025 (11-13) - Image Processing II
Feature detectors (edges, blobs); image segmentation.
Additional readings:
[2] Survey on visual feature detectors
A reference book for the pattern recognition part is "S. Theodoridis, K. Koutroumbas, Pattern Recognition, 4th edition". It is not needed for the sake of the course, but it is a useful reference if you are interested in the topic. It is not available online for free (legally; what you do with Google is none of my business).
You can find the original NCUT paper freely available from the authors here.

4b. 25/02/2025 (11-13) - Image Processing III
Wavelet decompositions.
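For the SIFT material of lectures 3-4, a minimal Python sketch using OpenCV's SIFT implementation. The image path is a placeholder and the snippet assumes an OpenCV build where SIFT is available (e.g. opencv-python >= 4.4 or opencv-contrib-python):

```python
import cv2

# Load a grayscale image (placeholder path) and extract SIFT keypoints/descriptors
img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

print(f"{len(keypoints)} keypoints, descriptor shape {descriptors.shape}")  # (n, 128)

# Visualize the detected keypoints with scale and orientation
vis = cv2.drawKeypoints(img, keypoints, None,
                        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("keypoints.jpg", vis)
```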
The module introduces probabilistic learning, causal models, generative modelling and Bayesian learning. We will discuss fundamental algorithms and concepts, including Expectation-Maximization, sampling and variational approximations, and we will study relevant models from the three fundamental paradigms of probabilistic learning, namely Bayesian networks, Markov networks and dynamic models. Models covered include: Bayesian Networks, Hidden Markov Models, Markov Random Fields, Boltzmann Machines, and latent topic models.
5. 26/02/2025 (16-18) - Introduction to Generative Graphical Models I
Probability refresher.
References: [BRML] Ch. 1 and 2 (refresher).

6. 27/02/2025 (14-16) - Introduction to Generative Graphical Models II
Graphical model representation; directed and undirected models.
References: [BRML] Sect. 3.1, 3.2 and 3.3.1 (conditional independence).
Software (see also the Pyro sketch after this list):
- Pyro - Python library based on PyTorch
- PyMC3 - Python library based on Theano
- Edward - Python library based on TensorFlow
- TensorFlow Probability - probabilistic models and deep learning in TensorFlow
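To give a flavour of the probabilistic programming libraries listed above, here is a minimal sketch in Pyro (the model, the synthetic data and all names are illustrative, not course material) that infers the mean of a Gaussian with stochastic variational inference:

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import SVI, Trace_ELBO
from pyro.infer.autoguide import AutoNormal
from pyro.optim import Adam

data = torch.randn(100) + 3.0           # synthetic observations with true mean ~3

def model(data):
    mu = pyro.sample("mu", dist.Normal(0.0, 10.0))           # prior over the mean
    with pyro.plate("data", data.shape[0]):
        pyro.sample("obs", dist.Normal(mu, 1.0), obs=data)    # likelihood

guide = AutoNormal(model)                # mean-field variational approximation
svi = SVI(model, guide, Adam({"lr": 0.05}), loss=Trace_ELBO())

for step in range(1000):
    svi.step(data)

print(guide.median(data)["mu"])          # posterior estimate of mu, close to 3
```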
7. 04/03/2025 (11-13) - Conditional Independence: Representation and Learning - Part I
Bayesian networks; representing joint distributions; conditional independence. Guest lecture by Riccardo Massidda.
References: [BRML] Sect. 3.3 (directed models and conditional independence).

05/03/2025 (16-18) - LECTURE CANCELLED DUE TO STUDENT ASSEMBLY

8. 06/03/2025 (14-16) - Conditional Independence: Representation and Learning - Part II
d-separation; Markov properties; faithfulness; Markov models. Guest lecture by Riccardo Massidda.
References: [BRML] Sect. 4.1, 4.2.0-4.2.2 (undirected models and Markov properties); [BRML] Sect. 4.5 (expressiveness).

9. 11/03/2025 (11-13) - Graphical Causal Models
Causation and correlation; causal Bayesian networks; structural causal models; causal inference. Guest lecture by Riccardo Massidda.
Barber's book is minimal on causality (only Section 3.4). My suggestion is that you complement the content of the slides (which is sufficient for the exam) with readings from this book, namely:
- Chapters 2 & 3 (high-level introduction to causality)
- Sections 6.1-6.5 (more technical discussion of the lecture content)
If you are interested in deepening your knowledge of causality, this is an excellent book (also freely available online): Jonas Peters, Dominik Janzing, Bernhard Schölkopf, Elements of Causal Inference: Foundations and Learning Algorithms, MIT Press.

10. 12/03/2025 (16-18) - Structure Learning and Causal Discovery
Constraint-based methods; score-based methods; parametric assumptions. Guest lecture by Riccardo Massidda.
References: [BRML] Sect. 9.5.1 (PC algorithm); [BRML] Sect. 9.5.2 (independence testing); [BRML] Sect. 9.5.3 (structure scoring).
Additional readings:
[3] A short review of BN structure learning
[4] PC algorithm with consistent ordering for large scale data
[5] MMHC - hybrid structure learning algorithm
Software:
- A selection of BN structure learning libraries in Python: pgmpy, bnlearn, pomegranate.
- bnlearn: the most consolidated and efficient library for BN structure learning (in R).
- Causal learner: a mixed R-Matlab package integrating over 26 BN structure learning algorithms.

11. 13/03/2025 (14-16) - Hidden Markov Models - Part I
Learning in directed graphical models; generative models for sequential data; hidden/latent variables; inference problems on sequential data.
References: [BRML] Sect. 23.1.0 (Markov models).
Additional readings:
[6] A classical tutorial introduction to HMMs

14/03/2025 (14-16) - RECOVERY LECTURE CANCELLED DUE TO HYDROLOGICAL RISK

12. 18/03/2025 (11-13) - Hidden Markov Models - Part II
Forward-backward algorithm; learning as inference; EM algorithm (a forward-algorithm sketch follows below).
References: [BRML] Sect. 23.2.0-23.2.4 (HMM and forward-backward); [BRML] Sect. 23.3.1-23.3.4 (EM and learning).
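As a companion to the forward-backward material, a minimal NumPy sketch of the forward pass with per-step scaling (the toy HMM parameters are purely illustrative):

```python
import numpy as np

# Toy discrete HMM: 2 hidden states, 3 observation symbols (illustrative numbers)
pi = np.array([0.6, 0.4])                # initial state distribution
A = np.array([[0.7, 0.3],                # transition matrix A[i, j] = P(s_j | s_i)
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],           # emission matrix B[i, k] = P(o_k | s_i)
              [0.1, 0.3, 0.6]])
obs = [0, 2, 1, 0]                       # observed symbol sequence

# Forward algorithm with normalization at each step to avoid numerical underflow
alpha = pi * B[:, obs[0]]
loglik = 0.0
for t in range(1, len(obs)):
    c = alpha.sum()
    loglik += np.log(c)
    alpha = (alpha / c) @ A * B[:, obs[t]]
loglik += np.log(alpha.sum())
print(f"log P(obs) = {loglik:.4f}")
```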
13. 19/03/2025 (16-18) - Hidden Markov Models - Part III
Viterbi algorithm; dynamic Bayesian networks (a Viterbi sketch follows below).
References: [BRML] Sect. 23.2.6 (Viterbi).

14. 20/03/2025 (14-16) - Markov Random Fields I
Learning in undirected graphical models.
References: [BRML] Sect. 4.2.2, 4.2.5 (MRF); [BRML] Sect. 4.4 (factor graphs); [BRML] Sect. 5.1.1 (variable elimination and inference on chains).
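A minimal NumPy sketch of Viterbi decoding, using the same toy HMM parameters as the forward-algorithm sketch above (log-space to avoid underflow; the parameters are illustrative only):

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely hidden state sequence for a discrete HMM (log-space)."""
    T, S = len(obs), len(pi)
    logdelta = np.log(pi) + np.log(B[:, obs[0]])     # best log-score ending in each state
    backptr = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = logdelta[:, None] + np.log(A)       # scores[i, j]: come from i, move to j
        backptr[t] = scores.argmax(axis=0)
        logdelta = scores.max(axis=0) + np.log(B[:, obs[t]])
    # Backtrack from the best final state
    path = [int(logdelta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi(pi, A, B, [0, 2, 1, 0]))   # [0, 1, 0, 0] for these toy parameters
```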
15. 21/03/2025 (14-16) - Markov Random Fields II (RECOVERY LECTURE, ROOM L1)
Conditional random fields; pattern recognition applications.
References: [BRML] Sect. 9.6.0, 9.6.1, 9.6.4, 9.6.5 (learning in MRF/CRF).
Additional readings:
[7,8] Two comprehensive tutorials on CRF ([7] more introductory, [8] more focused on vision)
[9] A nice application of CRF to image segmentation
Software:
- Check out pgmpy: it has Python notebooks introducing how to work with MRF/CRF
- An interesting tutorial on implementing linear CRFs
16. 25/03/2025 (11-13) - Bayesian Learning I
Principles of Bayesian learning; EM algorithm objective; principles of variational approximation; latent topic models.
References: [BRML] Sect. 11.2.1 (variational EM).

17. 26/03/2025 (16-18) - Bayesian Learning II
Latent Dirichlet Allocation (LDA); LDA learning; machine vision applications of latent topic models (an LDA sketch follows below).
References: [BRML] Sect. 20.4-20.6.1 (LDA).
Additional readings:
[10] LDA foundation paper
[11] A gentle introduction to latent topic models
[12] Foundations of bag-of-words image representation
Software:
- A didactic Matlab demo of bag-of-words for images
- The official Matlab LDA implementation
- A chatty demo on BOW image representation in Python
- Yet another Python implementation of image BOW
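A minimal scikit-learn sketch of fitting LDA on a toy text corpus (the documents and parameters are illustrative; note that scikit-learn uses variational Bayes inference rather than collapsed Gibbs sampling):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "stock markets fell sharply today",
    "investors traded shares on the market",
]

# Bag-of-words counts, then a 2-topic LDA model
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)               # per-document topic proportions

# Show the top words of each inferred topic
vocab = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [vocab[i] for i in topic.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")
print(doc_topics.round(2))
```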
18. 27/03/2025 (14-16) - Bayesian Learning III
Sampling methods; ancestral sampling; Gibbs sampling (a Gibbs sampling sketch follows below).
References: [BRML] Sect. 27.1 (sampling), Sect. 27.2 (ancestral sampling), Sect. 27.3 (Gibbs sampling).
Additional readings:
[13] A step-by-step derivation of collapsed Gibbs sampling for LDA

01/04/2025 (11-13) - NO LECTURE (instructor not available); will be recovered on April 11th, h. 14.00.
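A minimal NumPy sketch of Gibbs sampling for a toy bivariate Gaussian (the target and its parameters are illustrative), alternating draws from the two full conditionals:

```python
import numpy as np

# Target: bivariate Gaussian, zero mean, unit variances, correlation rho.
# Full conditionals: p(x | y) = N(rho * y, 1 - rho^2), and symmetrically for y.
rho = 0.8
rng = np.random.default_rng(0)

x, y = 0.0, 0.0
samples = []
for it in range(20000):
    x = rng.normal(rho * y, np.sqrt(1 - rho ** 2))   # sample x | y
    y = rng.normal(rho * x, np.sqrt(1 - rho ** 2))   # sample y | x
    if it >= 1000:                                    # discard burn-in
        samples.append((x, y))

samples = np.array(samples)
print(np.corrcoef(samples.T)[0, 1])   # empirical correlation, close to 0.8
```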
19. 02/04/2025 (16-18) - Boltzmann Machines
Bridging neural networks and generative models; stochastic neurons; restricted Boltzmann machines; contrastive divergence and Gibbs sampling in use (a CD-1 sketch follows below).
Additional readings:
[14] A clean and clear introduction to RBM from its author
Software:
- Matlab code for Deep Belief Networks (i.e. stacked RBMs) and Deep Boltzmann Machines
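A minimal NumPy sketch of one contrastive divergence (CD-1) update for a binary RBM; the data, dimensions and learning rate are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 3, 0.1

# Illustrative binary training batch and small random weights
v0 = rng.integers(0, 2, size=(8, n_visible)).astype(float)
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b, c = np.zeros(n_visible), np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Positive phase: hidden probabilities and samples given the data
ph0 = sigmoid(v0 @ W + c)
h0 = (rng.random(ph0.shape) < ph0).astype(float)

# Negative phase: one Gibbs step (reconstruct visible units, then hidden)
pv1 = sigmoid(h0 @ W.T + b)
v1 = (rng.random(pv1.shape) < pv1).astype(float)
ph1 = sigmoid(v1 @ W + c)

# CD-1 update: difference between data and reconstruction statistics
W += lr * (v0.T @ ph0 - v1.T @ ph1) / v0.shape[0]
b += lr * (v0 - v1).mean(axis=0)
c += lr * (ph0 - ph1).mean(axis=0)
```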
The module presents the fundamental concepts, challenges, architectures and methodologies of deep learning. We introduce the learning of neural representations from vectorial, sequential and image data, covering both supervised and unsupervised learning, and hinting at various forms of weak supervision. Models covered include: deep autoencoders, convolutional neural networks, long short-term memory, gated recurrent units, advanced recurrent architectures, sequence-to-sequence, neural attention, Transformers, and neural Turing machines. Methodological lectures will be complemented by introductory seminars on Keras/TF and PyTorch.
20. 03/04/2025 - Deep Autoencoders
Sparse, denoising and contractive AE; deep RBM (a denoising autoencoder sketch follows below).
References: coverage of the Prince book [SD] for this lecture is inadequate; use the lecture slides and complement them with the additional material if necessary (e.g. Chapter 14 of the Deep Learning book).
Additional readings:
[15] DBN: the paper that started deep learning
[16] Deep Boltzmann machines paper
[17] Review paper on deep generative models
[18] Long review paper on autoencoders from the perspective of representation learning
[19] Paper discussing regularized autoencoders as approximations of the likelihood gradient

21. 08/04/2025 (11-13) - Convolutional Neural Networks I
Introduction to the deep learning module; introduction to CNN; basic CNN elements.
References: [SD] Chapter 10.
Additional readings:
[20-24] Original papers for LeNet, AlexNet, VGGNet, GoogLeNet and ResNet.
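A minimal PyTorch sketch of a denoising autoencoder trained on random data (the dimensions, noise level and architecture are illustrative only, not a course deliverable):

```python
import torch
import torch.nn as nn

# Illustrative denoising autoencoder: corrupt the input, reconstruct the clean version
class DenoisingAE(nn.Module):
    def __init__(self, dim=784, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(256, 784)                   # stand-in for real data (e.g. MNIST vectors)
for epoch in range(5):
    noisy = x + 0.3 * torch.randn_like(x)  # Gaussian corruption of the input
    recon = model(noisy)
    loss = loss_fn(recon, x)               # reconstruct the *clean* input
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(epoch, loss.item())
```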
22. 09/04/2025 (16-18) - Convolutional Neural Networks II
CNN architectures for image recognition; convolution visualization; advanced topics (deconvolution, dense nets); applications and code.
References: [SD] Chapter 10.
Additional readings:
[25] Complete summary of convolution arithmetic
[26] Seminal paper on batch normalization
[27] CNN interpretation using deconvolutions
[28] CNN interpretation with GradCAM
[29] Seminal paper on dilated convolutions
[30] Object detection by Faster R-CNN

23. 10/04/2025 (14-16) - Gated Recurrent Networks I
Deep learning for sequence processing; gradient issues.
References: coverage of the Prince book for this lecture is inadequate (for reasons I do not understand). You can use the course slides for this topic and, if you like, integrate them with Chapter 10 of the Deep Learning book.
Additional readings:
[31] Paper describing gradient vanishing/explosion

24. 11/04/2025 (14-16), ROOM D3 - Gated Recurrent Networks II (RECOVERY LECTURE)
Long short-term memory; gated recurrent units; generative use of RNN (an LSTM sketch follows below).
Additional readings:
[32] Original LSTM paper
[33] A historical view on gated RNNs
[34] Gated recurrent units paper
[35] Seminal paper on dropout regularization
Software:
- A simple introduction to generative use of LSTM
- The up-to-date implementation of NeuralTalk
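A minimal PyTorch sketch of an LSTM sequence classifier, trained here on random sequences with illustrative dimensions (not course material, just a flavour of the gated recurrent models discussed above):

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, n_features=8, hidden=32, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                    # x: (batch, time, features)
        _, (h_n, _) = self.lstm(x)           # h_n: last hidden state, (1, batch, hidden)
        return self.head(h_n[-1])            # class logits

model = LSTMClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 20, 8)                   # 64 random sequences of length 20
y = torch.randint(0, 2, (64,))               # random binary labels
for step in range(10):
    logits = model(x)
    loss = loss_fn(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(loss.item())
```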
25. 15/04/2025 (11-13) - Attention-based architectures
Sequence-to-sequence; attention modules; transformers and vision transformers (an attention sketch follows below).
References: [SD] Chapter 12.
Additional readings:
[36,37] Models of sequence-to-sequence and image-to-sequence transduction with attention
[38] Seminal paper on Transformers
[39] Transformers in vision

26. 16/04/2025 (16-18) - Coding practice I - Guest lecture by Riccardo Massidda
PyTorch

27. 17/04/2025 (14-16) - Coding practice II - Guest lecture by Riccardo Massidda
Keras/TensorFlow

18/04/2025 - 25/04/2025: Spring Break, no lectures.
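A minimal PyTorch sketch of single-head scaled dot-product attention, the building block behind the Transformer architectures of lecture 25 (the shapes are illustrative only):

```python
import math
import torch

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)    # (batch, L_q, L_k)
    weights = torch.softmax(scores, dim=-1)              # attention distribution per query
    return weights @ V, weights

# Illustrative shapes: batch of 2, 5 queries, 7 keys/values, dimension 16
Q = torch.randn(2, 5, 16)
K = torch.randn(2, 7, 16)
V = torch.randn(2, 7, 16)
out, attn = scaled_dot_product_attention(Q, K, V)
print(out.shape, attn.shape)   # torch.Size([2, 5, 16]) torch.Size([2, 5, 7])
```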
28. 29/04/2025 (11-13) - Memory-based models
Multiscale networks; hierarchical models; memory networks; neural Turing machines.
We close the gap between neural networks and probabilistic learning by discussing generative deep learning models. We discuss a general taxonomy of the existing learning models and study in depth relevant families of models for each element of the taxonomy, including autoregressive generation, variational autoencoders, generative adversarial networks, diffusion models, and flow-based methods.
29. 30/04/2025 (16-18) - Explicit Density Learning
Explicit distribution models; neural ELBO; variational autoencoders (a VAE loss sketch follows below).
References: [SD] Chapter 14 (generative learning), Chapter 17 (VAE).
Additional readings:
[40] PixelCNN - explicit likelihood model
[41] Tutorial on VAE
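A minimal PyTorch sketch of the negative ELBO used to train a Gaussian VAE; the encoder/decoder are toy MLPs and the dimensions are illustrative only:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, dim=784, latent=16):
        super().__init__()
        self.enc = nn.Linear(dim, 128)
        self.mu, self.logvar = nn.Linear(128, latent), nn.Linear(128, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.dec(z), mu, logvar

def neg_elbo(x, recon, mu, logvar):
    # Reconstruction term (Bernoulli likelihood) + analytic KL(q(z|x) || N(0, I))
    rec = F.binary_cross_entropy_with_logits(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (rec + kl) / x.shape[0]

model, x = VAE(), torch.rand(32, 784)      # stand-in for a batch of binarized images
recon, mu, logvar = model(x)
print(neg_elbo(x, recon, mu, logvar).item())
```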
30. 06/05/2025 (11-13) - Implicit Models - Adversarial Learning
Generative adversarial networks; Wasserstein GANs; conditional generation; notable GANs; adversarial autoencoders.
References: [SD] Chapter 15.
Additional readings:
[42] Tutorial on GAN (here another online resource with GAN tips)
[43] Wasserstein GAN
[44] Tutorial on sampling neural networks
[45] Progressive GAN
[46] CycleGAN
[47] Seminal paper on adversarial AEs
Software:
- Official Wasserstein GAN code
- A (long) list of GAN models with (often) associated implementations
31. 07/05/2025 (16-18) - Diffusion models I
Noising-denoising processes; kernelized diffusion.
References: [SD] Chapter 18.
Additional readings:
[48] Introductory and survey paper on diffusion models
[49] Seminal paper introducing diffusion models
[50] An interpretation of diffusion models as score matching
[51] Paper introducing the diffusion model reparameterization
[52] Diffusion beats GAN paper

32. 08/05/2025 (14-16) - Diffusion models II
Latent space diffusion; conditional diffusion models.

33. 13/05/2025 (11-13) - Normalizing flow models
Probabilistic change of variables; forward/normalization pass; from 1D to multidimensional flows; survey of notable flow models; wrap-up of deep generative learning.
References: [SD] Chapter 16.
The typical course examination (for students attending the lectures) comprises two stages: midterm assignments and an oral exam. Passing the midterms waives the final project.
Midterm Assignment
Midterms consist of short assignments involving one of the following tasks:
- A quick and dirty (but working) implementation of a simple pattern recognition algorithm
- A report concerning the experience of installing and running a demo application realized using available deep learning and machine learning libraries
- A summary of a recent research paper on topics/models related to the course content.
The midterms can consist of either the delivery of code (e.g. a Colab notebook) or a short slide deck (no more than 10 slides) presenting the key/most interesting aspects of the assignment.
Students may be given some freedom in the choice of assignments, provided the topic is reasonable. Assignments will be scheduled roughly every 3-4 weeks.
Oral Exam
The oral examination will test knowledge of the course contents (models, algorithms and applications).
Exam Grading (with Midterms)
The final grade is determined by the oral exam. The midterms only waive the final project and do not contribute to the grade; in other words, you can only pass or fail a midterm. You need to pass all midterms in order to successfully waive the final project.
Alternative Exam Modality (No Midterms / Non attending students)
Working students, students not attending lectures, and those who have failed the midterms or simply do not wish to take them can complete the course by delivering a final project and taking an oral exam. Final project topics will be released in the final weeks of the course: contact the instructor by email to arrange the choice of topic once these are published.
The final project concerns preparing a report on a topic relevant to the course content, or the realization of software implementing a non-trivial learning model and/or a PR application relevant to the course. The content of the final project will be discussed in front of the instructor and anybody interested during the oral examination. Students are expected to prepare slides for a 15-minute presentation that summarizes the ideas, models and results in the report. The exposition should demonstrate a solid understanding of the main ideas in the report.
Grade for this exam modality is determined as
\( G = 0.5 \cdot (G_P + G_O) \)
where \( G_P \in [1,30] \) is the project grade and \( G_O \in [1,32] \) is the oral grade.
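As a purely illustrative example with hypothetical grades, a project graded \( G_P = 28 \) and an oral graded \( G_O = 30 \) would yield \( G = 0.5 \cdot (28 + 30) = 29 \).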
Assignment due dates:
- Friday, 21 March 2025, 6:00 PM
- Tuesday, 22 April 2025, 2:00 PM
- Tuesday, 20 May 2025, 2:00 PM
- Monday, 3 March 2025, 2:00 PM

References
- Scott Krig, Interest Point Detector and Feature Descriptor Survey, Computer Vision Metrics, pp 217-282, Open Access Chapter
- Tinne Tuytelaars and Krystian Mikolajczyk, Local Invariant Feature Detectors: A Survey, Foundations and Trends in Computer Graphics and Vision, Vol. 3, No. 3 (2007) 177–2, Online Version
- C. Glymour, Kun Zhang and P. Spirtes, Review of Causal Discovery Methods Based on Graphical Models Front. Genet. 2019, Online version
- Bacciu, D., Etchells, T. A., Lisboa, P. J., & Whittaker, J. (2013). Efficient identification of independence networks using mutual information. Computational Statistics, 28(2), 621-646, Online version
- Tsamardinos, I., Brown, L.E. & Aliferis, C.F. The max-min hill-climbing Bayesian network structure learning algorithm. Mach Learn 65, 31–78 (2006), Online version
- Lawrence R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 1989, pages 257-286, Online Version
- Charles Sutton and Andrew McCallum, An Introduction to Conditional Random Fields, Arxiv
- Sebastian Nowozin and Christoph H. Lampert, Structured Learning and Prediction, Foundations and Trends in Computer Graphics and Vision, Online Version
- Philipp Krahenbuhl, Vladlen Koltun, Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials, Proc.of NIPS 2011, Arxiv
- D. Blei, A. Y. Ng, M. I. Jordan. Latent Dirichlet Allocation. Journal of Machine Learning Research, 2003
- D. Blei. Probabilistic topic models. Communications of the ACM, 55(4):77–84, 2012, Free Online Version
- G. Csurka, C. R. Dance, L. Fan, J. Willamowski, and C. Bray. Visual Categorization with Bags of Keypoints. Workshop on Statistical Learning in Computer Vision. ECCV 2004, Free Online Version
- W. M. Darling, A Theoretical and Practical Implementation Tutorial on Topic Modeling and Gibbs Sampling, Lecture notes
- Geoffrey Hinton, A Practical Guide to Training Restricted Boltzmann Machines, Technical Report 2010-003, University of Toronto, 2010
- G.E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science 313.5786 (2006): 504-507, Free Online Version
- R. Salakhutdinov, G.E. Hinton. Deep Boltzmann Machines. AISTATS 2009, Free online version.
- R. R. Salakhutdinov. Learning Deep Generative Models, Annual Review of Statistics and Its Application, 2015, Free Online Version
- Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. Pattern Analysis and Machine Intelligence, IEEE Transactions on, Vol. 35(8) (2013): 1798-1828, Arxiv.
- G. Alain, Y. Bengio. What Regularized Auto-Encoders Learn from the Data-Generating Distribution, JMLR, 2014.
- Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard and L. D. Jackel. Handwritten digit recognition with a back-propagation network, Advances in Neural Information Processing Systems, NIPS, 1989
- A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems, NIPS, 2012
- K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition, ICLR 2015, Free Online Version
- C. Szegedy et al, Going Deeper with Convolutions, CVPR 2015, Free Online Version
- K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. CVPR 2016, Free Online Version
- V. Dumoulin, F. Visin, A guide to convolution arithmetic for deep learning, Arxiv
- S. Ioffe, C. Szegedy, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, ICML 2015, Arxiv
- M.D. Zeiler and R. Fergus, Visualizing and Understanding Convolutional Networks, ECCV 2014, Arxiv
- J. Adebayo et al, Sanity Checks for Saliency Maps, NeurIPS, 2018
- F. Yu et al, Multi-Scale Context Aggregation by Dilated Convolutions, ICLR 2016, Arxiv
- S. Ren et al, Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, NeurIPS 2015
- Y. Bengio, P. Simard and P. Frasconi, Learning long-term dependencies with gradient descent is difficult. TNN, 1994, Free Online Version
- S. Hochreiter, J. Schmidhuber, Long short-term memory, Neural Computation, 1997, Free Online Version
- K. Greff et al, LSTM: A Search Space Odyssey, TNNLS 2016, Arxiv
- K. Cho et al, Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation, EMNLP 2014, Arxiv
- N. Srivastava et al, Dropout: A Simple Way to Prevent Neural Networks from Overfitting, JMLR 2014
- Bahdanau et al, Neural machine translation by jointly learning to align and translate, ICLR 2015, Arxiv
- Xu et al, Show, Attend and Tell: Neural Image Caption Generation with Visual Attention, ICML 2015, Arxiv
- A. Vaswani et al, Attention Is All You Need, NIPS 2017, Arxiv
- A. Dosovitskiy et al, An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, ICLR 2021
- A. van den Oord et al., Pixel Recurrent Neural Networks, 2016, Arxiv
- C. Doersch, A Tutorial on Variational Autoencoders, 2016, Arxiv
- Ian Goodfellow, NIPS 2016 Tutorial: Generative Adversarial Networks, 2016, Arxiv
- Arjovsky et al, Wasserstein GAN, 2017, Arxiv
- T. White, Sampling Generative Networks, NIPS 2016, Arxiv
- T. Karras et al, Progressive Growing of GANs for Improved Quality, Stability, and Variation, ICLR 2018, Arxiv
- Jun-Yan Zhu et al, Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, ICCV 2017 Arxiv
- Alireza Makhzani et al, Adversarial Autoencoders, NIPS 2016, Arxiv
- Ling Yang et al, Diffusion Models: A Comprehensive Survey of Methods and Applications, 2023, Arxiv
- Jascha Sohl-Dickstein et al, Deep Unsupervised Learning using Nonequilibrium Thermodynamics, ICML 2015, PDF
- Y. Song & S. Ermon, Generative Modeling by Estimating Gradients of the Data Distribution, NeurIPS 2019, PDF
- Jonathan Ho et al, Denoising Diffusion Probabilistic Models, NeurIPS 2020, Arxiv
- P. Dhariwal & A. Nichol, Diffusion Models Beat GANs on Image Synthesis, NeurIPS 2021, PDF