Final Projects (2024)
In the following you can find a list of final project topics, partitioned into survey works (requiring reading and summarization of a number of scientific articles on a theme/model) and coding works (requiring implementing a method/application from the literature). Some of the coding works might involve a higher level of complexity, as they encompass an original modification of an existing model. Coding projects can be developed in teams, provided that the project is of sufficient complexity and that the contributions of the different team members are clearly marked.
Students are also welcome to propose their own topic, but the proposal should come with the following associated information:
- Survey project: a list of 3-4 scientific articles on the topic of interest
- Coding project: a reference article detailing the model to be implemented plus references to the data to be used for training/testing the model.
The following lists might be updated as we go.
Survey Projects
- Advanced Hidden Markov Models – Survey of modern hidden Markov models, including Bayesian non-parametric extensions and applications to structured data.
- Latent topic models - A survey of latent topic models covering one of the following: different learning and inference algorithms; architectural variants (hierarchical topic models, time-varying topics, etc.); or machine vision applications of topic models.
- Markov random fields - A survey of MRFs of different natures (e.g. different applications to vision, different message-passing inference schemes, ...)
- Deep learning for graphs – Survey of recent approaches to structured data processing, including the extension of CNNs to network data and CNNs for variable-sized graphs.
- Deep learning for dynamic graphs – Survey of recent approaches to learning with spatio-temporal and time-varying graphs
- Generative learning for graphs - Survey of recent approaches to the generation of graph structured predictions and their applications
- Information propagation in deep graph networks – Survey of approaches studying how to enhance message passing and long-range information propagation in deep learning models for graphs.
- Graph reduction and pooling – Survey of recent works presenting pooling mechanisms for graph-structured data
- Memory Enhanced Neural Networks - Survey of neural models with multi-scale memory or external memory components.
- Neural reasoning – Survey of recent works on reasoning and pondering in deep models, e.g. Neural Turing Machines, adaptive computation time networks, pondering networks.
- Algorithmic reasoning – Survey of recent works on algorithmic reasoning and learning for optimization problems through deep learning for graphs.
- Deep reinforcement learning - A survey covering foundational as well as more recent deep models for reinforcement learning
- Continual learning in reinforcement learning - A survey of applications of continual learning to reinforcement learning tasks
- Federated and distributed learning - A survey of non-local learning methodologies, possibly on a focused subtopic (e.g. continual + federated, federated reservoir computing, ...)
- Generative Models - Adversarial – Survey comparing some notable recent GAN models from a theoretical, robustness and/or representational perspective.
- Generative Models - Variational – A survey of VAE-based approaches to learn complex distributions exploiting neural models and the reparameterization trick.
- Generative Models - Diffusion - A survey of generative learning via diffusion models
- Generative Models - Flow - A survey of generative learning via flow models
- Generative Models - Comparative - A comparative survey of diffusion models, normalizing flows and energy-based models
- Generative models for images - A survey of recent state-of-the-art methods in image generation (also conditioned on text)
- Adversarial attacks – Review of the main approaches to perform adversarial attacks on neural networks.
- Reliable Deep Learning – Review of recent works on defending neural networks from adversarial attacks.
- Deep learning for music – Survey on deep learning approaches to music representation, generation and style disentanglement.
- Representation learning and disentanglement – A survey of recent approaches addressing representation learning and learning disentangled representations
- Causal learning – A survey on one subtopic of causal learning: e.g. learning structural causal models, BN structure learning, continuous structure learning models, causal models for time series, ...
Coding Projects
- Deep learning for graphs libraries – Implement and empirically compare a couple of variants of neural networks for graphs using one of the available libraries (PyTorch Geometric, DGL); a minimal PyTorch Geometric sketch is provided after this list.
- CNN for brain imaging – Implement a CNN to perform semantic segmentation on brain MR images.
- Neural Turing Machine – Create your own implementation of the NTM and train it on the classical NTM benchmark tasks.
- Neural Algorithmic Reasoning - Learn how to introduce algorithmic knowledge into a graph neural network (using benchmark algorithm traces available in the literature)
- Emotion recognition – Implement an emotion recognition application that connects a pre-trained deep learning model for emotion recognition from images to the webcam, classifying the stream of captured frames according to the emotion shown by the people in them.
- Machine vision for autonomous vehicles – Implement a simple machine vision application exploiting training data available from an autonomous vehicle manufacturer and using one (or more than one) of the technologies seen in the course (CNN, CRF, etc).
- Music generation – Implement a music generation deep neural network leveraging one of the generative approaches discussed during the lectures
- Object detection in videos – Use Keras, OpenCV, ImageAI or whatever framework you are comfortable with to create a simple application performing object detection in videos (e.g. from YouTube or from your webcam). Use a pretrained network for the task (suggestion: YOLO); a minimal frame-capture loop is sketched after this list.
- Deep Reinforcement Learning – Implement a neural-based reinforcement learning agent (e.g. DQN) and experiment with training it on one of the environments available in Gymnasium (see the interaction-loop sketch after this list).
- Benchmarking continual learning in Avalanche - Experiment with a selection of continual learning strategies implemented in Avalanche (on a benchmark dataset, possibly of your choice).
- Extending continual learning in Avalanche - Extend the Avalanche library with a continual learning strategy that is currently unsupported.
- Structure learning - Experiment with some combinatorial and/or continuous optimization approach to learn Bayesian network structures or causal models
- Hyperspectral space images - Create a solution for the ESA challenge on hyperspectral image regression (a reasonable solution is enough; top accuracy is not required for the exam)
- Hyperspectral space images - Create a solution for the ESA challenge on enhanced agriculture (a reasonable solution is enough; top accuracy is not required for the exam)
- Image generation - Train at least two types of generative models among those seen during the lectures (exact, GANs, VAE, diffusion, flow) on an image dataset and compare their performance
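For the deep learning for graphs libraries project, the sketch below shows a minimal two-layer GCN node classifier in PyTorch Geometric; the Cora dataset and the hyperparameters are illustrative assumptions, not requirements of the project.

```python
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

# Cora citation dataset: a single graph with node features and class labels
dataset = Planetoid(root="data/Cora", name="Cora")
data = dataset[0]

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(dataset.num_node_features, 16)
        self.conv2 = GCNConv(16, dataset.num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, training=self.training)
        return self.conv2(x, edge_index)

model = GCN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

model.train()
for epoch in range(200):
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    # supervised loss computed only on the training nodes
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()
```

A comparison with a second convolution variant can be obtained by swapping the GCNConv layers for other PyTorch Geometric layers (e.g. SAGEConv or GATConv).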
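For the Deep Reinforcement Learning project, this is a minimal sketch of the Gymnasium interaction loop with a small Q-network on CartPole-v1 (the environment and network size are assumptions for illustration); a full DQN would additionally require a replay buffer, a target network and epsilon-greedy exploration.

```python
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")

# simple Q-network mapping observations to one value per action
q_net = nn.Sequential(
    nn.Linear(env.observation_space.shape[0], 64),
    nn.ReLU(),
    nn.Linear(64, env.action_space.n),
)

obs, info = env.reset(seed=0)
for step in range(500):
    with torch.no_grad():
        q_values = q_net(torch.as_tensor(obs, dtype=torch.float32))
    action = int(q_values.argmax())  # greedy action; exploration is omitted here
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```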
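For the webcam-based projects (emotion recognition, object detection in videos), the frame-capture side can be handled with an OpenCV loop like the one sketched below; the call to a pre-trained model is left as a placeholder comment, since the choice of model (e.g. YOLO or an emotion classifier) is up to you.

```python
import cv2

cap = cv2.VideoCapture(0)  # webcam index 0 is an assumption; pass a video path for files
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # placeholder: pass `frame` to your pre-trained emotion/object detection model
    # and draw its predictions on the frame before displaying it
    cv2.imshow("stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()
```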