Midterm 4 (2021)
DO NOT SUBMIT THE MIDTERM 4 HERE: DO IT ON THE MOODLE ASSIGNMENT CORRESPONDING TO THE APPELLO YOU ARE INTENDING TO TAKE.
Assignment Rules and Execution
The fourth midterm covers advanced deep learning models and topics. To pass the midterm, you should:
- prepare a short presentation describing the content of one of the papers referenced below and upload it by the (strict) exam deadline of the Appello of your choice;
- give your short presentation before the oral exam.
The midterm presentation MUST last at most 5 minutes and should include at most 5-6 slides, covering:
- A title slide with the paper title and your name
- Introduction to the problem in the paper
- Model description
- Empirical results (a summary, or the most interesting ones)
- A final slide with your personal considerations (novelties, strengths, and weaknesses)
Paper list
- Hierarchical Multiscale RNN - arxiv.org/pdf/1609.01704.pdf
- Neural Stacks - http://papers.nips.cc/paper/5648-learning-to-transduce-with-unbounded-memory.pdf
- Neural reasoning - arxiv.org/pdf/1610.07647.pdf
- Adaptive computation time networks - arxiv.org/pdf/1603.08983.pdf
- Deep reservoir computing - www.sciencedirect.com/science/article/pii/S0925231217307567
- Linear memory networks - www.sciencedirect.com/science/article/pii/S0925231221005932
- Continual learning (progressive memories) - arxiv.org/pdf/1811.00239.pdf
- Continual learning (elastic weight consolidation) - arxiv.org/pdf/1612.00796.pdf
- Continual learning (sequential processing tasks) - arxiv.org/pdf/2004.04077.pdf
- Continual learning (generative replay) - http://papers.nips.cc/paper/6892-continual-learning-with-deep-generative-replay.pdf
- Continual learning (dataset distillation) - arxiv.org/pdf/2103.15851.pdf
- Neural distribution learning - openreview.net/pdf?id=HJDBUF5le
- Wasserstein GAN - arxiv.org/abs/1701.07875v2
- Cycle GAN - arxiv.org/pdf/1703.10593.pdf
- Gumbel GAN (learning to generate discrete objects) - arxiv.org/pdf/1611.04051.pdf
- Creative GANs for art generation - arxiv.org/abs/1706.07068
- Adversarial Autoencoders - arxiv.org/pdf/1511.05644.pdf
- Wasserstein Autoencoders - arxiv.org/pdf/1711.01558.pdf
- Adversarial Autoencoder for music generation - arxiv.org/abs/2001.05494
- Adversarial Attacks - arxiv.org/pdf/1608.04644.pdf
- Defences against adversarial attacks - arxiv.org/pdf/1702.04267.pdf
- Convolutional NN for video processing - arxiv.org/pdf/1711.10305.pdf
- Deep learning for graphs: Survey - arxiv.org/pdf/1912.12693.pdf
- Deep learning for graphs: Theoretical - arxiv.org/pdf/1810.00826.pdf
- Deep learning for graphs: Probabilistic Model - www.jmlr.org/papers/volume21/19-470/19-470.pdf
- Deep learning for graphs: Molecule Generation - arxiv.org/pdf/2002.12826.pdf
- Differentiable Pooling in Graph Convolutional Neural Networks - arxiv.org/abs/1806.08804
- CNN for DNA processing - dx.doi.org/10.1093/bioinformatics/btw255
- Learning Bayesian Networks from COVID-19 data - arxiv.org/pdf/2105.06998.pdf
- Deep learning for robot grasping - arxiv.org/pdf/1301.3592.pdf
- Deep reinforcement learning for robotics - arxiv.org/pdf/1504.00702.pdf
- Deep reinforcement learning (AlphaGo) - www.nature.com/articles/nature16961
- Deep reinforcement learning with external memory - arxiv.org/pdf/1702.08360.pdf
- Theoretical properties of Stochastic Gradient Descent - arxiv.org/pdf/1710.11029.pdf
- Convergence and generalization of Neural Networks - https://arxiv.org/pdf/1806.07572.pdf