Topic outline

  • Intelligent Systems for Pattern Recognition - 9 CFU

    Code: 760AA, Credits (ECTS): 9, Semester: 2, Official Language: English

    Instructor: Davide Bacciu 

    Contact: email - phone 050 2212749

    Office: Room 367, Dipartimento di Informatica, Largo B. Pontecorvo 3, Pisa

    Office Hours: (email to arrange meeting)

    Supporting Instructor: Antonio Carta (email)

  • Course Information

    Weekly Schedule

    The course is held in the second term. The tentative schedule for academic year 2021/22 is provided in the table below.

    The first lecture of the course will be on Tuesday 15/02/2022. The course will be hybrid, both in person and online on the dedicated MS Team.

    Recordings of the lectures will be made available to the students following the course.

    Day | Time | Room
    Tuesday | 14.15-16.00 | Room D1 (Teams meeting)
    Wednesday | 16.15-18.00 | Room A1 (Teams meeting)
    Thursday | 14.15-16.00 | Room A1 (Teams meeting)


    Objectives

    Course Prerequisites

    Course prerequisites include knowledge of machine learning fundamentals (e.g. as covered in the Machine Learning course). Knowledge of elements of probability and statistics, calculus and optimization algorithms is also expected. Previous programming experience with Python is a plus for the practical lectures.

    Course Overview

    The course introduces students to the analysis and design of advanced machine learning and deep learning models for modern pattern recognition problems and discusses how to realize advanced applications exploiting computational intelligence techniques.

    The course is articulated in five parts. The first part introduces basic concepts and algorithms of traditional pattern recognition, in particular as it pertains to sequence and image analysis. The next two parts introduce advanced models from two major learning paradigms, namely deep neural networks and generative models, and their use in pattern recognition applications. The fourth part introduces the fundamentals of reinforcement learning and deep reinforcement learning. The final part of the course presents selected recent works, models and applications of learning models.

    Presentation of the theoretical models and associated algorithms will be complemented by introductory classes on the most popular software libraries used to implement them.

    The course hosts guest seminars by national and international researchers working in the field, as well as by companies engaged in the development of advanced applications using machine learning models.

    The official language of the course is English: all materials, references and books are in English. Lecture slides will be made available here, together with suggested readings.

    Topics covered: Bayesian learning, graphical models, learning with sampling and variational approximations, fundamentals of deep learning (CNNs, AE, DBN, GRNs), deep learning for machine vision and signal processing, advanced deep learning models (transformers, VAE, GANs, NTMs), deep graph networks, reinforcement learning and deep reinforcement learning, signal processing and time-series analysis, image processing, filters and visual feature detectors, pattern recognition applications (machine vision, bio-informatics, robotics, medical imaging, etc.), introduction to programming libraries and frameworks.


    Textbooks and Teaching Materials

    The course does not have an official textbook covering all of its contents. However, a list of reference books covering parts of the course is provided below (note that all of them have an electronic version freely available online).

    [BRML] David Barber, Bayesian Reasoning and Machine Learning, Cambridge University Press (PDF)

    [DL] Ian Goodfellow and Yoshua Bengio and Aaron Courville , Deep Learning, MIT Press (ONLINE)

    [RL] Richard S. Sutton and Andrew G. Barto, Reinforcement Learning: An Introduction, Second Edition, MIT Press, 2018 (PDF)


    • Introduction (2h)

      Introduction to the course philosophy, its learning goals and expected outcomes. We will discuss the overall structure of the course and the interrelations between its parts. Exam modalities and the exam schedule are also discussed.
      1 15/02/2022
      (14-16)
      Introduction to the course
      Motivations and aim; course housekeeping (exams, timetable, materials); introduction to modern pattern recognition applications

    • Fundamentals of Pattern Recognition (6h)

      The module provides a brief introduction to classical pattern recognition for signals/time series and for images. We will cover approaches working in the spatial (temporal) and frequency (spectral) domains, presenting methods to represent temporal and visual information in static descriptors, as well as approaches to identify relevant patterns in the data (feature descriptors). Methodologies covered include correlation analysis, Fourier analysis, wavelets, intensity gradient-based descriptors and detectors, and normalized cut segmentation.
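      As a taste of the time- and frequency-domain tools covered in this module, here is a minimal NumPy sketch (a toy illustration, not course material) computing the autocorrelation and the Fourier amplitude spectrum of a noisy sinusoid:

        import numpy as np

        # Noisy 5 Hz sinusoid sampled at 100 Hz: a toy time series.
        fs = 100.0
        t = np.arange(0, 5, 1 / fs)
        x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)

        # Time domain: normalized autocorrelation, positive lags only.
        xc = x - x.mean()
        acf = np.correlate(xc, xc, mode="full")[x.size - 1:]
        acf /= acf[0]

        # Frequency domain: amplitude spectrum via the FFT.
        freqs = np.fft.rfftfreq(x.size, d=1 / fs)
        spectrum = np.abs(np.fft.rfft(x)) / x.size

        print("dominant frequency: %.1f Hz" % freqs[spectrum.argmax()])  # ~5 Hz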

      2
      16/02/2022
      (16-18)
      Signal processing
      Time series; time-domain analysis (statistics, correlation); spectral analysis; Fourier analysis.


      3 17/02/2022
      (14-16)
       Image Processing I
       Spatial feature descriptors (color histograms, SIFT); spectral analysis.
        Additional readings
       [1] Survey on visual descriptors
       4 22/02/2022
      (14-16)
       Image Processing II
       Feature detectors (edge, blobs); image segmentation; wavelet decompositions
        Additional readings 
      [2] Survey on visual feature detectors

      Software

      A wavelet browser to visualize some popular wavelet families and their instances, powered by the PyWavelet library.
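      For quick experimentation, a minimal sketch using the PyWavelet (pywt) library to compute and invert a multilevel wavelet decomposition (the db4 wavelet choice is arbitrary):

        import numpy as np
        import pywt

        # Toy signal: a ramp followed by a sharp discontinuity.
        x = np.concatenate([np.linspace(0, 1, 256), np.zeros(256)])

        # Three-level discrete wavelet decomposition with Daubechies-4.
        coeffs = pywt.wavedec(x, "db4", level=3)
        cA3, cD3, cD2, cD1 = coeffs  # approximation + detail coefficients

        # Perfect reconstruction from the coefficients.
        x_rec = pywt.waverec(coeffs, "db4")
        print(np.allclose(x, x_rec[: x.size]))  # True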

    • Generative Learning (20h)

      The module introduces probabilistic learning, causal models, generative modelling and Bayesian learning. We will discuss fundamental algorithms and concepts, including Expectation-Maximization, sampling and variational approximations, and we will study relevant models from the three fundamental paradigms of probabilistic learning, namely Bayesian networks, Markov networks and dynamic models. Models covered include: Bayesian networks, hidden Markov models, Markov random fields, Boltzmann machines, latent topic models.


        5
      23/02/2022
      (16-18)
      Introduction to Generative and Graphical Models
      Probability refresher; graphical model representation; directed and undirected models
      [BRML] Ch. 1 and 2 (Refresher)
      [BRML] Sect. 3.1, 3.2 and 3.3.1 (conditional independence)
      Software
      • Pyro - Python library based on PyTorch
      • PyMC3 - Python library based on Theano
      • Edward - Python library based on TensorFlow
      • TensorFlow Probability - Probabilistic models and deep learning in Tensorflow
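      To give a flavour of how these libraries express directed models, a minimal Pyro sketch of a two-node network z → x with ancestral sampling (a toy example, not taken from the lecture):

        import torch
        import pyro
        import pyro.distributions as dist

        def model():
            # z -> x: a binary latent parent and a Gaussian child,
            # i.e. a two-node Bayesian network.
            z = pyro.sample("z", dist.Bernoulli(0.5))
            x = pyro.sample("x", dist.Normal(2.0 * z, 1.0))
            return x

        # Ancestral sampling: draw z from its prior, then x given z.
        samples = torch.stack([model() for _ in range(1000)])
        print(samples.mean())  # close to 0.5 * 2.0 = 1.0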
        6 
      24/02/2022
      (14-16)
      Conditional independence and causality - Part I
      Bayesian networks; Markov networks; conditional independence

      [BRML] Sect. 3.3 (Directed Models)
      [BRML] Sect. 4.1, 4.2.0-4.2.2 (Undirected Models)
      [BRML] Sect. 4.5 (Expressiveness)

        7
      01/03/2022
      (14-16)
      Conditional independence and causality - Part II
      d-separation; structure learning in Bayesian Networks
      [BRML] Sect. 9.5.1 (PC algorithm)
      [BRML] Sect. 9.5.2 (Independence testing)
      [BRML] Sect. 9.5.3 (Structure scoring)
      Additional readings
      [3] A short review of BN structure learning
      [4] PC algorithm with consistent ordering for large scale data
      [5] MMHC - Hybrid structure learning algorithm

      Software
      - A selection of BN structure learning libraries in Python: pgmpy, bnlearn, pomegranate.
      - bnlearn: the most consolidated and efficient library for BN structure learning (in R)
      - Causal learner: a mixed R-Matlab package integrating over 26 BN structure learning algorithms.
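      For a concrete starting point, a small sketch of score-based structure learning with pgmpy (class names such as HillClimbSearch and BicScore follow recent pgmpy releases and may differ in other versions; the data here is synthetic):

        import numpy as np
        import pandas as pd
        from pgmpy.estimators import HillClimbSearch, BicScore

        # Synthetic data from a known chain A -> B -> C.
        rng = np.random.default_rng(0)
        a = rng.integers(0, 2, 5000)
        b = (a ^ (rng.random(5000) < 0.1)).astype(int)  # noisy copy of A
        c = (b ^ (rng.random(5000) < 0.1)).astype(int)  # noisy copy of B
        data = pd.DataFrame({"A": a, "B": b, "C": c})

        # Hill climbing over DAGs, scored by BIC. Note that only the
        # Markov equivalence class is identifiable: edges may be reversed.
        hc = HillClimbSearch(data)
        dag = hc.estimate(scoring_method=BicScore(data))
        print(dag.edges())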

        8
      02/03/2022
      (16-18)
      Hidden Markov Models  - Part I
      learning in directed graphical models; forward-backward algorithm; generative models for sequential data
      [BRML] Sect. 23.1.0 (Markov Models)
      [BRML] Sect. 23.2.0-23.2.4 (HMM and forward backward) 
      Additional Readings
      [6]  A classical tutorial introduction to HMMs
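      A minimal NumPy sketch of the forward (alpha) recursion for a discrete HMM; for clarity it omits the rescaling (or log-space arithmetic) needed for long sequences:

        import numpy as np

        def forward(obs, A, B, pi):
            """alpha[t, i] = p(o_1..o_t, q_t = i); returns alphas and p(o_1..o_T)."""
            T, N = len(obs), A.shape[0]
            alpha = np.zeros((T, N))
            alpha[0] = pi * B[:, obs[0]]
            for t in range(1, T):
                alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            return alpha, alpha[-1].sum()

        # Toy HMM: two hidden states, two output symbols.
        A = np.array([[0.7, 0.3], [0.4, 0.6]])   # transition matrix
        B = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission matrix
        pi = np.array([0.5, 0.5])                # initial distribution
        alpha, likelihood = forward([0, 1, 0], A, B, pi)
        print(likelihood)  # p(observations | model)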
        9
      03/03/2022
      (14-16)
      Hidden Markov Models - Part II
      EM algorithm; learning as inference; Viterbi algorithm
      [BRML] Sect. 23.2.6 (Viterbi)
      [BRML] Sect. 23.3.1-23.3.4 (EM and learning)
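      A matching NumPy sketch of Viterbi decoding, done in log-space to avoid underflow, with the same toy HMM parameterization as above:

        import numpy as np

        def viterbi(obs, A, B, pi):
            """Most likely state sequence via max-product dynamic programming."""
            T, N = len(obs), A.shape[0]
            logA, logB = np.log(A), np.log(B)
            delta = np.log(pi) + logB[:, obs[0]]
            backptr = np.zeros((T, N), dtype=int)
            for t in range(1, T):
                scores = delta[:, None] + logA        # scores[i, j]: i -> j
                backptr[t] = scores.argmax(axis=0)    # best predecessor of j
                delta = scores.max(axis=0) + logB[:, obs[t]]
            path = [int(delta.argmax())]
            for t in range(T - 1, 0, -1):             # backtrack
                path.append(int(backptr[t, path[-1]]))
            return path[::-1]

        A = np.array([[0.7, 0.3], [0.4, 0.6]])
        B = np.array([[0.9, 0.1], [0.2, 0.8]])
        pi = np.array([0.5, 0.5])
        print(viterbi([0, 0, 1, 1, 0], A, B, pi))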

        10
      08/03/2022
      (14-16)
      HMM III + Markov Random Fields
      learning in undirected graphical models; conditional random fields; pattern recognition applications

      [BRML] Sect. 4.2.2, 4.2.5 (MRF)
      [BRML] Sect. 4.4 (Factor Graphs)
      [BRML] Sect. 5.1.1 (Variable Elimination and Inference on Chain) 
      [BRML] Sect. 9.6.0, 9.6.1, 9.6.4, 9.6.5 (Learning in MRF/CRF)
      Additional Readings
      [7,8] Two comprehensive tutorials on CRF ([7] more introductory and [8] more focused on vision)
      [9] A nice application of CRF to image segmentation

      Software
      Check out pgmpy: it has Python notebooks introducing how to work with MRFs/CRFs.
        11
      09/03/2022
      (16-18)
      Bayesian Learning I
      Principles of Bayesian learning; EM algorithm objective; principles of variational approximation
      [BRML] Sect. 11.2.1 (Variational EM)
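      The identity connecting EM and variational approximation is the ELBO decomposition of the log-likelihood; for observation x, latent variable z and any distribution q(z):

      \[
      \log p(x \mid \theta) = \underbrace{\mathbb{E}_{q(z)}\!\left[\log \frac{p(x, z \mid \theta)}{q(z)}\right]}_{\text{ELBO } \mathcal{L}(q,\theta)} + \mathrm{KL}\!\left(q(z) \,\|\, p(z \mid x, \theta)\right)
      \]

      Since the KL term is non-negative, \( \mathcal{L}(q,\theta) \) lower-bounds the log-likelihood: the E-step maximizes it over q (tightening the bound), the M-step over \( \theta \), and variational methods restrict q to a tractable family.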
        12
      10/03/2022
      (14-16)
      Bayesian Learning II
      latent topic models; Latent Dirichlet Allocation; machine vision application of latent topic models
      [BRML] Sect. 20.4-20.6.1 (LDA)
      Additional Readings
      [10] LDA foundation paper
      [11] A gentle introduction to latent topic models
      [12] Foundations of bag of words image representation
        13
      14/03/2022
      (11-13)
      ROOM C1
      Bayesian Learning III
      sampling methods; ancestral sampling; Gibbs sampling and Monte Carlo methods

      Guest lecture by Daniele Castellana 

      [BRML] Sect. 27.1-27.3, 27.4.1, 27.6.2
      Additional Readings
      [13] A step-by-step derivation of collapsed Gibbs sampling for LDA
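      A minimal Gibbs sampling sketch for a bivariate Gaussian, where both full conditionals are available in closed form, to illustrate the alternating conditional draws:

        import numpy as np

        # Target: zero-mean bivariate Gaussian with correlation rho.
        # Gibbs alternates draws from the exact conditionals
        #   x1 | x2 ~ N(rho * x2, 1 - rho^2),  x2 | x1 ~ N(rho * x1, 1 - rho^2).
        rho, n_samples, burn_in = 0.8, 20000, 1000
        sd = np.sqrt(1 - rho ** 2)
        rng = np.random.default_rng(0)

        x1, x2, samples = 0.0, 0.0, []
        for _ in range(n_samples):
            x1 = rng.normal(rho * x2, sd)
            x2 = rng.normal(rho * x1, sd)
            samples.append((x1, x2))

        chain = np.array(samples[burn_in:])  # discard burn-in
        print(np.corrcoef(chain.T)[0, 1])    # close to rho = 0.8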
        14
      15/03/2022
      (14-16)
      Boltzmann Machines
      bridging neural networks and generative models; stochastic neuron; restricted Boltzmann machine; contrastive divergence
      [DL] Sections 20.1 and 20.2
      Additional Readings
      [14] A clean and clear introduction to RBM from its author

      Software
      Matlab code for Deep Belief Networks (i.e. stacked RBM) and Deep Boltzmann Machines.
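      The core of contrastive divergence (CD-1) for a binary RBM fits in a few lines; a NumPy sketch of one gradient step on a toy mini-batch (biases omitted for brevity):

        import numpy as np

        rng = np.random.default_rng(0)
        sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

        n_visible, n_hidden, lr = 6, 4, 0.1
        W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        v0 = rng.integers(0, 2, (32, n_visible)).astype(float)  # toy batch

        # Positive phase: hidden activations given the data.
        ph0 = sigmoid(v0 @ W)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)

        # Negative phase: one step of Gibbs sampling (CD-1).
        pv1 = sigmoid(h0 @ W.T)
        v1 = (rng.random(pv1.shape) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W)

        # Approximate likelihood gradient: <v h>_data - <v h>_model.
        W += lr * (v0.T @ ph0 - v1.T @ ph1) / v0.shape[0]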
       

    • Deep Learning (24h)

      The module presents the fundamental concepts, challenges, architectures and methodologies of deep learning. We introduce the learning of neural representations from vectorial, sequential and image data, covering both supervised and unsupervised learning, and hinting at various forms of weak supervision. We close the gap between neural networks and probabilistic learning by discussing generative deep learning models. Models covered include: deep autoencoders, convolutional neural networks, long short-term memory, gated recurrent units, deep reservoir computing, sequence-to-sequence, neural attention, neural Turing machines, variational autoencoders, generative adversarial networks. Methodological lectures will be complemented by introductory seminars on Keras-TF and PyTorch.

      15
      16/03/2022
      (16-18)
      Convolutional Neural Networks I
      Introduction to the deep learning module; introduction to CNNs; basic CNN elements
      [DL] Chapter 9
      Additional Readings
      [15-19] Original papers for LeNet, AlexNet, VGGNet, GoogLeNet and ResNet.
      16
      17/03/2022
      (14-16)
       Convolutional Neural Networks II
      CNN architectures for image recognition; convolution visualization; advanced topics (deconvolution, dense nets); applications and code
       [DL] Chapter 9
      Additional Readings
      [20] Complete summary of convolution arithmetics
      [21] Seminal paper on batch normalization
      [22] CNN interpretation using deconvolutions
      [23] Sanity checks for saliency-based CNN interpretations
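      A minimal PyTorch sketch of the basic CNN elements from these lectures (convolution, nonlinearity, pooling, final linear classifier), sized for 28x28 single-channel inputs:

        import torch
        import torch.nn as nn

        class SmallCNN(nn.Module):
            def __init__(self, n_classes=10):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 28x28 -> 28x28
                    nn.ReLU(),
                    nn.MaxPool2d(2),                             # -> 14x14
                    nn.Conv2d(16, 32, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.MaxPool2d(2),                             # -> 7x7
                )
                self.classifier = nn.Linear(32 * 7 * 7, n_classes)

            def forward(self, x):
                x = self.features(x)
                return self.classifier(x.flatten(start_dim=1))

        logits = SmallCNN()(torch.randn(8, 1, 28, 28))
        print(logits.shape)  # torch.Size([8, 10])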
        22/03/2022 (14-16) - LECTURE CANCELLED (recovered on 14 March)
        23/03/2022 (16-18) - LECTURE CANCELLED (recovered on 25 March)
        24/03/2022 (14-16) - LECTURE CANCELLED (recovered on 28 March)
       17  25/03/2022
      (14-16)
      ROOM D1 + ONLINE
       Deep Autoencoders
      Sparse, denoising and contractive AE; deep RBM
       [DL] Chapter 14, Sect. 20.3, 20.4.0 (from 20.4.1 onwards not needed)
      Additional Readings
      [24] DBN: the paper that started deep learning
      [25] Deep Boltzmann machines paper
      [26] Review paper on deep generative models
      [27] Long review paper on autoencoders from the perspective of representation learning
      [28] Paper discussing regularized autoencoder as approximations of likelihood gradient
       18  28/03/2022
      (16-18)
      ROOM C
       Gated Recurrent Networks I
      Deep learning for sequence processing; gradient issues
       [DL] Sections 10.1-10.3, 10.5-10.7, 10.10, 10.11
      Additional Readings
      [29] Paper describing gradient vanishing/explosion
      [30] Original LSTM paper
      [31] A historical view on gated RNNs
      [32] Gated recurrent units paper
      [33] Seminal paper on dropout regularization
       19  29/03/2022
      (14-16)
       Gated Recurrent Networks II
      long short-term memory; gated recurrent units; generative use of RNNs
       20 30/03/2022
      (16-18)
       Coding practice I - TensorFlow
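      For orientation before the lab, a minimal Keras sketch of defining, compiling and fitting a model on random toy data (all hyperparameters are placeholders):

        import numpy as np
        import tensorflow as tf

        # Toy data: 100 samples, 20 features, binary labels.
        x = np.random.randn(100, 20).astype("float32")
        y = np.random.randint(0, 2, size=(100,))

        model = tf.keras.Sequential([
            tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        model.fit(x, y, epochs=3, batch_size=16, verbose=0)
        print(model.evaluate(x, y, verbose=0))  # [loss, accuracy]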
         
       21 31/03/2022
      (14-16) 
       Coding practice II - PyTorch
         
       22  05/04/2022
      (14-16)
      Deep Randomized Networks - Guest lecture by Claudio Gallicchio
      reservoir computing; randomized models; echo state networks
         
       23  06/04/2022
      (16-18)
      Advanced Recurrent Architectures and Attention
      sequence-to-sequence;  attention models; multiscale network; hierarchical models
       [DL] Sections 10.12, 12.4.5
      Additional Readings
      [34,35] Models of sequence-to-sequence and image-to-sequence transduction with attention
      [36,37] Models optimizing dynamic memory usage (clockwork RNN, zoneout)
      [38] Transformer networks: a paper on the power of attention without recurrence
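      The scaled dot-product attention underlying [34-38] is compact enough to sketch directly (single head, no masking):

        import torch
        import torch.nn.functional as F

        def attention(Q, K, V):
            """softmax(Q K^T / sqrt(d_k)) V, as in the transformer paper [38]."""
            d_k = Q.size(-1)
            scores = Q @ K.transpose(-2, -1) / d_k ** 0.5  # (..., L_q, L_k)
            weights = F.softmax(scores, dim=-1)            # attention distribution
            return weights @ V, weights

        Q = torch.randn(2, 5, 64)   # batch of 2, query length 5, d_k = 64
        K = torch.randn(2, 7, 64)   # key/value length 7
        V = torch.randn(2, 7, 64)
        out, w = attention(Q, K, V)
        print(out.shape, w.shape)   # (2, 5, 64) (2, 5, 7)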
       24  07/04/2022
      (14-16)
      Neural Reasoning
      memory networks; neural Turing machines
         Additional Readings
      [39] Differentiable memory networks
      [40,41] Neural Turing Machines and follow-up paper on pondering networks
       25  12/04/2022
      (14-16)
      Unsupervised and Generative Deep Learning I
      explicit distribution models; neural ELBO; variational autoencoders
      [DL] Sections 20.9, 20.10.1-20.10.3
      Additional Readings
      [42] PixelCNN - Explicit likelihood model
      [43] Tutorial on VAE

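      The neural ELBO objective of a Gaussian-latent VAE reduces to a reconstruction term plus a closed-form KL term; a PyTorch sketch of the loss and the reparameterization trick (see [43]):

        import torch
        import torch.nn.functional as F

        def vae_loss(x_recon, x, mu, logvar):
            """Negative ELBO: reconstruction + KL(q(z|x) || N(0, I)).

            mu, logvar parameterize the diagonal Gaussian encoder q(z|x);
            x_recon are Bernoulli logits from the decoder p(x|z).
            """
            recon = F.binary_cross_entropy_with_logits(x_recon, x, reduction="sum")
            kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
            return recon + kl

        def reparameterize(mu, logvar):
            """z = mu + sigma * eps keeps sampling differentiable in mu, sigma."""
            eps = torch.randn_like(mu)
            return mu + torch.exp(0.5 * logvar) * eps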
       26  13/04/2022
      (16-18)
      Unsupervised and Generative Deep Learning II
      generative adversarial networks; adversarial autoencoders

      [DL] Section 20.10.4
       Additional Readings
      [44] Tutorial on GAN (here another online resource with GAN tips)
      [45] Wasserstein GAN
      [46] Tutorial on sampling neural networks
      [47] Progressive GAN
      [48] Cycle Gan
      [49] Seminal paper on Adversarial AEs


    • Reinforcement Learning (14h)

      We formalise the reinforcement learning problem by grounding it in Markov decision processes, and we provide an overview of the main approaches to designing reinforcement learning agents, including model-based, model-free, value-based and policy-based learning. We link classical approaches with modern deep-learning-based approximators (deep reinforcement learning) and overview the main programming frameworks available. Methodologies covered include: dynamic programming, MC learning, TD learning, SARSA, Q-learning, deep Q-learning, policy gradient and deep policy gradient, and MC tree search.


       28 21/04/2022
      (14-16)
      Reinforcement learning fundamentals
      reinforcement learning problems; environment; agent; actions and policies; taxonomy of approaches
       [RL] Chapter 1
      Software
      OpenAI Gym for RL environments and tasks
       29 26/04/2022
      (14-16) 
      Markov Decision Processes
      formal model of RL problems; rewards; returns; Bellman expectation and optimality
       [RL] Chapter 3  
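      For reference, the Bellman optimality equation for the state-value function that this lecture builds towards, in Sutton-Barto notation:

      \[
      v_*(s) = \max_{a} \sum_{s', r} p(s', r \mid s, a)\,\bigl[r + \gamma\, v_*(s')\bigr]
      \]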
       30 27/04/2022
      (14-16)
      Model-Based Planning
      dynamic programming; policy evaluation; policy iteration; value iteration
       [RL] Chapter 4
      Software
      Dynamic programming demo on Gridworld in Javascript (with code)
       31 28/04/2022
      (14-16) 
       Model-free reinforcement learning
      model-free prediction; model-free control; Monte Carlo methods; TD learning; SARSA; Q-learning
       
      [RL] Section 5.1-5.6, 6.1-6.6, 7.1, 7.2, 12.1, 12.2, 12.7
       Additional reading:
      [50] The original Q-learning paper

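      A tabular Q-learning sketch on a Gym toy environment (the reset/step API follows gym ~0.21, contemporary with this course edition; newer Gym/Gymnasium releases return extra values):

        import numpy as np
        import gym

        env = gym.make("FrozenLake-v1")
        Q = np.zeros((env.observation_space.n, env.action_space.n))
        alpha, gamma, eps = 0.1, 0.99, 0.1

        for episode in range(5000):
            s = env.reset()
            done = False
            while not done:
                # Epsilon-greedy behaviour policy.
                a = env.action_space.sample() if np.random.rand() < eps \
                    else int(Q[s].argmax())
                s2, r, done, _ = env.step(a)
                # Off-policy TD target: r + gamma * max_a' Q(s', a').
                Q[s, a] += alpha * (r + gamma * Q[s2].max() * (not done) - Q[s, a])
                s = s2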

        03/05/2022 (14-16) - NO LECTURE
        04/05/2022 (16-18) - NO LECTURE
       32 05/05/2022
      (14-16) 
       Value-function Approximation
      linear incremental methods; batch value function approximation; deep Q-learning; linear least-squares control
       [RL] Section 9.1-9.5, 9.8, 10.1, 11.1-11.5
      Additional Reading:
      [51] Original DQN paper
      [52] Double Q-learning
      [53] Dueling Network Architectures
      [54] Prioritized Replay
       33 09/05/2022
      (16-18) - ROOM C
       Policy gradient methods
       [RL] Chapter 13
      Additional Reading:
      [55] Original REINFORCE paper
      [56] Learning with the actor-critic architecture
      [57] Accessible reference to natural policy gradient
      [58] A3C paper
      [59] Deep Deterministic Policy Gradient
      [60] TRPO paper
      [61] PPO paper
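      The policy gradient theorem underlying the papers above, in its REINFORCE (Monte Carlo) form [55]:

      \[
      \nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\!\left[\, G_t \, \nabla_\theta \log \pi_\theta(A_t \mid S_t) \,\right]
      \]

      where subtracting a baseline from the return \( G_t \) (as in actor-critic methods [56]) reduces variance without biasing the gradient estimate.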
       34 10/05/2022
      (14-16) 
       Integrating Learning and Planning
       [RL] Chapter 8, Sect. 16.6
      Additional Reading:
      [62] UCT paper: the introduction of Monte-Carlo planning
      [63] MoGo: the grandfather of AlphaGo (RL using offline and online experience)
      [64] AlphaGo paper
      [65] AlphaGo without human bootstrap

    • Advanced Topics and Applications (8h)

      The module covers some recent and interesting developments and research topics in the field of machine learning. The choice of topics is likely to vary with each edition. Example topics include: deep learning for graphs, learning with structured data, continual learning, distributed learning, learning-reasoning integration, edge AI, etc. The module concludes with a final lecture that discusses the course content retrospectively and details the exam modalities, topics and deadlines.

       27  20/04/2022
      (16-18)
       Continual Learning - Guest lecture by Vincenzo Lomonaco

         
       35 11/05/2022
      (16-18)

      Deep learning for graphs
       
      Software
      - PyDGN: our in-house DLG library
      - PyTorch geometric
      - Deep graph library

      Additional readings
      [66-67] Seminal works on neural networks for graphs
      [68] Recent tutorial paper
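      A minimal PyTorch Geometric sketch of a two-layer graph convolutional network for node classification (assumes torch_geometric is installed; all sizes are arbitrary):

        import torch
        import torch.nn.functional as F
        from torch_geometric.nn import GCNConv

        class GCN(torch.nn.Module):
            def __init__(self, in_dim, hidden, n_classes):
                super().__init__()
                self.conv1 = GCNConv(in_dim, hidden)
                self.conv2 = GCNConv(hidden, n_classes)

            def forward(self, x, edge_index):
                x = F.relu(self.conv1(x, edge_index))  # message passing, layer 1
                return self.conv2(x, edge_index)       # per-node class scores

        # Toy graph: 4 nodes, undirected edges given in both directions.
        edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                                   [1, 0, 2, 1, 3, 2]])
        x = torch.randn(4, 8)                          # node feature matrix
        out = GCN(8, 16, 3)(x, edge_index)
        print(out.shape)  # torch.Size([4, 3])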
       36
      12/05/2022
      (14-16)
      Final lecture

       37
      20/05/2022
      (11-13) ROOM D1
      Research seminars by Ph.D. students

      Draft programme:
      Andrea Cossu - The reasonable effectiveness of pre-trained models in Continual Learning
      Michele Resta - Continual Incremental Language Learning for Neural Machine Translation
      Riccardo Massidda - Ontology-Driven Semantic Alignment of Artificial Neurons
      Danilo Numeroso - Neural Algorithmic Reasoning
      Francesco Landolfi - Graph Pooling with Maximum Weight k-Independent Sets


    • Course Grading and Exams

      The typical course examination (for students attending the lectures) is performed in two stages: midterm assignments and an oral exam. Midterms waive the final project.

      Midterm Assignment

      Midterms consist of short assignments involving one of the following tasks:

      • A quick and dirty (but working) implementation of a simple pattern recognition algorithm
      • A report concerning the experience of installing and running a demo application realized using available deep learning and machine learning libraries
      • A summary of a recent research paper on topics/models related to the course content.

      The midterms can consist of either the delivery of code (e.g. a Colab notebook) or a short slide deck (no more than 10 slides) presenting the key/most interesting aspects of the assignment.

      Students might be given some freedom in the choice of assignments, pending a reasonable choice of topic. Assignments will be scheduled roughly every 3-4 weeks.

      Oral Exam

      The oral examination will test knowledge of the course contents (models, algorithms and applications).

      Exam Grading (with Midterms)

      The final exam grade is given by the oral grade. The midterms only waive the final project; they do not contribute to the grade. In other words, you can only pass or fail a midterm. You need to pass all midterms in order to successfully waive the final project.

      Alternative Exam Modality (No Midterms / Non attending students)

      Working students, students not attending lectures, and those who have failed the midterms (or simply do not wish to take them) can complete the course by delivering a final project and taking an oral exam. Final project topics will be released in the final weeks of the course: contact the instructor by email to arrange the choice of topic once these are published.

      The final project involves either preparing a report on a topic relevant to the course content, or realizing software implementing a non-trivial learning model and/or a pattern recognition application relevant to the course. The content of the final project will be discussed in front of the instructor, and anybody interested, during the oral examination. Students are expected to prepare slides for a 15-minute presentation summarizing the ideas, models and results in the report. The exposition should demonstrate a solid understanding of the main ideas in the report.

      Grade for this exam modality is determined as

       \( G = 0.5 \cdot (G_P + G_O) \)

      where \( G_P \in [1,32] \) is the project grade and \( G_O \in [1,30] \) is the oral grade.

      • Midterms and Projects

      • Bibliography


        Bibliographic References

        1. Scott Krigg, Interest Point Detector and Feature Descriptor Survey, Computer Vision Metrics, pp 217-282, Open Access Chapter
        2. Tinne Tuytelaars and Krystian Mikolajczyk, Local Invariant Feature Detectors: A Survey, Foundations and Trends in Computer Graphics and Vision, Vol. 3, No. 3 (2007) 177–2, Online Version
        3. C. Glymour, Kun Zhang and P. Spirtes, Review of Causal Discovery Methods Based on Graphical Models, Front. Genet., 2019, Online version
        4. Bacciu, D., Etchells, T. A., Lisboa, P. J., & Whittaker, J. (2013). Efficient identification of independence networks using mutual information. Computational Statistics, 28(2), 621-646, Online version
        5. Tsamardinos, I., Brown, L.E. & Aliferis, C.F. The max-min hill-climbing Bayesian network structure learning algorithm. Mach Learn 65, 31–78 (2006), Online version
        6. Lawrence R. Rabiner, A tutorial on hidden Markov models and selected applications in speech recognition, Proceedings of the IEEE, 1989, pages 257-286, Online Version
        7. Charles Sutton and Andrew McCallum,  An Introduction to Conditional Random Fields, Arxiv
        8. Sebastian Nowozin and Christoph H. Lampert, Structured Learning and Prediction, Foundations and Trends in Computer Graphics and Vision, Online Version
        9. Philipp Krahenbuhl, Vladlen Koltun, Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials, Proc.of NIPS 2011, Arxiv
        10. D. Blei, A. Y. Ng, M. I. Jordan. Latent Dirichlet Allocation. Journal of Machine Learning Research, 2003
        11. D. Blei. Probabilistic topic models. Communications of the ACM, 55(4):77–84, 2012, Free Online Version
        12. G. Csurka, C. R. Dance, L. Fan, J. Willamowski, and C. Bray. Visual Categorization with Bags of Keypoints. Workshop on Statistical Learning in Computer Vision. ECCV 2004, Free Online Version
        13. W. M. Darling, A Theoretical and Practical Implementation Tutorial on Topic Modeling and Gibbs Sampling, Lecture notes
        14. Geoffrey Hinton, A Practical Guide to Training Restricted Boltzmann Machines, Technical Report 2010-003, University of Toronto, 2010
        15. Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard and L. D. Jackel. Handwritten digit recognition with a back-propagation network, Advances in Neural Information Processing Systems, NIPS, 1989
        16. A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems, NIPS, 2012
        17. K. Simonyan and A. Zisserman, Very deep convolutional networks for large-scale image recognition, ICLR 2015, Free Online Version
        18. C. Szegedy et al,  Going Deeper with Convolutions, CVPR 2015, Free Online Version
        19. K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. CVPR 2016, Free Online Version
        20. V. Dumoulin, F. Visin, A guide to convolution arithmetic for deep learning, Arxiv
        21. S. Ioffe, C. Szegedy, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, ICML 2015, Arxiv
        22. M.D. Zeiler and R. Fergus, Visualizing and Understanding Convolutional Networks, ECCV 2014, Arxiv
        23. J. Adebayo et al, Sanity Checks for Saliency Maps, NeurIPS, 2018
        24. G.E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks.Science 313.5786 (2006): 504-507, Free Online Version
        25. G.E. Hinton, R. R. Salakhutdinov. Deep Boltzmann Machines. AISTATS 2009, Free online version.
        26. R. R. Salakhutdinov. Learning Deep Generative Models, Annual Review of Statistics and Its Application, 2015, Free Online Version
        27. Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. Pattern Analysis and Machine Intelligence, IEEE Transactions on, Vol. 35(8) (2013): 1798-1828, Arxiv.
        28. G. Alain, Y. Bengio. What Regularized Auto-Encoders Learn from the Data-Generating Distribution, JMLR, 2014.
        29. Y. Bengio, P. Simard and P. Frasconi, Learning long-term dependencies with gradient descent is difficult. TNN, 1994, Free Online Version
        30. S. Hochreiter, J. Schmidhuber, Long short-term memory, Neural Computation, 1997, Free Online Version
        31. K. Greff et al, LSTM: A Search Space Odyssey, TNNLS 2016, Arxiv
        32. C. Kyunghyun et al, Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation, EMNLP 2014, Arxiv
        33. N. Srivastava et al, Dropout: A Simple Way to Prevent Neural Networks from Overfitting, JMLR 2014
        34. Bahdanau et al, Neural machine translation by jointly learning to align and translate, ICLR 2015, Arxiv
        35. Xu et al, Show, Attend and Tell: Neural Image Caption Generation with Visual Attention, ICML 2015, Arxiv
        36. Koutník et al, A Clockwork RNN, ICML 2014, Arxiv
        37. Krueger, Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activation, ICLR 2018, Arxiv
        38. A. Vaswani et al, Attention Is All You Need, NIPS 2017, Arxiv
        39. Sukhbaatar et al, End-to-end Memory Networks, NIPS 2015, Arxiv
        40. A. Graves et al, Neural Turing Machines, Arxiv
        41. A. Graves, Adaptive Computation Time for Recurrent Neural Networks, Arxiv
        42. A. van den Oord et al., Pixel Recurrent Neural Networks, 2016, Arxiv
        43. C. Doersch, A Tutorial on Variational Autoencoders, 2016, Arxiv
        44. Ian Goodfellow, NIPS 2016 Tutorial: Generative Adversarial Networks, 2016, Arxiv
        45. Arjovsky et al, Wasserstein GAN, 2017, Arxiv
        46. T. White, Sampling Generative Networks, NIPS 2016, Arxiv
        47. T. Karras et al, Progressive Growing of GANs for Improved Quality, Stability, and Variation, ICLR 2018, Arxiv
        48. Jun-Yan Zhu et al, Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, ICCV 2017 Arxiv
        49. Alireza Makhzani et al, Adversarial Autoencoders, NIPS 2016, Arxiv
        50. CJCH Watkins, P Dayan, Q-learning, Machine Learning, 1992, PDF
        51. Mnih et al, Human-level control through deep reinforcement learning, Nature, 2015, PDF
        52. van Hasselt et al, Deep Reinforcement Learning with Double Q-learning, AAAI, 2015, PDF
        53. Wang et al, Dueling Network Architectures for Deep Reinforcement Learning, ICML, 2016, PDF
        54. Schaul et al, Prioritized Experience Replay, ICLR, 2016, PDF
        55. Williams, Simple statistical gradient-following algorithms for connectionist reinforcement learning, Machine Learning, 1992, PDF
        56. Sutton et al, Policy gradient methods for reinforcement learning with function approximation, NIPS, 2000, PDF
        57. Peters & Schaal, Reinforcement learning of motor skills with policy gradients, Neural Networks, 2008, PDF
        58. Mnih et al, Asynchronous methods for deep reinforcement learning, ICLR, 2016, PDF
        59. Lillicrap et al., Continuous control with deep reinforcement learning, ICLR, 2016, PDF
        60. Schulman et al, Trust Region Policy Optimization, ICML, 2015, PDF
        61. Schulman et al, Proximal Policy Optimization Algorithms, Arxiv
        62. Kocsis and Szepesvari, Bandit based Monte-Carlo planning, ECML, 2006, PDF
        63. Gelly and Silver, Combining Online and Offline Knowledge in UCT, ICML, 2007, PDF
        64. Silver et al, Mastering the game of Go with deep neural networks and tree search, Nature, 2016, Online
        65. Silver et al, Mastering the game of Go without human knowledge, Nature, 2017, Online
        66. A. Micheli, Neural Network for Graphs: A Contextual Constructive Approach. IEEE TNN, 2009, Online
        67. Scarselli et al, The graph neural network model, IEEE TNN, 2009, Online
        68. Bacciu et al, A Gentle Introduction to Deep Learning for Graphs, Neural Networks, 2020, Arxiv