Section outline


    1. Scott Krig, Interest Point Detector and Feature Descriptor Survey, Computer Vision Metrics, pp. 217–282, Open Access Chapter
    2. Tinne Tuytelaars and Krystian Mikolajczyk, Local Invariant Feature Detectors: A Survey, Foundations and Trends in Computer Graphics and Vision, Vol. 3, No. 3 (2007) 177–280, Online Version
    3. C. Glymour, Kun Zhang and P. Spirtes, Review of Causal Discovery Methods Based on Graphical Models, Front. Genet., 2019, Online version
    4. Bacciu, D., Etchells, T. A., Lisboa, P. J., & Whittaker, J. (2013). Efficient identification of independence networks using mutual information. Computational Statistics, 28(2), 621-646, Online version
    5. Tsamardinos, I., Brown, L.E. & Aliferis, C.F. The max-min hill-climbing Bayesian network structure learning algorithm. Mach Learn 65, 31–78 (2006), Online version
    6. Lawrence R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 1989, pages 257-286, Online Version
    7. Charles Sutton and Andrew McCallum, An Introduction to Conditional Random Fields, Arxiv
    8. Sebastian Nowozin and Christoph H. Lampert, Structured Learning and Prediction, Foundations and Trends in Computer Graphics and Vision, Online Version
    9. Philipp Krähenbühl, Vladlen Koltun, Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials, Proc. of NIPS 2011, Arxiv
    10. D. Blei, A. Y. Ng, M. I. Jordan. Latent Dirichlet Allocation. Journal of Machine Learning Research, 2003
    11. D. Blei. Probabilistic topic models. Communications of the ACM, 55(4):77–84, 2012, Free Online Version
    12. G. Csurka, C. R. Dance, L. Fan, J. Willamowski, and C. Bray. Visual Categorization with Bags of Keypoints. Workshop on Statistical Learning in Computer Vision. ECCV 2004, Free Online Version
    13. W. M. Darling, A Theoretical and Practical Implementation Tutorial on Topic Modeling and Gibbs Sampling, Lecture notes
    14. Geoffrey Hinton, A Practical Guide to Training Restricted Boltzmann Machines, Technical Report 2010-003, University of Toronto, 2010
    15. Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard and L. D. Jackel. Handwritten digit recognition with a back-propagation network, Advances in Neural Information Processing Systems, NIPS, 1989
    16. A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems, NIPS, 2012
    17. K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition, ICLR 2015, Free Online Version
    18. C. Szegedy et al, Going Deeper with Convolutions, CVPR 2015, Free Online Version
    19. K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. CVPR 2016, Free Online Version
    20. V. Dumoulin, F. Visin, A guide to convolution arithmetic for deep learning, Arxiv
    21. S. Ioffe, C. Szegedy, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, ICML 2015, Arxiv
    22. M.D. Zeiler and R. Fergus, Visualizing and Understanding Convolutional Networks, ECCV 2014, Arxiv
    23. J. Adebayo et al, Sanity Checks for Saliency Maps, NeurIPS, 2018
    24. G.E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science 313.5786 (2006): 504-507, Free Online Version
    25. R. R. Salakhutdinov, G. E. Hinton. Deep Boltzmann Machines. AISTATS 2009, Free Online Version
    26. R. R. Salakhutdinov. Learning Deep Generative Models, Annual Review of Statistics and Its Application, 2015, Free Online Version
    27. Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 35(8) (2013): 1798-1828, Arxiv.
    28. G. Alain, Y. Bengio. What Regularized Auto-Encoders Learn from the Data-Generating Distribution, JMLR, 2014.
    29. Y. Bengio, P. Simard and P. Frasconi, Learning long-term dependencies with gradient descent is difficult. TNN, 1994, Free Online Version
    30. S. Hochreiter, J. Schmidhuber, Long short-term memory, Neural Computation, 1997, Free Online Version
    31. K. Greff et al, LSTM: A Search Space Odyssey, TNNLS 2016, Arxiv
    32. K. Cho et al, Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation, EMNLP 2014, Arxiv
    33. N. Srivastava et al, Dropout: A Simple Way to Prevent Neural Networks from Overfitting, JMLR 2014
    34. Bahdanau et al, Neural machine translation by jointly learning to align and translate, ICLR 2015, Arxiv
    35. Xu et al, Show, Attend and Tell: Neural Image Caption Generation with Visual Attention, ICML 2015, Arxiv
    36. A. Vaswani et al, Attention Is All You Need, NIPS 2017, Arxiv
    37. A. Dosovitskiy et al, An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, ICLR 2021
    38. A. van den Oord et al., Pixel Recurrent Neural Networks, 2016, Arxiv
    39. C. Doersch, A Tutorial on Variational Autoencoders, 2016, Arxiv
    40. Ian Goodfellow, NIPS 2016 Tutorial: Generative Adversarial Networks, 2016, Arxiv
    41. Arjovsky et al, Wasserstein GAN, 2017, Arxiv
    42. T. White, Sampling Generative Networks, NIPS 2016, Arxiv
    43. T. Karras et al, Progressive Growing of GANs for Improved Quality, Stability, and Variation, ICLR 2018, Arxiv
    44. Jun-Yan Zhu et al, Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, ICCV 2017 Arxiv
    45. Alireza Makhzani et al, Adversarial Autoencoders, NIPS 2016, Arxiv
    46. Ling Yang et al, Diffusion Models: A Comprehensive Survey of Methods and Applications, 2023, Arxiv
    47. Jascha Sohl-Dickstein et al, Deep Unsupervised Learning using Nonequilibrium Thermodynamics, ICML 2015, PDF
    48. Y. Song & S. Ermon, Generative Modeling by Estimating Gradients of the Data Distribution, NeurIPS 2019, PDF
    49. Jonathan Ho et al, Denoising Diffusion Probabilistic Models, NeurIPS 2020, Arxiv
    50. P. Dhariwal & A. Nichol, Diffusion Models Beat GANs on Image Synthesis, NeurIPS 2021, PDF 
    51. I. Kobyzev et al Normalizing Flows: An Introduction and Review of Current Methods, Arxiv
    52. L Dinh et al, Density Estimation using real NVP, ICLR 2017, PDF
    53. D. Kingma & P. Dhariwal, Glow: Generative flow with invertible 1x1 convolutions, NeurIPS 2018, PDF
    54. G. Papamakarios et al, Masked Autoregressive Flow for Density Estimation, NeurIPS 2017, PDF
    55. A. Micheli, Neural Network for Graphs: A Contextual Constructive Approach. IEEE TNN, 2009, Online
    56. Scarselli et al, The graph neural network model, IEEE TNN, 2009, Online
    57. Bacciu et al, A Gentle Introduction to Deep Learning for Graphs, Neural Networks, 2020, Arxiv
    58. Bacciu et al, Generalizing downsampling from regular data to graphs, AAAI, 2023, PDF
    59. Bacciu et al, Probabilistic Learning on Graphs via Contextual Architectures, 2020, JMLR
    60. Gravina et al, Anti-Symmetric DGN: a Stable Architecture for Deep Graph Networks, ICLR, 2023, Arxiv
    61. A. Gravina and D. Bacciu, Deep learning for dynamic graphs: models and benchmarks, 2024, TNNLS
    62. Numeroso et al, Dual Algorithmic Reasoning, ICLR, 2023, Arxiv
    63. CJCH Watkins, P Dayan, Q-learning, Machine Learning, 1992, PDF
    64. Mnih et al, Human-level control through deep reinforcement learning, Nature, 2015, PDF
    65. Sutton et al, Policy gradient methods for reinforcement learning with function approximation, NIPS, 2000, PDF
    66. Schulman et al, Trust Region Policy Optimization, ICML, 2015, PDF