Computational Mathematics for Learning and Data Analysis - A.Y. 2024/25
Lecture Recordings: Numerical Linear Algebra
Announcements
Analysis, Convexity, and Optimization (Pinkham, 2014)
0-information
1-linalg
2-orthogonality
3-intro-leastsquares
4-leastsquares-normal
5-CG
6-SVD
7-matrixnorm
8-lab-SVD
9-QR
10-leastsquares-QR
11-leastsquares-SVD
12-conditioning
13-conditioning-least-squares
14-stability
15-stability-and-residual
16-stability-least-squares
0 - Introduction to the course, motivation, mindset
1 - Simple optimization problems
2 - Univariate Optimization
3 - Unconstrained Multivariate Optimality and Convexity
4 - Smooth Unconstrained Optimization
Optimization & Learning Lecture Notes
2024-09-19: Recap of linear algebra: linear combinations, matrix products, coordinates
l1
2024-09-20: Orthogonality, eigenvectors, positive definiteness and semidefiniteness
l2
2024-09-27: Introduction to least squares problems. Some applications: linear estimation, polynomial fitting. Uniqueness of solution. Method of normal equations. Pseudoinverse.
l3
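To make the method of normal equations concrete, here is a minimal numpy sketch (my own illustrative code on a random test problem, not part of the course materials): it solves min ||Ax - b|| via A^T A x = A^T b and checks the result against the pseudoinverse solution.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))   # tall, full-column-rank: unique LS solution
b = rng.standard_normal(50)

# Method of normal equations: solve A^T A x = A^T b
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Pseudoinverse: x = A^+ b (same solution when A has full column rank)
x_pinv = np.linalg.pinv(A) @ b

print(np.allclose(x_normal, x_pinv))  # True
```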
2024-10-02: Singular value decomposition. Matrix norms. Eckart-Young theorem (statement)
l4
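A small numpy sketch of the Eckart-Young statement (my own illustration on random data): the best rank-k approximation in the spectral norm is the truncated SVD, and the error equals the (k+1)-th singular value.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Best rank-k approximation: keep the k largest singular values
k = 3
A_k = U[:, :k] * s[:k] @ Vt[:k, :]

# Spectral-norm error equals the (k+1)-th singular value
print(np.linalg.norm(A - A_k, 2), s[k])
```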
2024-10-04: Sparse matrices. Conjugate gradient: introduction, subspace optimality properties. Krylov subspaces and their relation to gradient-type methods.
l5
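A minimal sketch of CG on a sparse SPD system, using scipy's solver on a 1D discrete Laplacian (an illustrative test problem of my choosing, not the course's):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# Sparse SPD test matrix: 1D discrete Laplacian (tridiagonal)
n = 1000
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)

# CG touches A only through matrix-vector products,
# so A never needs to be formed densely
x, info = cg(A, b)
print(info, np.linalg.norm(A @ x - b))  # info == 0 means converged
```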
2024-10-16: Q-norm; orthogonality and convergence properties of CG (convergence in terms of polynomial approximation, worst-case bound, both without proof)
l6
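For reference, the worst-case bound mentioned here (my own transcription of the standard statement, in terms of the condition number of A):

```latex
\[
\frac{\|x_k - x_*\|_A}{\|x_0 - x_*\|_A}
\;\le\; 2\left(\frac{\sqrt{\kappa(A)} - 1}{\sqrt{\kappa(A)} + 1}\right)^{k},
\qquad
\kappa(A) = \frac{\lambda_{\max}(A)}{\lambda_{\min}(A)},
\]
% where \|v\|_A = \sqrt{v^T A v} is the A-norm
% (the Q-norm of this lecture, with Q = A)
```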
2024-10-18: Convergence of CG and polynomial approximation. Data analysis with the SVD: images, student scores, text (latent semantic analysis)
l7
2024-10-25: Dimensionality reduction with PCA; PCA of the Yale faces dataset, used also for image recognition
l8
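A compact numpy sketch of PCA-based dimensionality reduction (random data standing in for the Yale faces; shapes and k are illustrative choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))   # 100 samples, 20 features

# PCA via the SVD of the centered data matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Project onto the first k principal components
k = 5
scores = Xc @ Vt[:k].T                        # reduced representation, (100, k)
X_approx = scores @ Vt[:k] + X.mean(axis=0)   # rank-k reconstruction
print(scores.shape, np.linalg.norm(X - X_approx))
```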
2024-10-30: Householder reflectors. QR factorization. Different ways to handle Q: thin QR, returning the Householder vectors.
l9
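A sketch of QR via Householder reflectors (my own didactic implementation: it accumulates Q explicitly, whereas in practice one would return the Householder vectors, as discussed in the lecture):

```python
import numpy as np

def householder_qr(A):
    """Thin QR via Householder reflectors, accumulating Q explicitly."""
    m, n = A.shape
    R = A.astype(float)
    Q = np.eye(m)
    for k in range(n):
        x = R[k:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])  # sign choice avoids cancellation
        v /= np.linalg.norm(v)
        # Apply H = I - 2 v v^T to the trailing block of R, and to Q from the right
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)
    return Q[:, :n], R[:n, :]

A = np.random.default_rng(0).standard_normal((6, 3))
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(3)))
```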
2024-11-08: Solving least-squares problems with the QR factorization and with the SVD. Singular least squares problems. The effect of noise; regularization via truncated SVD.
l10
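A numpy sketch contrasting the two solution methods above (illustrative random problem; the truncation tolerance is an arbitrary choice of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
b = rng.standard_normal(30)

# Least squares via thin QR: solve R x = Q^T b
Q, R = np.linalg.qr(A)                    # reduced (thin) QR by default
x_qr = np.linalg.solve(R, Q.T @ b)

# Least squares via SVD, truncating tiny singular values
# (regularization for near-singular / noisy problems)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = 1e-10 * s[0]
s_inv = np.where(s > tol, 1.0 / s, 0.0)   # drop directions with sigma_i <= tol
x_svd = Vt.T @ (s_inv * (U.T @ b))

print(np.allclose(x_qr, x_svd))           # identical on a well-conditioned problem
```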
2024-11-13: Tikhonov regularization. Condition number. The condition number of solving linear equations and least-squares problems.
l11
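A minimal sketch of Tikhonov regularization (the value of the parameter lam is hypothetical), showing the normal-equations form next to the equivalent, better-conditioned augmented least-squares form:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
b = rng.standard_normal(30)
lam = 1e-2   # regularization parameter (illustrative value)

# Tikhonov / ridge: min ||Ax - b||^2 + lam^2 ||x||^2
# Normal-equations form: (A^T A + lam^2 I) x = A^T b
n = A.shape[1]
x_tik = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# Equivalent augmented least-squares formulation:
#   min || [A; lam*I] x - [b; 0] ||
A_aug = np.vstack([A, lam * np.eye(n)])
b_aug = np.concatenate([b, np.zeros(n)])
x_aug, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
print(np.allclose(x_tik, x_aug))
```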
2024-11-15: Stability of floating-point computations. Backward stability, with an example. A posteriori stability tests for linear systems and LS problems.
l12
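A sketch of an a posteriori stability test for a linear system (illustrative random problem): a backward stable solver should produce a relative residual that is a modest multiple of machine epsilon.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
b = rng.standard_normal(100)
x = np.linalg.solve(A, b)

# The computed x is the exact solution of a nearby system when the
# relative residual is a small multiple of machine epsilon
rel_res = np.linalg.norm(b - A @ x) / (np.linalg.norm(A, 2) * np.linalg.norm(x))
print(rel_res, rel_res / np.finfo(float).eps)   # expect a modest ratio
```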
Lecture 1.1 - introduction to the course
Lecture 1.2 - motivation for the course: four examples
Lecture 2.1: general notions of optimization
Lecture 2.2: starting very very easy and very slowly ramping up
Lecture 3.1: multivariate optimization: initial concepts, easy functions
Lecture 3.2: "real" quadratic functions and how they work
Lecture 4.1: quadratic optimization: from optimality conditions to the gradient method
Lecture 4.2: the gradient method for quadratic functions, practice
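A minimal sketch of the gradient method for quadratic functions with the exact stepsize (my own illustrative code; the test problem is arbitrary):

```python
import numpy as np

def gradient_method_quadratic(Q, q, x0, tol=1e-8, max_iter=10_000):
    """Gradient method for f(x) = 0.5 x^T Q x + q^T x, Q symmetric PD,
    with the exact stepsize along -g: alpha = g^T g / (g^T Q g)."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = Q @ x + q                     # gradient of f
        if np.linalg.norm(g) <= tol:
            break
        alpha = (g @ g) / (g @ Q @ g)     # exact line search on a quadratic
        x -= alpha * g
    return x

Q = np.array([[3.0, 1.0], [1.0, 2.0]])    # symmetric positive definite
q = np.array([-1.0, 1.0])
x = gradient_method_quadratic(Q, q, np.zeros(2))
print(x, np.linalg.norm(Q @ x + q))       # should match -Q^{-1} q
```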
Lecture 5.1: convergence rates: from the gradient method to the world
Lecture 5.2: sublinear convergence and where this leads us
Lecture 6.1: optimizing more general functions, but univariate ones
Lecture 6.2: first steps with local optimization: the role of derivatives
Lecture 7.1: dichotomic search, from naive to model-based
Lecture 7.2: faster local optimization and the role of models
Lecture 8.1: closing thoughts on univariate optimization, a fleeting glimpse at the global case
Lecture 8.2: theory of gradients and Hessians towards optimality conditions
Lecture 9.1: local first- and second-order optimality conditions (necessary and sufficient), convexity in ℝ^n
Lecture 10.1: the gradient method with "exact" line search
Lecture 10.2: inexact line search, the Armijo-Wolfe conditions
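A sketch of the sufficient-decrease (Armijo) half of the Armijo-Wolfe conditions, implemented by backtracking (my own illustrative code; the parameter values m1 and tau are conventional defaults, not the course's):

```python
import numpy as np

def armijo_backtracking(f, grad, x, d, alpha0=1.0, m1=1e-4, tau=0.5):
    """Backtrack until the Armijo condition holds:
    f(x + alpha d) <= f(x) + m1 * alpha * grad(x)^T d."""
    fx, slope = f(x), grad(x) @ d         # slope < 0 for a descent direction
    alpha = alpha0
    while f(x + alpha * d) > fx + m1 * alpha * slope:
        alpha *= tau                      # shrink until sufficient decrease holds
    return alpha

# Tiny demo on f(x) = x1^2 + 10 x2^2, stepping along the negative gradient
f = lambda x: x[0]**2 + 10 * x[1]**2
grad = lambda x: np.array([2 * x[0], 20 * x[1]])
x = np.array([1.0, 1.0])
d = -grad(x)
alpha = armijo_backtracking(f, grad, x, d)
print(alpha, f(x + alpha * d) < f(x))
```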
Lecture 11.1: convergence with the A-W LS, theory
Lecture 11.2: the A-W LS in practice
Lecture 12.1: "extremely inexact LS": fixed stepsize
Lecture 12.2: gradient twisting approaches at their best: Newton's method
Lecture 13.1: all around Newton's method
Lecture 13.2: towards the very-large-scale, quasi-Newton methods
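A minimal sketch of pure Newton's method for multivariate minimization (my own illustrative code, tested on the Rosenbrock function; no line search or globalization, so only local convergence is guaranteed):

```python
import numpy as np

def newton(grad, hess, x0, tol=1e-10, max_iter=50):
    """Pure Newton's method: solve H(x) d = -g(x), step x <- x + d."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        d = np.linalg.solve(hess(x), -g)   # Newton direction
        x = x + d
    return x

# Demo on the Rosenbrock function, whose minimizer is (1, 1)
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400*x[1] + 1200*x[0]**2, -400*x[0]],
                           [-400*x[0], 200.0]])
print(newton(grad, hess, np.array([-1.2, 1.0])))
```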
Lecture 14.1: deflected gradient methods I - Conjugate Gradient
Lecture 14.2: deflected gradient methods II - Heavy Ball
NBA salaries dataset
Yale "eigenfaces" dataset + supporting scripts
A small utility to generate "interesting" quadratic functions
A small utility to plot a quadratic function
A small utility to compute the optimal value of a quadratic function
The Gradient Method for Quadratic Functions
Some one-dimensional test functions
The Dichotomic Search Method for univariate optimization
Newton's Method for univariate optimization
Some multivariate test functions
The Steepest Descent (Gradient) method for general nonlinear functions
Newton's Method
The BFGS quasi-Newton Method
The Nonlinear Conjugate Gradient Method(s)
The Heavy Ball Method
Information about didactic projects
Project tracks