Computational Mathematics for Learning and Data Analysis - AA 2024/25
Lecture Recordings: Numerical Linear Algebra
l11
Announcements
Analysis, Convexity, and Optimization (Pinkham, 2014)
0-information
1-linalg
2-orthogonality
3-intro-leastsquares
4-leastsquares-normal
5-CG
6-SVD
7-matrixnorm
8-lab-SVD
9-QR
10-leastsquares-QR
11-leastsquares-SVD
12-conditioning
13-conditioning-least-squares
14-stability
15-stability-and-residual
16-stability-least-squares
17-arnoldi
18-GMRES
19-lu
20-chol
21-largescale-examples
0 - Introduction to the course, motivation, mindset
1 - Simple optimization problems
2 - Univariate Optimization
3 - Unconstrained Multivariate Optimality and Convexity
4 - Smooth Unconstrained Optimization
5 - Nonsmooth Unconstrained Optimization
6 - Constrained Optimality and Duality
7 - Constrained Optimization
Optimization & Learning Lecture Notes
2024-09-19: Recap of linear algebra: linear combinations, matrix products, coordinates
l1
2024-09-20: Orthogonality, eigenvectors, positive definiteness and semidefiniteness
l2
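A minimal NumPy sketch of the last point above (illustrative only, not course material; the test matrix is made up): a symmetric matrix is positive definite exactly when all its eigenvalues are positive, or equivalently when it admits a Cholesky factorization.

```python
import numpy as np

# Hypothetical 2x2 symmetric test matrix.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# Positive definite <=> all eigenvalues positive.
print("eigenvalues:", np.linalg.eigvalsh(A))

# Equivalently: the Cholesky factorization exists only for PD matrices.
try:
    L = np.linalg.cholesky(A)
    print("A is positive definite")
except np.linalg.LinAlgError:
    print("A is not positive definite")
```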
2024-09-27: Introduction to least squares problems. Some applications: linear estimation, polynomial fitting. Uniqueness of solution. Method of normal equations. Pseudoinverse.
l3
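To illustrate the entry above, a hedged NumPy sketch (not course material; the random test problem is made up) comparing the method of normal equations, the pseudoinverse, and a library least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3))   # tall, full-rank: unique LS solution
b = rng.standard_normal(100)

# Method of normal equations: solve (A^T A) x = A^T b.
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# Pseudoinverse: x = A^+ b (pinv computes A^+ via the SVD).
x_pinv = np.linalg.pinv(A) @ b

# Reference solution from NumPy's least-squares solver.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_ne, x_ls), np.allclose(x_pinv, x_ls))  # True True
```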
2024-10-02: Singular value decomposition. Matrix norms. Eckart-Young theorem (statement)
l4
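A small sketch of the Eckart-Young statement (illustrative only; random test matrix): truncating the SVD to k terms gives the best rank-k approximation, with spectral-norm error equal to the (k+1)-st singular value.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 5
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]     # best rank-k approximation

# Eckart-Young: ||A - A_k||_2 equals the (k+1)-st singular value.
print(np.linalg.norm(A - A_k, 2), s[k])  # the two values coincide
```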
2024-10-04: Sparse matrices. Conjugate gradient: introduction, subspace optimality properties. Krylov subspaces and their relation to gradient-type methods.
l5
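For reference, a textbook conjugate-gradient loop (a sketch assuming A is symmetric positive definite; not the course's own code):

```python
import numpy as np

def cg(A, b, tol=1e-10, maxit=1000):
    # Conjugate gradient for symmetric positive definite A.
    x = np.zeros_like(b)
    r = b - A @ x                  # residual = -gradient of the quadratic
    p = r.copy()                   # first search direction
    rr = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rr / (p @ Ap)      # exact minimizer along p
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p  # next A-conjugate direction
        rr = rr_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)      # made-up SPD test matrix
b = rng.standard_normal(50)
print(np.allclose(cg(A, b), np.linalg.solve(A, b)))  # True
```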
2024-10-16: Q-norm; orthogonality and convergence properties of CG (convergence in terms of polynomial approximation, worst-case bound, both without proof)
l6
2024-10-18: Convergence of CG and polynomial approximation. Data analysis with the SVD: images, student scores, text (latent semantic analysis)
l7
2024-10-25: Dimensionality reduction with PCA; PCA of the Yale faces dataset, also used for image recognition
l8
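A minimal PCA-via-SVD sketch (random data stands in for the Yale faces here; this is not one of the course scripts):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))   # 200 samples, 10 features

Xc = X - X.mean(axis=0)              # center the data first
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2
Z = Xc @ Vt[:k].T                    # coordinates on the first k principal components
explained = s[:k]**2 / np.sum(s**2)  # fraction of variance captured by each PC
print(Z.shape, explained)
```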
2024-10-30: Householder reflectors. QR factorization. Different ways to handle Q: thin QR, returning the Householder vectors.
l9
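A bare-bones thin QR via Householder reflectors (a sketch assuming a full-rank input; library code handles the degenerate cases this skips):

```python
import numpy as np

def householder_qr(A):
    m, n = A.shape
    R = A.astype(float)
    Q = np.eye(m)
    for k in range(n):
        x = R[k:, k].copy()
        x[0] += np.copysign(np.linalg.norm(x), x[0])  # sign choice avoids cancellation
        v = x / np.linalg.norm(x)                     # Householder vector
        # Apply H = I - 2 v v^T to the trailing block of R, accumulate into Q.
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)
    return Q[:, :n], R[:n, :]                         # thin QR

A = np.random.default_rng(0).standard_normal((6, 4))
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(4)))  # True True
```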
2024-11-08: Solving least-squares problems with the QR factorization and with the SVD. Singular least squares problems. The effect of noise; regularization via truncated SVD.
l10
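The two solution methods side by side, plus the truncated-SVD idea, in a short hedged sketch (random full-rank test problem, so the truncation changes nothing here):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
b = rng.standard_normal(100)

# Via thin QR: min ||Ax - b|| is solved by R x = Q^T b.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

# Via the SVD; zeroing the terms with small sigma_i gives the
# truncated-SVD regularized solution.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
tau = 1e-10                          # truncation threshold (problem-dependent)
c = U.T @ b
x_svd = Vt.T @ np.where(s > tau, c / s, 0.0)

print(np.allclose(x_qr, x_svd))      # True on this well-conditioned example
```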
2024-11-13: Tikhonov regularization. Condition number. The condition number of solving linear equations and least-squares problems.
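A sketch of Tikhonov regularization in its two equivalent forms (made-up test problem; lambda chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
b = rng.standard_normal(100)
lam = 0.1                                  # regularization parameter

# min ||Ax - b||^2 + lam^2 ||x||^2, written as plain least squares
# on the augmented matrix [A; lam*I] and right-hand side [b; 0].
A_aug = np.vstack([A, lam * np.eye(5)])
b_aug = np.concatenate([b, np.zeros(5)])
x_aug, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)

# Equivalent closed form: (A^T A + lam^2 I) x = A^T b.
x_ne = np.linalg.solve(A.T @ A + lam**2 * np.eye(5), A.T @ b)

print(np.allclose(x_aug, x_ne))            # True
```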
2024-11-15: Stability of floating-point computations. Backward stability, with an example. A posteriori stability tests for linear systems and LS problems.
l12
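One common a posteriori test, sketched here for a linear system (norm choices vary; this version uses the Frobenius norm for A):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)
x = np.linalg.solve(A, b)

# Normwise backward-error test: the computed x is the exact solution of a
# nearby system when ||b - Ax|| / (||A|| ||x||) is of the order of machine
# epsilon (~2.2e-16 in double precision).
rel_res = np.linalg.norm(b - A @ x) / (np.linalg.norm(A) * np.linalg.norm(x))
print(rel_res, np.finfo(float).eps)
```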
2024-11-22: Backward stability of the QR factorization. Backward stability properties of algorithms to solve LS problems and their comparison. Introduction to the Arnoldi algorithm.
l13
2024-11-27: Arnoldi algorithm, GMRES. Convergence, computational and implementation remarks. The symmetric version: MINRES.
l14
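A compact Arnoldi iteration (a sketch without breakdown handling; GMRES then minimizes ||beta*e1 - H*y|| over the Krylov basis built here):

```python
import numpy as np

def arnoldi(A, b, k):
    # k steps of Arnoldi: builds Q, H with A @ Q[:, :k] = Q @ H.
    n = len(b)
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):             # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        Q[:, j + 1] = w / H[j + 1, j]      # would break down if this norm were 0
    return Q, H

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 30))
b = rng.standard_normal(30)
Q, H = arnoldi(A, b, 10)
print(np.allclose(A @ Q[:, :10], Q @ H))   # the Arnoldi relation holds
```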
2024-12-06: Overview of direct methods for linear systems: LU, LDL, Cholesky, remarks about sparsity
l15
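Standard library calls for the two factorizations (SciPy's interfaces, on random test matrices):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, cho_factor, cho_solve

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)

# General square system: LU with partial pivoting (PA = LU).
lu, piv = lu_factor(A)
x = lu_solve((lu, piv), b)

# Symmetric positive definite system: Cholesky (S = L L^T), about half the flops.
S = A @ A.T + 50 * np.eye(50)
c, low = cho_factor(S)
y = cho_solve((c, low), b)

print(np.allclose(A @ x, b), np.allclose(S @ y, b))  # True True
```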
2024-12-16: Examples of solutions of large-scale linear systems. Reordering in direct methods. A quick introduction to preconditioners for iterative methods.
l16
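A small SciPy example of the last idea, using an incomplete LU factorization as a preconditioner for GMRES on a sparse 1D Poisson matrix (my own toy problem, not the lecture's example):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")  # 1D Poisson
b = np.ones(n)

# Incomplete LU gives a cheap approximate inverse, wrapped as a LinearOperator.
ilu = spla.spilu(A)
M = spla.LinearOperator((n, n), ilu.solve)

x, info = spla.gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))     # info == 0 means convergence
```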
Lecture 1.1 - introduction to the course
Lecture 1.2 - motivation for the course: four examples
Lecture 2.1: general notions of optimization
Lecture 2.2: starting very very easy and very slowly ramping up
Lecture 3.1: multivariate optimization: initial concepts, easy functions
Lecture 3.2: "real" quadratic functions and how they work
Lecture 4.1: quadratic optimization: from optimality conditions to the gradient method
Lecture 4.2: the gradient method for quadratic functions, practice
Lecture 5.1: convergence rates: from the gradient method to the world
Lecture 5.2: sublinear convergence and where this leads us
Lecture 6.1: optimizing more general functions, but univariate ones
Lecture 6.2: first steps with local optimization: the role of derivatives
Lecture 7.1: dichotomic search, from naive to model-based
Lecture 7.2: faster local optimization and the role of models
Lecture 8.1: closing thoughts on univariate optimization, a fleeting glimpse of the global case
Lecture 8.2: theory of gradients and Hessians towards optimality conditions
Lecture 9.1: local first- and second-order optimality conditions (necessary and sufficient), convexity in ℝ^n
Lecture 10.1: the gradient method with "exact" line search
Lecture 10.2: inexact line search, the Armijo-Wolfe conditions
Lecture 11.1: convergence with the A-W LS, theory
Lecture 11.2: the A-W LS in practice
Lecture 12.1: "extremely inexact LS": fixed stepsize
Lecture 12.2: gradient twisting approaches at their best: Newton's method
Lecture 13.1: all around Newton's method
Lecture 13.2: towards the very-large-scale, quasi-Newton methods
Lecture 14.1: deflected gradient methods I - Conjugate Gradient
Lecture 14.2: deflected gradient methods II - Heavy Ball
Lecture 15.1: the scary world of nondifferentiable optimization
Lecture 15.2: (convex) nondifferentiable optimization, converging against all odds
Lecture 16.1: better nondifferentiable approaches, as far as they can go
Lecture 16.2: first steps on constrained optimization
Lecture 17.1: algebraic representation of feasible sets, i.e., constraints
Lecture 17.2: from the KKT conditions to duality
Lecture 18.1: first step in constrained optimization
Lecture 18.2: more (projected gradient) steps in constrained optimization
Lecture 19.1: from Frank-Wolfe to the dual method
Lecture 19.2: ending with a bang: the (primal-dual) interior-point method
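For the line-search lectures above (10.1 through 12.1), a minimal steepest-descent loop with Armijo backtracking (a sketch on a made-up convex quadratic; m1 and tau are the usual Armijo constants):

```python
import numpy as np

def gd_armijo(f, grad, x0, m1=1e-4, tau=0.5, tol=1e-8, maxit=10000):
    x = x0.astype(float)
    for _ in range(maxit):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        alpha = 1.0
        # Backtrack until the Armijo sufficient-decrease condition holds:
        #   f(x - alpha g) <= f(x) - m1 * alpha * ||g||^2
        while f(x - alpha * g) > f(x) - m1 * alpha * (g @ g):
            alpha *= tau
        x = x - alpha * g
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])     # SPD: f is strongly convex
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
print(np.allclose(gd_armijo(f, grad, np.zeros(2)), np.linalg.solve(A, b)))  # True
```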
NBA salaries dataset
Yale "eigenfaces" dataset + supporting scripts
A small utility to generate "interesting" quadratic functions
A small utility to plot a quadratic function
A small utility to compute the optimal value of a quadratic function
The Gradient Method for Quadratic Functions
Some one-dimensional test functions
The Dichotomic Search Method for univariate optimization
Newton's Method for univariate optimization
Some multivariate test functions
The Steepest Descent (Gradient) method for general nonlinear functions
Newton's Method
The BFGS quasi-Newton Method
The Nonlinear Conjugate Gradient Method(s)
The Heavy Ball Method
The Subgradient Method
The Proximal Bundle method
A small utility to create box-constrained quadratic programs
An off-the-shelf solver for box-constrained quadratic programs
The Active-Set Method for box-constrained quadratic programs
The Projected Gradient Method for box-constrained quadratic programs
Frank-Wolfe's Method for box-constrained quadratic programs
The Dual Method for box-constrained quadratic programs
The Primal-Dual Interior-Point Method for box-constrained quadratic programs
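To give the flavor of the box-constrained QP solvers listed above, a hedged projected-gradient sketch with a fixed 1/L stepsize (random SPD test data; the actual course scripts differ):

```python
import numpy as np

def projected_gradient_box_qp(Q, q, l, u, maxit=5000, tol=1e-8):
    # min 0.5 x^T Q x + q^T x  s.t.  l <= x <= u, with Q SPD.
    L = np.linalg.eigvalsh(Q)[-1]          # Lipschitz constant of the gradient
    x = np.clip(np.zeros_like(q), l, u)
    for _ in range(maxit):
        g = Q @ x + q                      # gradient of the objective
        x_new = np.clip(x - g / L, l, u)   # gradient step, then projection on the box
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
Q = M @ M.T + np.eye(5)                    # made-up SPD Hessian
q = rng.standard_normal(5)
print(projected_gradient_box_qp(Q, q, -np.ones(5), np.ones(5)))
```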
Information about didactic projects
Project tracks