Section outline
These lecture notes are still partial and under active preparation. Reload often. Reports of errors, typos, omissions and suggestions for improvement are highly welcome.
Part I: A Gentle Introduction
1 Simple Optimization Problems
1.2 (Outrageously) Simple (Univariate) Optimization
1.3 (Not always) Simple Multivariate Optimization
1.4 Multivariate Quadratic Optimization: Gradient Method
1.5 The Conjugate Gradient Method
1.6 Multivariate Quadratic Optimization: A Direct Method
1.7 Ex-post motivation: Polynomial Interpolation
1.8 Wrapup
1.9 Solutions
Part II: Unconstrained Optimization
2 Univariate Optimization
2.1 General Univariate Optimization Problems
2.2 Lipschitz (Global) Optimization
2.3 Local optimization
2.4 First local optimization algorithms
2.5 Towards faster local optimization algorithms
2.6 Dichotomic Search
2.7 Newton's method
2.8 A Fleeting Glimpse at Global Optimization
2.9 Wrapup
2.10 Solutions
3 Unconstrained Multivariate Optimality and Convexity
3.1 Unconstrained Multivariate Optimization
3.2 Gradients, Jacobians, and Hessians
3.3 Optimality conditions
3.4 A Quick Look at Convex Functions
3.5 Ex-post Motivation: (Artificial, Deep) Neural Networks
3.6 Solutions
4 Smooth Unconstrained Optimization
5 Nonsmooth Unconstrained Optimization
Part III: Constrained Optimization
6 Constrained Optimality and Duality
7 Constrained Optimization
Part IV: Combinatorial Optimization
8 A Fleeting Glimpse at Combinatorial Optimization
Part V: Supplementary Material
References
A Miscellaneous Mathematical Background
A.1 Infima, suprema and R
A.2 Vector space, scalar product
A.3 Matrices, transpose, symmetry, products
A.4 Eigenvalues and the determinant, in practice
A.5 Limits and optimization
A.6 Continuity
A.7 (Univariate) Derivatives
A.8 Topology and limits in Rⁿ
A.9 Gradients, Jacobians, and Hessians
A.10 Topology and feasibility