Lecture 7.2: faster local optimization and the role of models
The quadratic model can be wrong, hence it should never be trusted unconditionally: "regularization" = safeguarding. Hands-on with the complete version of Dichotomic Search (quadratic interpolation + safeguarding), which really is faster. Theory of safeguarded Dichotomic Search with quadratic and cubic interpolation: the more information one uses, the faster the convergence. The fastest (univariate) optimization requires second derivatives: Newton's Method, the (very) good and the (rather) bad. Hands-on with Newton's Method: the convergence behaviour depends on the starting point, but it is always fast "in the tail". The role of the second-order model, which can be very bad (but when it is good, it is very good). A quick look at the convergence proof of Newton's Method, where the \delta constant comes from, and why it cannot really be measured in practice.
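To make the "interpolation + safeguarding" idea concrete, here is a minimal Python sketch of a safeguarded Dichotomic Search on the derivative. The function name, the parameter names (eps, sfgrd, max_iter) and their default values are illustrative assumptions, not the course code: the point is only that the tentative point comes from the quadratic model (the zero of the linear interpolation of phi' on the two endpoints), and the safeguard keeps it away from the endpoints so that the interval shrinks even when the model is wrong.

    def dichotomic_search(dphi, a, b, eps=1e-8, sfgrd=0.01, max_iter=100):
        """Safeguarded Dichotomic Search on [a, b], assuming dphi(a) < 0 < dphi(b)
        (so the interval brackets a minimizer of phi).  Illustrative sketch."""
        da, db = dphi(a), dphi(b)
        for _ in range(max_iter):
            if b - a <= eps:
                break
            # quadratic interpolation: zero of the secant of dphi on [a, b]
            x = (a * db - b * da) / (db - da)
            # safeguarding: never accept a point too close to either endpoint
            lo = a + sfgrd * (b - a)
            hi = b - sfgrd * (b - a)
            x = min(max(x, lo), hi)
            dx = dphi(x)
            if dx == 0:
                return x
            if dx < 0:        # minimizer lies to the right of x
                a, da = x, dx
            else:             # minimizer lies to the left of x
                b, db = x, dx
        return (a + b) / 2

For instance, with phi(x) = exp(x) - 2x one has dphi(x) = exp(x) - 2, dphi(0) < 0 < dphi(2), and dichotomic_search(lambda x: math.exp(x) - 2, 0, 2) converges to ln 2 in a handful of iterations, noticeably fewer than plain bisection on the same interval.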
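For the Newton's Method part, a similarly hedged sketch (again, names and tolerances are assumptions made here for illustration) shows both the "(very) good" and the "(rather) bad": each step minimizes the second-order model exactly, which gives quadratic convergence near a minimizer with phi'' > 0, but nothing protects the iterates when the model is bad.

    def newton_univariate(dphi, d2phi, x0, eps=1e-12, max_iter=50):
        """Pure Newton's Method for a stationary point of phi: at each step the
        quadratic model built from dphi(x) and d2phi(x) is minimized exactly,
        i.e. x <- x - dphi(x) / d2phi(x).  Illustrative sketch."""
        x = x0
        for k in range(max_iter):
            g, h = dphi(x), d2phi(x)
            if abs(g) <= eps:
                return x, k       # (approximately) stationary point found
            if h <= 0:
                raise ValueError("second-order model is not convex at x = %g" % x)
            x = x - g / h         # minimize the second-order model exactly
        return x, max_iter

On phi(x) = exp(x) - 2x, starting from x0 = 0, the iterates reach ln 2 to machine precision in a few steps (fast "in the tail"); on phi(x) = sqrt(1 + x^2), the Newton step reduces to x <- -x^3, so the method converges for |x0| < 1 and diverges for |x0| > 1, illustrating how strongly the behaviour depends on the starting point.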