Lecture 12.1: "Extremely Inexact LS": Fixed Stepsize
"Extremely inexact LS": Fixed Stepsize. Analysis of the approach and the necessary prerequisites (L-smooth, \tau-convex). Convergence and efficiency results. MATLAB implementation of the gradient method for quadratic functions with fixed stepsize. Running the code with on different test instances, observing the properties: the "exact" optimal fixed stepsize does work, but straying even a bit from that worsens things considerably, and a factor of two will kill you. Take away: fixed stepsize possible, possibly not too bad, but not ideal.