Projects without a Machine Learning part
 Given a symmetric, positive definite matrix $A \in \mathbb{R}^{n\times n}$, consider the problem
$$\min \frac{x^T A x}{x^T x} \,\colon\, x \in \mathbb{R}^n \setminus \{0\}$$
What is the exact solution of this problem? Compute the minimum using:
- a standard gradient descent (steepest descent) approach;
- a conjugate gradient algorithm [reference: J. Nocedal, S. Wright, Numerical Optimization],
using the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly punished. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
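For orientation (this sketch is illustrative only, not a substitute for your own implementation): by the Courant-Fischer theorem the exact minimum is the smallest eigenvalue $\lambda_{\min}(A)$, attained at any corresponding eigenvector, and the gradient of the Rayleigh quotient is $\nabla f(x) = \frac{2}{x^T x}\,(Ax - f(x)\,x)$. A minimal steepest-descent loop with Armijo backtracking might look as follows (all function names are ours, not prescribed):

```python
import numpy as np

def rayleigh(A, x):
    return (x @ A @ x) / (x @ x)

def rq_gradient(A, x):
    # grad f(x) = 2/(x'x) * (A x - f(x) x); note it is orthogonal to x
    return 2.0 / (x @ x) * (A @ x - rayleigh(A, x) * x)

def steepest_descent(A, x0, tol=1e-8, max_iter=5000):
    x = x0 / np.linalg.norm(x0)
    for _ in range(max_iter):
        g = rq_gradient(A, x)
        if np.linalg.norm(g) < tol:
            break
        # backtracking (Armijo) line search along -g
        t, fx, gg = 1.0, rayleigh(A, x), g @ g
        while rayleigh(A, x - t * g) > fx - 1e-4 * t * gg:
            t *= 0.5
        x = x - t * g
        x /= np.linalg.norm(x)   # the objective is scale-invariant
    return rayleigh(A, x), x
```

Renormalizing each iterate costs nothing (the objective is invariant to scaling) and keeps the iterates well-conditioned.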
Required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the problem to be solved

the implemented solution method, with a discussion of all relevant details

a summary of the known theoretical convergence results for the approach and a discussion about whether or not they apply to the problem at hand and why

a description of how a reasonable test set of instances has been generated/gathered from the internet, where the instances must cover a reasonable span of sizes and possibly other characteristics (for instance, dense and sparse matrices)

the description of experiments aimed at finding the best algorithmic parameters, if any, for solving the problems at hand

the description of the behavior of the approach on the test instances with the best algorithmic parameters, if any, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time);

a comparison with the efficiency and effectiveness of available off-the-shelf tools (for instance, eig or eigs, depending on the matrix, and/or the inverse power method), factoring in elements like the difference of programming language if necessary
optionally, some analysis of what the performed experiments suggest in terms of viability of the approach (whether it can ever be competitive, and for what kind of instances) and possible ways to improve the performance


The source code of the implemented approach, comprising any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheet/databases/text files

Given a symmetric, positive definite matrix $A \in \mathbb{R}^{n\times n}$, consider the problem
$$\min \frac{x^T A x}{x^T x} \,\colon\, x \in \mathbb{R}^n \setminus \{0\}$$
What is the exact solution of this problem? Compute the minimum using:
- a standard gradient descent (steepest descent) approach;
- a quasi-Newton method such as BFGS (one which does not require any approximation of the Hessian of the function) [reference: J. Nocedal, S. Wright, Numerical Optimization],
using the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly punished. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
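For the quasi-Newton variant, the only change with respect to steepest descent is the search direction $p = -H\nabla f(x)$, where $H$ is the inverse-Hessian approximation maintained by the BFGS update (Nocedal & Wright, Ch. 6). A hedged, purely illustrative sketch (names and tolerances are ours):

```python
import numpy as np

def bfgs_rayleigh(A, x0, tol=1e-8, max_iter=500):
    """BFGS on f(x) = x'Ax / x'x, maintaining the inverse-Hessian estimate H."""
    n = len(x0)
    x = x0 / np.linalg.norm(x0)
    f = lambda v: (v @ A @ v) / (v @ v)
    grad = lambda v: 2.0 / (v @ v) * (A @ v - f(v) * v)
    H = np.eye(n)
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g
        # backtracking Armijo line search
        t, fx, gp = 1.0, f(x), g @ p
        while f(x + t * p) > fx + 1e-4 * t * gp:
            t *= 0.5
        s = t * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:            # curvature condition; skip the update otherwise
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return f(x), x
```

Skipping the update when the curvature condition $s^T y > 0$ fails keeps $H$ positive definite, so $p$ is always a descent direction.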
Required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the problem to be solved

the implemented solution method, with a discussion of all relevant details

a summary of the known theoretical convergence results for the approach and a discussion about whether or not they apply to the problem at hand and why

a description of how a reasonable test set of instances has been generated/gathered from the internet, where the instances must cover a reasonable span of sizes and possibly other characteristics (for instance, dense and sparse matrices)

the description of experiments aimed at finding the best algorithmic parameters, if any, for solving the problems at hand

the description of the behavior of the approach on the test instances with the best algorithmic parameters, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time);

a comparison with the efficiency and effectiveness of available off-the-shelf tools (for instance, eig or eigs, depending on the matrix, and/or the inverse power method), factoring in elements like the difference of programming language if necessary
optionally, some analysis of what the performed experiments suggest in terms of viability of the approach (whether it can ever be competitive, and for what kind of instances) and possible ways to improve the performance


The source code of the implemented approach, comprising any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheet/databases/text files
 Consider the convex quadratic program on a set of $k$ disjoint simplices:
$$ \min \{\, x^T Q x + q^T x \,\colon\, x \geq 0,\ \sum_{i \in I^h} x_i = 1 \quad h = 1, \dots, k \,\} $$
where $x \in \mathbb{R}^n$, the index sets $I^h$, $h = 1, \dots, k$ form a partition of the set $\{1, \dots, n\}$, and $Q$ is positive semidefinite. Implement a solution algorithm based on gradient projection [reference: J. Nocedal, S. Wright, Numerical Optimization] using the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly punished. Ask the teachers if you are uncertain about what this means in the context of your project.
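The workhorse of gradient projection here is the Euclidean projection onto a single unit simplex, which can be computed exactly in $O(n \log n)$ with the classical sort-based algorithm; since the simplices are disjoint, the projection onto the whole feasible set decomposes block by block. An illustrative sketch (helper names are ours, not prescribed):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {x >= 0, sum(x) = 1}, sort-based algorithm."""
    u = np.sort(v)[::-1]                  # sort in decreasing order
    css = np.cumsum(u) - 1.0
    # largest k with u_k > (sum of the k+1 largest entries - 1)/(k+1)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    tau = css[k] / (k + 1.0)              # the optimal shift
    return np.maximum(v - tau, 0.0)

def project_blocks(v, blocks):
    """Project each block of indices onto its own simplex."""
    x = np.empty_like(v)
    for I in blocks:
        x[I] = project_simplex(v[I])
    return x
```

The condition defining `k` always holds for the first sorted entry, so the index search never fails on a nonempty block.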
Required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the optimization problem to be solved

the implemented solution method, with a discussion of all relevant details (linear algebra techniques used, algorithmic parameters and their setting, ...)

a summary of the known theoretical convergence results for the approach and a discussion about whether or not they apply to the problem at hand and why

a description of how a reasonable test set of instances has been generated/gathered from the internet, where the instances must cover a reasonable span of sizes and possibly other characteristics (sparsity/rank of Q, ...)

the description of experiments aimed at finding the best algorithmic parameters, if any, for solving the problems at hand

the description of the behavior of the approach on the test instances with the best algorithmic parameters, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time);

a comparison with the efficiency and effectiveness of available off-the-shelf tools, factoring in elements like the difference of programming language if necessary

optionally, some analysis of what the performed experiments suggest in terms of viability of the approach (whether it can ever be competitive, and for what kind of instances) and possible ways to improve the performance


The source code of the implemented approach, comprising any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheet/databases/text files
 Consider the convex quadratic program on a set of $k$ disjoint simplices:
$$ \min \{\, x^T Q x + q^T x \,\colon\, x \geq 0,\ \sum_{i \in I^h} x_i = 1 \quad h = 1, \dots, k \,\} $$
where $x \in \mathbb{R}^n$, the index sets $I^h$, $h = 1, \dots, k$ form a partition of the set $\{1, \dots, n\}$, and $Q$ is positive semidefinite. Implement a solution algorithm of the class of active-set methods [reference: J. Nocedal, S. Wright, Numerical Optimization] using the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly punished. Ask the teachers if you are uncertain about what this means in the context of your project.
Required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the optimization problem to be solved

the implemented solution method, with a discussion of all relevant details (linear algebra techniques used, algorithmic parameters and their setting, ...)

a summary of the known theoretical convergence results for the approach and a discussion about whether or not they apply to the problem at hand and why

a description of how a reasonable test set of instances has been generated/gathered from the internet, where the instances must cover a reasonable span of sizes and possibly other characteristics (sparsity/rank of Q, ...)

the description of experiments aimed at finding the best algorithmic parameters, if any, for solving the problems at hand

the description of the behavior of the approach on the test instances with the best algorithmic parameters, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time);

a comparison with the efficiency and effectiveness of available off-the-shelf tools, factoring in elements like the difference of programming language if necessary

optionally, some analysis of what the performed experiments suggest in terms of viability of the approach (whether it can ever be competitive, and for what kind of instances) and possible ways to improve the performance


The source code of the implemented approach, comprising any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheet/databases/text files
 Consider the convex quadratic program on a set of $k$ disjoint simplices:
$$ \min \{\, x^T Q x + q^T x \,\colon\, x \geq 0,\ \sum_{i \in I^h} x_i = 1 \quad h = 1, \dots, k \,\} $$
where $x \in \mathbb{R}^n$, the index sets $I^h$, $h = 1, \dots, k$ form a partition of the set $\{1, \dots, n\}$, and $Q$ is positive semidefinite. Implement a solution algorithm of the class of interior-point methods [reference: J. Nocedal, S. Wright, Numerical Optimization] using the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly punished. Ask the teachers if you are uncertain about what this means in the context of your project.
Required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the optimization problem to be solved

the implemented solution method, with a discussion of all relevant details (linear algebra techniques used, algorithmic parameters and their setting, ...)

a summary of the known theoretical convergence results for the approach and a discussion about whether or not they apply to the problem at hand and why

a description of how a reasonable test set of instances has been generated/gathered from the internet, where the instances must cover a reasonable span of sizes and possibly other characteristics (sparsity/rank of Q, ...)

the description of experiments aimed at finding the best algorithmic parameters, if any, for solving the problems at hand

the description of the behavior of the approach on the test instances with the best algorithmic parameters, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time);

a comparison with the efficiency and effectiveness of available off-the-shelf tools, factoring in elements like the difference of programming language if necessary

optionally, some analysis of what the performed experiments suggest in terms of viability of the approach (whether it can ever be competitive, and for what kind of instances) and possible ways to improve the performance


The source code of the implemented approach, comprising any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheet/databases/text files
 Consider the convex quadratic program on a set of $k$ disjoint simplices:
$$ \min \{\, x^T Q x + q^T x \,\colon\, x \geq 0,\ \sum_{i \in I^h} x_i = 1 \quad h = 1, \dots, k \,\} $$
where $x \in \mathbb{R}^n$, the index sets $I^h$, $h = 1, \dots, k$ form a partition of the set $\{1, \dots, n\}$, and $Q$ is positive semidefinite. Implement a solution algorithm of the class of conditional gradient (Frank-Wolfe type) methods [reference: https://en.wikipedia.org/wiki/Frank%E2%80%93Wolfe_algorithm] using the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine (for instance, for solving the LP within the algorithm, but do consider developing ad-hoc approaches exploiting the structure of your problem), but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly punished. Ask the teachers if you are uncertain about what this means in the context of your project.
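A useful observation for this project: the linear subproblem of Frank-Wolfe over a product of simplices has a closed-form solution, namely placing all the unit mass of each block on its smallest-gradient coordinate, so no general LP solver is actually needed. An illustrative sketch with exact line search for the quadratic objective (names and defaults are ours):

```python
import numpy as np

def fw_lp_oracle(g, blocks, n):
    """argmin { g'z : z in product of simplices }: for each block, put all
    unit mass on the coordinate with the smallest gradient entry."""
    z = np.zeros(n)
    for I in blocks:
        z[I[np.argmin(g[I])]] = 1.0
    return z

def frank_wolfe(Q, q, blocks, max_iter=1000, tol=1e-6):
    n = len(q)
    x = np.zeros(n)
    for I in blocks:                 # feasible start: uniform on each simplex
        x[I] = 1.0 / len(I)
    for _ in range(max_iter):
        g = 2 * Q @ x + q            # gradient of x'Qx + q'x
        z = fw_lp_oracle(g, blocks, n)
        d = z - x
        gap = -(g @ d)               # Frank-Wolfe duality gap, stopping criterion
        if gap < tol:
            break
        # exact line search: minimizer of the quadratic along d, clipped to [0, 1]
        dQd = d @ Q @ d
        alpha = min(1.0, gap / (2 * dQd)) if dQd > 0 else 1.0
        x = x + alpha * d
    return x
```

The duality gap $-g^T d$ upper-bounds the suboptimality, which makes it a principled stopping test.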
Required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the optimization problem to be solved

the implemented solution method, with a discussion of all relevant details (linear algebra techniques used, algorithmic parameters and their setting, ...)

a summary of the known theoretical convergence results for the approach and a discussion about whether or not they apply to the problem at hand and why

a description of how a reasonable test set of instances has been generated/gathered from the internet, where the instances must cover a reasonable span of sizes and possibly other characteristics (sparsity/rank of Q, ...)

the description of experiments aimed at finding the best algorithmic parameters, if any, for solving the problems at hand

the description of the behavior of the approach on the test instances with the best algorithmic parameters, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time);

a comparison with the efficiency and effectiveness of available off-the-shelf tools, factoring in elements like the difference of programming language if necessary

optionally, some analysis of what the performed experiments suggest in terms of viability of the approach (whether it can ever be competitive, and for what kind of instances) and possible ways to improve the performance


The source code of the implemented approach, comprising any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheet/databases/text files

Consider the convex quadratic separable MinCost Flow Problem
$$\min \{\, x^T Q x + q^T x \,\colon\, Ex = b,\ 0 \leq x \leq u \,\}$$
where $E$ is the node-arc incidence matrix of a directed connected graph $G = (N, A)$, and $Q$ is diagonal and positive semidefinite (i.e., its diagonal entries are $\geq 0$). Implement a solution algorithm of the class of active-set methods [reference: J. Nocedal, S. Wright, Numerical Optimization] using the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly punished. Ask the teachers if you are uncertain about what this means in the context of your project.
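For concreteness, the node-arc incidence matrix $E$ has one row per node and one column per arc, with a $+1$ in the tail node and a $-1$ in the head node of each arc (the opposite sign convention is also common); flow conservation then reads $Ex = b$. A small illustrative helper (dense for clarity; a sparse format is advisable for real instances):

```python
import numpy as np

def incidence_matrix(n_nodes, arcs):
    """Node-arc incidence matrix E of a directed graph: column j has a +1
    in the tail node and a -1 in the head node of arc j = (tail, head)."""
    E = np.zeros((n_nodes, len(arcs)))
    for j, (tail, head) in enumerate(arcs):
        E[tail, j] = 1.0
        E[head, j] = -1.0
    return E
```

Note that the rows of $E$ sum to zero, so for a connected graph $E$ has rank $|N| - 1$: one flow-conservation constraint is always redundant, a fact worth exploiting in the linear algebra of the method.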
Required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the optimization problem to be solved

the implemented solution method, with a discussion of all relevant details (linear algebra techniques used, algorithmic parameters and their setting, ...)

a summary of the known theoretical convergence results for the approach and a discussion about whether or not they apply to the problem at hand and why

a description of how a reasonable test set of instances has been generated/gathered from the internet, where the instances must cover a reasonable span of sizes and possibly other characteristics (different topologies of G, ...)

the description of experiments aimed at finding the best algorithmic parameters, if any, for solving the problems at hand

the description of the behavior of the approach on the test instances with the best algorithmic parameters, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time);

a comparison with the efficiency and effectiveness of available off-the-shelf tools, factoring in elements like the difference of programming language if necessary

optionally, some analysis of what the performed experiments suggest in terms of viability of the approach (whether it can ever be competitive, and for what kind of instances) and possible ways to improve the performance


The source code of the implemented approach, comprising any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheet/databases/text files

Consider the convex quadratic separable MinCost Flow Problem
$$\min \{\, x^T Q x + q^T x \,\colon\, Ex = b,\ 0 \leq x \leq u \,\}$$
where $E$ is the node-arc incidence matrix of a directed connected graph $G = (N, A)$, and $Q$ is diagonal and positive semidefinite (i.e., its diagonal entries are $\geq 0$). Implement a solution algorithm of the class of interior-point methods [reference: J. Nocedal, S. Wright, Numerical Optimization] using the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine (for instance, for permuting the rows of the constraint matrix to avoid fill-in, but do consider developing ad-hoc approaches exploiting the structure of your problem), but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly punished. Ask the teachers if you are uncertain about what this means in the context of your project.
Required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the optimization problem to be solved

the implemented solution method, with a discussion of all relevant details (linear algebra techniques used, algorithmic parameters and their setting, ...)

a summary of the known theoretical convergence results for the approach and a discussion about whether or not they apply to the problem at hand and why

a description of how a reasonable test set of instances has been generated/gathered from the internet, where the instances must cover a reasonable span of sizes and possibly other characteristics (different topologies of G, ...)

the description of experiments aimed at finding the best algorithmic parameters, if any, for solving the problems at hand

the description of the behavior of the approach on the test instances with the best algorithmic parameters, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time);

a comparison with the efficiency and effectiveness of available off-the-shelf tools, factoring in elements like the difference of programming language if necessary

optionally, some analysis of what the performed experiments suggest in terms of viability of the approach (whether it can ever be competitive, and for what kind of instances) and possible ways to improve the performance


The source code of the implemented approach, comprising any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheet/databases/text files

Solve the linear least squares problem
$$\min_w \| \hat{X} w - y \|$$
where $X$, $y$ are the data and output matrices from the ML cup https://elearning.di.unipi.it/mod/folder/view.php?id=3615, and $\hat{X}$ is the matrix obtained by augmenting $X$ with a few functions of your choice of the features of the dataset. Example: if $X$ contains the columns [x1, x2], you may add functions such as log(x1), x1.^2, x1.*x2, ...
Use the following methods for the task:
- an algorithm of the class of conjugate gradient methods [reference: J. Nocedal, S. Wright, Numerical Optimization];
- thin QR factorization with Householder reflectors [Trefethen, Bau, Numerical Linear Algebra, Lecture 10], in the variant where one does not form the matrix Q, but stores the Householder vectors u_k and uses them to perform (implicitly) products with Q and Q^T.
You should implement this thin QR factorization yourself, in particular. The computational cost of your implementation should scale linearly with the largest dimension of the matrix X.
You may use the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly punished. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
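As a hedged sketch of the second method (assuming the matrix has at least as many rows as columns; all names are illustrative): each step stores the Householder vector $u_k$ instead of forming $Q$, and products with $Q^T$ are performed by replaying the reflections, which keeps the factorization at $O(mn^2)$ time and $O(mn)$ memory:

```python
import numpy as np

def householder_qr(X):
    """Thin QR of an m x n matrix (m >= n) storing only the Householder
    vectors u_k and the n x n triangular factor R; Q is never formed."""
    m, n = X.shape
    R = X.astype(float)
    U = []
    for k in range(n):
        u = R[k:, k].copy()
        u[0] += np.copysign(np.linalg.norm(u), u[0])  # sign chosen for stability
        norm_u = np.linalg.norm(u)
        if norm_u > 0:
            u /= norm_u
        U.append(u)
        R[k:, k:] -= 2.0 * np.outer(u, u @ R[k:, k:])  # apply (I - 2uu') in place
    return U, np.triu(R[:n, :])

def apply_Qt(U, b):
    """Compute Q^T b implicitly by replaying the stored reflections."""
    y = b.astype(float)
    for k, u in enumerate(U):
        y[k:] -= 2.0 * u * (u @ y[k:])
    return y

def ls_solve(X, y):
    """Least-squares solution of min ||X w - y|| via the implicit thin QR."""
    U, R = householder_qr(X)
    n = X.shape[1]
    return np.linalg.solve(R, apply_Qt(U, y)[:n])
```

Products with $Q$ itself are obtained by replaying the same reflections in reverse order.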
Required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the problem to be solved

the implemented solution method, with a discussion of all relevant details

a description of how a reasonable test set of instances has been generated/gathered from the internet, where the instances must cover a reasonable span of sizes and possibly other characteristics (sparsity/rank of $\hat{X}$, ...)

the description of the behavior of the approach on the test instances, evaluating computational complexity and running time. Consider both the case of a single problem and that of several linear least squares problems with the same matrix $\hat{X}$

a comparison with the efficiency and effectiveness of available off-the-shelf tools, factoring in elements like the difference of programming language if necessary

optionally, some analysis of the possible ways to improve the performance


The source code of the implemented approach, comprising any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheet/databases/text files

Solve the linear least squares problem
$$\min_w \| \hat{X} w - y \|$$
where $X$, $y$ are the data and output matrices from the ML cup https://elearning.di.unipi.it/mod/folder/view.php?id=3615, and $\hat{X}$ is the matrix obtained by augmenting $X$ with a few functions of your choice of the features of the dataset. Example: if $X$ contains the columns [x1, x2], you may add functions such as log(x1), x1.^2, x1.*x2, ...
Use the following methods for the task:
- an algorithm of the class of limited-memory quasi-Newton methods [references: J. Nocedal, S. Wright, Numerical Optimization, https://arxiv.org/abs/1406.2572];
- thin QR factorization with Householder reflectors [Trefethen, Bau, Numerical Linear Algebra, Lecture 10], in the variant where one forms the columns of $Q_1$ only, not all those of Q.
You should implement this thin QR factorization yourself, in particular. The computational cost of your implementation should scale linearly with the largest dimension of the matrix X.
You may use the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly punished. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
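For the first method, the core of any limited-memory quasi-Newton implementation is the two-loop recursion that applies the implicit inverse-Hessian approximation built from the last few $(s, y)$ pairs (Nocedal & Wright, Algorithm 7.4). An illustrative sketch (names are ours):

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """L-BFGS two-loop recursion: returns -H g, where H is the implicit
    inverse-Hessian approximation built from the stored pairs (s, y)."""
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # most recent first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:                        # initial scaling H0 = gamma * I
        gamma = (s_list[-1] @ y_list[-1]) / (y_list[-1] @ y_list[-1])
        q *= gamma
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        rho = 1.0 / (y @ s)
        b = rho * (y @ q)
        q += (a - b) * s
    return -q
```

With an empty memory the recursion reduces to steepest descent, and with one exact pair it satisfies the secant condition $Hy = s$, which is a convenient sanity check for your implementation.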
Required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the problem to be solved

the implemented solution method, with a discussion of all relevant details

a description of how a reasonable test set of instances has been generated/gathered from the internet, where the instances must cover a reasonable span of sizes and possibly other characteristics (sparsity/rank of $\hat{X}$, ...)

the description of the behavior of the approach on the test instances, evaluating computational complexity and running time. Consider both the case of a single problem and that of several linear least squares problems with the same matrix $\hat{X}$

a comparison with the efficiency and effectiveness of available off-the-shelf tools, factoring in elements like the difference of programming language if necessary

optionally, some analysis of the possible ways to improve the performance


The source code of the implemented approach, comprising any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheet/databases/text files

Solve the linear least squares problem
$$\min_w \| \hat{X} w - y \|$$
where $X$, $y$ are the data and output matrices from the ML cup https://elearning.di.unipi.it/mod/folder/view.php?id=3615, and $\hat{X}$ is the matrix obtained by augmenting $X$ with a few functions of your choice of the features of the dataset. Example: if $X$ contains the columns [x1, x2], you may add functions such as log(x1), x1.^2, x1.*x2, ...
Use the following methods for the task:
- a standard momentum descent (heavy ball) approach [reference: http://www.cs.toronto.edu/~fritz/absps/momentum.pdf];
- an algorithm of the class of accelerated gradient methods [references: https://www.cs.cmu.edu/~ggordon/10725-F12/slides/09-acceleration.pdf, https://arxiv.org/pdf/1412.6980.pdf];
- thin QR factorization with Householder reflectors [Trefethen, Bau, Numerical Linear Algebra, Lecture 10], in the variant where one does not form the matrix Q, but stores the Householder vectors u_k and uses them to perform (implicitly) products with Q and Q^T.
You should implement this thin QR factorization yourself, in particular. The computational cost of your implementation should scale linearly with the largest dimension of the matrix X.
You may use the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly punished. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
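For the first method, heavy-ball momentum on the least-squares objective $\|Xw - y\|^2$ iterates $w_{k+1} = w_k + \beta(w_k - w_{k-1}) - \alpha \nabla f(w_k)$. A minimal illustrative sketch (the step size and momentum defaults are ours and must be tuned to the spectrum of $X^T X$):

```python
import numpy as np

def momentum_ls(X, y, lr, beta=0.9, iters=500):
    """Heavy-ball (Polyak) momentum for min ||Xw - y||^2,
    whose gradient is 2 X'(Xw - y)."""
    w = np.zeros(X.shape[1])
    v = np.zeros_like(w)             # the velocity w_k - w_{k-1}
    for _ in range(iters):
        g = 2 * X.T @ (X @ w - y)
        v = beta * v - lr * g
        w = w + v
    return w
```

For a quadratic, stability roughly requires $\alpha < 2(1+\beta)/L$ with $L = 2\lambda_{\max}(X^T X)$; the experiments on the best algorithmic parameters requested below amount to exploring this trade-off.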
Required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the problem to be solved

the implemented solution method, with a discussion of all relevant details

a description of how a reasonable test set of instances has been generated/gathered from the internet, where the instances must cover a reasonable span of sizes and possibly other characteristics (sparsity/rank of $\hat{X}$, ...)

the description of the behavior of the approach on the test instances, evaluating computational complexity and running time. Consider both the case of a single problem and that of several linear least squares problems with the same matrix $\hat{X}$

a comparison with the efficiency and effectiveness of available off-the-shelf tools, factoring in elements like the difference of programming language if necessary

optionally, some analysis of the possible ways to improve the performance


The source code of the implemented approach, comprising any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheet/databases/text files
 Consider the convex quadratic program on a set of $k$ disjoint simplices:
$$ \min \{\, x^T Q x + q^T x \,\colon\, x \geq 0,\ \sum_{i \in I^h} x_i = 1 \quad h = 1, \dots, k \,\} $$
where $x \in \mathbb{R}^n$, the index sets $I^h$, $h = 1, \dots, k$ form a partition of the set $\{1, \dots, n\}$, and $Q$ is positive semidefinite. Implement a solution algorithm of the class of conditional gradient (Frank-Wolfe type) methods [reference: https://en.wikipedia.org/wiki/Frank%E2%80%93Wolfe_algorithm] using the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine (for instance, for solving the LP within the algorithm, but do consider developing or at least using ad-hoc approaches exploiting the structure of your problem), but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly punished. Ask the teachers if you are uncertain about what this means in the context of your project.
Required output of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the optimization problem to be solved

the implemented solution method, with a discussion of all relevant details (linear algebra techniques used, algorithmic parameters and their setting, ...)

a summary of the known theoretical convergence results for the approach and a discussion about whether or not they apply to the problem at hand and why

a description on how a reasonable test set of instances has been generated/gathered from the internet, where the instances must cover a reasonable span of sizes and possibly other characteristics (for instance, sparsity of $Q$, number and sizes of the simplices, ...)

the description of experiments aimed at finding the best algorithmic parameters, if any, for solving the problems at hand

the description of the behavior of the approach on the test instances with the best algorithmic parameters, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time);

a comparison with the efficiency and effectiveness of available off-the-shelf tools, factoring in elements like differences in programming language if necessary

optionally, some analysis of what the performed experiments suggest in terms of viability of the approach (whether it can ever be competitive, and for what kind of instances) and possible ways to improve the performance


The source code of the implemented approach, including any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheets/databases/text files
 Solve a sparse linear system of the form $$ \begin{bmatrix} D & E^T\\ E & 0 \end{bmatrix} \begin{bmatrix} x\\y \end{bmatrix} = \begin{bmatrix} b\\c \end{bmatrix} $$
where $D$ is a diagonal matrix, and $E$ is the edge-node adjacency matrix of a graph. These problems arise as the KKT system of certain quadratic flow optimization problems on graphs.
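For concreteness, the block system can be assembled directly from a graph given as an edge list. This is only an illustrative sketch (dense for clarity; a sparse format would be used on real instances), assuming one common $\pm 1$ sign convention for $E$; the helper name is hypothetical.

```python
import numpy as np

def assemble_kkt(edges, n_nodes, d, b, c):
    """Build the saddle-point system [[D, E^T], [E, 0]] [x; y] = [b; c]
    from an edge list. Dense for clarity; sparse storage is advisable.
    Sign convention assumed: column j of E has +1 at the tail node
    and -1 at the head node of edge j."""
    m = len(edges)
    E = np.zeros((n_nodes, m))               # node x edge matrix
    for j, (u, v) in enumerate(edges):
        E[u, j], E[v, j] = 1.0, -1.0
    D = np.diag(np.asarray(d, dtype=float))  # diagonal block on the flow variables
    K = np.block([[D, E.T],
                  [E, np.zeros((n_nodes, n_nodes))]])
    return K, np.concatenate([b, c])
```

Since the rows of $E$ sum to zero for a connected graph, the assembled matrix is singular; handling this is part of the problem analysis.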
Use the following method for the task:
 GMRES [Reference: Trefethen, Bau, Lecture 35]. At each step, solve the inner least-squares problem with the Hessenberg matrix $H_k$ using a specialized version of the QR factorization for Hessenberg matrices. Aim for a computational cost of $O(k^2)$ for solving this inner problem.
You may use the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, for which the details were not covered in the lectures, a full implementation will not be required).
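As a rough sketch of the inner step, the $(k+1)\times k$ Hessenberg least-squares problem $\min_y \|\beta e_1 - H_k y\|$ can be solved with one Givens rotation per column; inside the actual GMRES loop the rotations would be stored and updated incrementally, one new column per iteration, so each step costs only $O(k)$ extra work on top of the $O(k^2)$ back substitution. Names below are illustrative, not prescribed.

```python
import numpy as np

def hessenberg_lstsq(H, beta):
    """Solve min_y || beta*e1 - H y || for an (k+1) x k Hessenberg H
    via Givens rotations: O(k) per column, O(k^2) overall."""
    k = H.shape[1]
    R = np.array(H, dtype=float)
    g = np.zeros(k + 1)
    g[0] = beta                               # rotated right-hand side beta*e1
    for j in range(k):                        # annihilate the subdiagonal entry R[j+1, j]
        a, b = R[j, j], R[j + 1, j]
        r = np.hypot(a, b)
        c, s = a / r, b / r
        G = np.array([[c, s], [-s, c]])       # 2x2 Givens rotation
        R[[j, j + 1], j:] = G @ R[[j, j + 1], j:]
        g[[j, j + 1]] = G @ g[[j, j + 1]]
    y = np.linalg.solve(np.triu(R[:k, :k]), g[:k])   # back substitution
    return y, abs(g[k])                       # |g[k]| is the residual norm, for free
```

A useful by-product is that $|g_{k+1}|$ equals the GMRES residual norm, so convergence can be monitored without forming the iterate at every step.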
Required output of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the problem to be solved

the implemented solution method, with a discussion of all relevant details

a description on how a reasonable test set of instances has been generated/gathered from the internet, where the instances must cover a reasonable span of sizes and possibly other characteristics.

the description of the behavior of the approach on the test instances, evaluating computational complexity and running time.

a comparison with the efficiency and effectiveness of available off-the-shelf tools, factoring in elements like differences in programming language if necessary

optionally, some analysis of the possible ways to improve the performance


The source code of the implemented approach, including any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheets/databases/text files
 Solve a sparse linear system of the form
$$E D^{-1} E^T x = b$$
where $D$ is a diagonal matrix, and $E$ is the edge-node adjacency matrix of a graph. These problems arise from solving the KKT system of certain quadratic flow optimization problems on graphs.
Use the following method for the task:
 Conjugate Gradient for linear systems [Reference: Trefethen, Bau, Lecture 38]. Your implementation should not require forming the matrix $E D^{-1} E^T$ explicitly, but should rely on a callback function which computes the product $E D^{-1} E^T v$ for a given vector $v$.
You may use the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, for which the details were not covered in the lectures, a full implementation will not be required).
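The matrix-free requirement can be met by writing the CG loop against a callback, as the method description suggests. Below is only a textbook sketch (assuming the operator behaves as symmetric positive definite on the relevant subspace); the function name and stopping test are illustrative choices.

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-10, maxit=None):
    """Matrix-free CG: matvec(v) returns the operator applied to v,
    e.g. E @ ((E.T @ v) / d) for the system E D^{-1} E^T x = b,
    so the matrix is never formed explicitly."""
    maxit = maxit if maxit is not None else 10 * len(b)
    x = np.zeros(len(b))
    r = b.astype(float).copy()        # residual b - A x for the start x = 0
    p = r.copy()
    rr = r @ r
    bnorm = np.linalg.norm(b)
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rr / (p @ Ap)         # step length along the search direction
        x = x + alpha * p
        r = r - alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) <= tol * bnorm:
            break
        p = r + (rr_new / rr) * p     # new A-conjugate search direction
        rr = rr_new
    return x
```

For the problem at hand, the callback would be something like `matvec = lambda v: E @ ((E.T @ v) / d)`, with `d` the diagonal of $D$, costing two sparse matrix-vector products and one vector division per iteration.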
Required output of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the problem to be solved

the implemented solution method, with a discussion of all relevant details

a description on how a reasonable test set of instances has been generated/gathered from the internet, where the instances must cover a reasonable span of sizes and possibly other characteristics.

the description of the behavior of the approach on the test instances, evaluating computational complexity and running time.

a comparison with the efficiency and effectiveness of available off-the-shelf tools, factoring in elements like differences in programming language if necessary

optionally, some analysis of the possible ways to improve the performance


The source code of the implemented approach, including any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheets/databases/text files