Projects without a Machine Learning part

  1. Given a symmetric, positive definite matrix $A \in \mathbb{R}^{n\times n}$, consider the problem

$$\min \frac{x^T A x}{x^T x} \,\colon\, x \in \mathbb{R}^n \setminus \{0\}$$

What is the exact solution of this problem? Compute the minimum using:

You may use the programming language of your choice (C/C++, Python, Matlab), but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab's or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, whether provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
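For context, the minimum of the Rayleigh quotient is known in closed form: it is the smallest eigenvalue $\lambda_{\min}(A)$, attained at a corresponding eigenvector (Courant-Fischer). A minimal Python sketch on a hypothetical random SPD instance, using `numpy.linalg.eigh` only as a checking subroutine, not as the solver:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)            # symmetric positive definite

def rayleigh(A, x):
    """Rayleigh quotient x^T A x / x^T x."""
    return float(x @ A @ x) / float(x @ x)

# The minimum of the quotient is the smallest eigenvalue of A,
# attained at the corresponding eigenvector.
eigvals, eigvecs = np.linalg.eigh(A)
lam_min, v_min = eigvals[0], eigvecs[:, 0]
```

Any other nonzero vector gives a quotient at least as large as `lam_min`, which is a useful sanity check for whatever iterative method you implement.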

Required outputs of the project are:


  1. Given a symmetric, positive definite matrix $A \in \mathbb{R}^{n\times n}$, consider the problem

    $$\min \frac{x^T A x}{x^T x} \,\colon\, x \in \mathbb{R}^n \setminus \{0\}$$

What is the exact solution of this problem? Compute the minimum using:

You may use the programming language of your choice (C/C++, Python, Matlab), but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab's or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, whether provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
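One classical matrix-free approach (a sketch only, not necessarily one of the methods required here) is power iteration applied to the shifted matrix $\sigma I - A$ with $\sigma \geq \lambda_{\max}(A)$: its dominant eigenvector is the minimizer of the Rayleigh quotient. A hypothetical synthetic instance with a known spectrum:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = V @ np.diag(np.arange(1.0, n + 1.0)) @ V.T   # SPD, eigenvalues 1..8

# For sigma >= lambda_max(A), the dominant eigenvector of sigma*I - A
# is the eigenvector of lambda_min(A); trace(A) is a valid (if loose) bound.
sigma = np.trace(A)
x = rng.standard_normal(n)
for _ in range(2000):
    x = sigma * x - A @ x            # apply (sigma*I - A) without forming it
    x /= np.linalg.norm(x)

lam = float(x @ A @ x)               # converges to lambda_min(A) = 1
```

Tighter shifts give faster convergence; the trace is just the simplest bound that needs no eigenvalue information.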

Required outputs of the project are:


  1. Consider the convex quadratic program on a set of $k$ disjoint simplices:

$$ \min \left\{ \, x^T Q x + q^T x \,\colon\, x \geq 0, \ \sum_{i \in I^h} x_i = 1, \ h = 1, \dots, k \, \right\} $$

where $x \in \mathbb{R}^n$, the index sets $I^h$, $h = 1, \dots, k$, form a partition of $\{ 1, \dots, n \}$, and $Q$ is positive semidefinite. Implement a solution algorithm based on gradient projection [reference: J. Nocedal, S. Wright, Numerical Optimization] using the programming language of your choice (C/C++, Python, Matlab), but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab's or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, whether provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project.
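The key primitive of gradient projection here is the Euclidean projection onto each simplex, which has an exact sort-based $O(n \log n)$ algorithm; on a product of disjoint simplices it applies blockwise. A sketch on a hypothetical 4-variable instance with two blocks:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1} (sort-based)."""
    u = np.sort(v)[::-1]                       # coordinates in decreasing order
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)             # optimal shift
    return np.maximum(v - theta, 0.0)

def project_blocks(v, blocks):
    """Projection onto a product of simplices, one per index block."""
    x = np.array(v, dtype=float)
    for idx in blocks:
        x[idx] = project_simplex(x[idx])
    return x

# hypothetical instance: n = 4, two simplices over {0, 1} and {2, 3}
x = project_blocks(np.array([2.0, 0.0, 0.3, 0.1]), [[0, 1], [2, 3]])
```

A gradient projection step is then `x = project_blocks(x - alpha * grad, blocks)` for a suitable stepsize rule, which is the part the project asks you to design.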

Required outputs of the project are:


  1. Consider the convex quadratic program on a set of $k$ disjoint simplices:

$$ \min \left\{ \, x^T Q x + q^T x \,\colon\, x \geq 0, \ \sum_{i \in I^h} x_i = 1, \ h = 1, \dots, k \, \right\} $$

where $x \in \mathbb{R}^n$, the index sets $I^h$, $h = 1, \dots, k$, form a partition of $\{ 1, \dots, n \}$, and $Q$ is positive semidefinite. Implement a solution algorithm of the class of active-set methods [reference: J. Nocedal, S. Wright, Numerical Optimization] using the programming language of your choice (C/C++, Python, Matlab), but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab's or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, whether provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project.
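A building block of any active-set method for this problem is the equality-constrained QP solved at each iteration over the current working set, which reduces to a single KKT linear system. A sketch under the convention $f(x) = x^T Q x + q^T x$ (so $\nabla f = 2Qx + q$), on a hypothetical 2-variable instance:

```python
import numpy as np

def eqp_solve(Q, q, A, b):
    """Solve min x^T Q x + q^T x  s.t.  A x = b via its KKT system:
    stationarity 2 Q x + q + A^T lam = 0 and feasibility A x = b."""
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[2 * Q, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-q, b]))
    return sol[:n], sol[n:]

# hypothetical instance: min 2*x1^2 + x2^2 over the simplex constraint x1 + x2 = 1
Q = np.diag([2.0, 1.0])
q = np.zeros(2)
A = np.ones((1, 2))
x, lam = eqp_solve(Q, q, A, np.array([1.0]))
```

The surrounding active-set logic (adding and dropping the bounds $x_i = 0$ based on signs of the multipliers) is the actual work of the project; this only shows the inner subproblem.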

Required outputs of the project are:


  1. Consider the convex quadratic program on a set of $k$ disjoint simplices:

$$ \min \left\{ \, x^T Q x + q^T x \,\colon\, x \geq 0, \ \sum_{i \in I^h} x_i = 1, \ h = 1, \dots, k \, \right\} $$

where $x \in \mathbb{R}^n$, the index sets $I^h$, $h = 1, \dots, k$, form a partition of $\{ 1, \dots, n \}$, and $Q$ is positive semidefinite. Implement a solution algorithm of the class of interior-point methods [reference: J. Nocedal, S. Wright, Numerical Optimization] using the programming language of your choice (C/C++, Python, Matlab), but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab's or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, whether provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project.
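As a sketch of the main ingredients (not a complete method), here is a primal-dual Newton iteration on the perturbed KKT conditions, with a fraction-to-boundary step and a geometrically decreasing barrier parameter $\mu$, on a hypothetical 2-variable instance; robust centering and step-size rules are the project's actual work:

```python
import numpy as np

# Hypothetical instance: min 2*x1^2 + x2^2  s.t.  x1 + x2 = 1, x >= 0.
Q = np.diag([2.0, 1.0])
q = np.zeros(2)
A = np.ones((1, 2))
b = np.array([1.0])

x, s = np.full(2, 0.5), np.ones(2)     # strictly positive primal/dual iterates
lam = np.zeros(1)
mu = 1.0
for _ in range(40):
    # Newton step on the perturbed KKT conditions:
    #   2 Q x + q + A^T lam - s = 0,  A x = b,  x_i * s_i = mu
    r = np.concatenate([2 * Q @ x + q + A.T @ lam - s, A @ x - b, x * s - mu])
    K = np.block([
        [2 * Q, A.T, -np.eye(2)],
        [A, np.zeros((1, 1)), np.zeros((1, 2))],
        [np.diag(s), np.zeros((2, 1)), np.diag(x)],
    ])
    d = np.linalg.solve(K, -r)
    dx, dlam, ds = d[:2], d[2:3], d[3:]
    # fraction-to-boundary rule keeps x and s strictly positive
    alpha = 1.0
    for v, dv in ((x, dx), (s, ds)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, 0.9 * float(np.min(-v[neg] / dv[neg])))
    x, lam, s = x + alpha * dx, lam + alpha * dlam, s + alpha * ds
    mu *= 0.5                          # follow the central path toward optimality
```

On this instance the iterates approach the interior optimum $x = (1/3, 2/3)$ as $\mu \to 0$.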

Required outputs of the project are:


  1. Consider the convex quadratic program on a set of $k$ disjoint simplices:

$$ \min \left\{ \, x^T Q x + q^T x \,\colon\, x \geq 0, \ \sum_{i \in I^h} x_i = 1, \ h = 1, \dots, k \, \right\} $$

where $x \in \mathbb{R}^n$, the index sets $I^h$, $h = 1, \dots, k$, form a partition of $\{ 1, \dots, n \}$, and $Q$ is positive semidefinite. Implement a solution algorithm of the class of conditional gradient (Frank-Wolfe type) methods [reference: https://en.wikipedia.org/wiki/Frank%E2%80%93Wolfe_algorithm] using the programming language of your choice (C/C++, Python, Matlab), but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab's or otherwise) if an inner step of the algorithm requires them as a subroutine (for instance, for solving the LP within the algorithm; but do consider developing ad-hoc approaches that exploit the structure of your problem), but your final implementation should not be a single library call. Also, blatant copying from existing material, whether provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project.
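On a product of simplices the LP subproblem of Frank-Wolfe has a closed-form solution, so no LP solver is needed: in each block, all unit mass goes to the coordinate with the smallest gradient entry. A sketch with hypothetical data:

```python
import numpy as np

def fw_vertex(g, blocks):
    """Linear minimization oracle over a product of simplices:
    argmin_{s feasible} g^T s puts all unit mass, within each block,
    on the coordinate with the smallest gradient entry."""
    s = np.zeros_like(g)
    for idx in blocks:
        s[idx[np.argmin(g[idx])]] = 1.0
    return s

# hypothetical gradient and two blocks {0, 1} and {2, 3}
g = np.array([0.3, -1.0, 2.0, 0.5])
s = fw_vertex(g, [np.array([0, 1]), np.array([2, 3])])
```

This is exactly the kind of structure-exploiting ad-hoc approach the statement suggests: the oracle costs $O(n)$ per iteration instead of a general LP solve.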

Required outputs of the project are:


  1. Consider the convex quadratic separable Min-Cost Flow Problem

    $$\min \left\{ \, x^T Q x + q^T x \,\colon\, Ex = b, \ 0 \leq x \leq u \, \right\}$$

where $E$ is the node-arc incidence matrix of a directed connected graph $G = (N, A)$, and $Q$ is diagonal and positive semidefinite (i.e., all of its diagonal entries are nonnegative). Implement a solution algorithm of the class of active-set methods [reference: J. Nocedal, S. Wright, Numerical Optimization] using the programming language of your choice (C/C++, Python, Matlab), but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab's or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, whether provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project.
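For building test instances, the node-arc incidence matrix $E$ can be assembled directly from the arc list; a sketch (the $\pm 1$ orientation convention varies between texts, and the row-rank deficiency of connected graphs is worth checking):

```python
import numpy as np

def incidence_matrix(n_nodes, arcs):
    """Node-arc incidence matrix: the column of arc (i, j) has +1 in
    row i (tail) and -1 in row j (head); the sign convention varies."""
    E = np.zeros((n_nodes, len(arcs)))
    for k, (i, j) in enumerate(arcs):
        E[i, k], E[j, k] = 1.0, -1.0
    return E

# hypothetical 3-node directed cycle
E = incidence_matrix(3, [(0, 1), (1, 2), (2, 0)])
```

Each column sums to zero, so for a connected graph $E$ has rank $|N| - 1$; active-set implementations typically exploit this by working with spanning trees or a reduced row set.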

Required outputs of the project are:


  1. Consider the convex quadratic separable Min-Cost Flow Problem

    $$\min \left\{ \, x^T Q x + q^T x \,\colon\, Ex = b, \ 0 \leq x \leq u \, \right\}$$

where $E$ is the node-arc incidence matrix of a directed connected graph $G = (N, A)$, and $Q$ is diagonal and positive semidefinite (i.e., all of its diagonal entries are nonnegative). Implement a solution algorithm of the class of interior-point methods [reference: J. Nocedal, S. Wright, Numerical Optimization] using the programming language of your choice (C/C++, Python, Matlab), but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab's or otherwise) if an inner step of the algorithm requires them as a subroutine (for instance, for permuting the rows of the constraint matrix to avoid fill-in; but do consider developing ad-hoc approaches that exploit the structure of your problem), but your final implementation should not be a single library call. Also, blatant copying from existing material, whether provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project.
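A structural fact worth exploiting here: when the Newton system of an interior-point iteration is reduced to normal equations of the form $E D^{-1} E^T \Delta y = r$, the coefficient matrix is a weighted graph Laplacian (symmetric, with zero row sums). A sketch on a hypothetical 3-node instance; which diagonal $D$ appears depends on the interior-point variant chosen:

```python
import numpy as np

# Hypothetical graph: 4 arcs on 3 nodes, assembled as a node-arc incidence matrix.
arcs = [(0, 1), (1, 2), (2, 0), (0, 2)]
E = np.zeros((3, len(arcs)))
for k, (i, j) in enumerate(arcs):
    E[i, k], E[j, k] = 1.0, -1.0

d = np.array([1.0, 2.0, 4.0, 4.0])        # hypothetical diagonal of D
L = E @ np.diag(1.0 / d) @ E.T            # weighted graph Laplacian
```

Symmetry and the zero row sums (the all-ones vector is in the null space) are what ad-hoc solvers for this class of systems rely on.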

Required outputs of the project are:


  1. Solve the linear least squares problem

    $$\min_w \|\hat{X}w-y\|$$

where $X$, $y$ are the data and output matrices from the ML cup https://elearning.di.unipi.it/mod/folder/view.php?id=3615, and $\hat{X}$ is the matrix obtained by augmenting $X$ with a few functions, of your choice, of the features of the dataset. Example: if $X$ contains the columns x1 and x2, you may add features such as log(x1), x1.^2, x1.*x2, and so on.

Use the following methods for the task:

In particular, you should implement this thin QR factorization yourself. The computational cost of your implementation should scale linearly with the largest dimension of the matrix $X$.

You may use the programming language of your choice (C/C++, Python, Matlab), but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab's or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, whether provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
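A sketch of a thin Householder QR meeting the stated cost requirement: only $n$ reflectors are stored and each is applied to the trailing submatrix, so the cost is $O(mn^2)$, linear in the large dimension $m$. This is a minimal illustration on a hypothetical matrix, not a tuned implementation:

```python
import numpy as np

def thin_qr(X):
    """Thin Householder QR of an m x n matrix with m >= n: returns
    Q (m x n, orthonormal columns) and R (n x n, upper triangular).
    Each of the n reflectors touches an m x n submatrix at most,
    so the total cost is O(m n^2)."""
    m, n = X.shape
    R = X.astype(float).copy()
    vs = []
    for k in range(n):
        col = R[k:, k]
        v = col.copy()
        v[0] += np.copysign(np.linalg.norm(col), col[0])  # stable sign choice
        v /= np.linalg.norm(v)
        vs.append(v)
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])
    # accumulate the first n columns of Q by applying reflectors in reverse
    Q = np.eye(m, n)
    for k in reversed(range(n)):
        Q[k:, :] -= 2.0 * np.outer(vs[k], vs[k] @ Q[k:, :])
    return Q, np.triu(R[:n, :])

# hypothetical tall test matrix
rng = np.random.default_rng(2)
M = rng.standard_normal((50, 5))
Q, R = thin_qr(M)
```

In practice one stores the reflectors and applies $Q^T$ implicitly to the right-hand side instead of forming $Q$; the explicit accumulation above is only for checking.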

Required outputs of the project are:


  1. Solve the linear least squares problem

    $$\min_w \|\hat{X}w-y\|$$

where $X$, $y$ are the data and output matrices from the ML cup https://elearning.di.unipi.it/mod/folder/view.php?id=3615, and $\hat{X}$ is the matrix obtained by augmenting $X$ with a few functions, of your choice, of the features of the dataset. Example: if $X$ contains the columns x1 and x2, you may add features such as log(x1), x1.^2, x1.*x2, and so on.

Use the following methods for the task:

In particular, you should implement this thin QR factorization yourself. The computational cost of your implementation should scale linearly with the largest dimension of the matrix $X$.

You may use the programming language of your choice (C/C++, Python, Matlab), but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab's or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, whether provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
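Once a thin QR of $\hat{X}$ is available, the least-squares solution follows from one triangular solve, $w = R^{-1}(Q^T y)$. A sketch of the back substitution; `numpy.linalg.qr` stands in here for your own factorization, and the instance is hypothetical:

```python
import numpy as np

def back_substitution(R, c):
    """Solve the upper-triangular system R w = c by back substitution."""
    n = len(c)
    w = np.zeros(n)
    for i in reversed(range(n)):
        w[i] = (c[i] - R[i, i + 1:] @ w[i + 1:]) / R[i, i]
    return w

# hypothetical full-rank instance
rng = np.random.default_rng(3)
Xh = rng.standard_normal((40, 4))
y = rng.standard_normal(40)
Q, R = np.linalg.qr(Xh, mode="reduced")   # stand-in for your own thin QR
w = back_substitution(R, Q.T @ y)
```

For full-rank $\hat{X}$ this agrees with the normal-equations solution but with a much better conditioning behavior, which is the usual argument for the QR route.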

Required outputs of the project are:


  1. Solve the linear least squares problem

    $$\min_w \|\hat{X}w-y\|$$

where $X$, $y$ are the data and output matrices from the ML cup https://elearning.di.unipi.it/mod/folder/view.php?id=3615, and $\hat{X}$ is the matrix obtained by augmenting $X$ with a few functions, of your choice, of the features of the dataset. Example: if $X$ contains the columns x1 and x2, you may add features such as log(x1), x1.^2, x1.*x2, and so on.

Use the following methods for the task:

In particular, you should implement this thin QR factorization yourself. The computational cost of your implementation should scale linearly with the largest dimension of the matrix $X$.

You may use the programming language of your choice (C/C++, Python, Matlab), but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab's or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, whether provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
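As an illustration of the augmentation step, a hypothetical feature map appending an intercept, squares, and pairwise products (logarithms are omitted here only to sidestep domain issues on arbitrary data):

```python
import numpy as np

def augment(X):
    """Hypothetical feature map: append an intercept column plus the
    squares and pairwise products of the original columns."""
    cols = [np.ones(len(X)), *X.T]
    cols += [X[:, i] * X[:, j]
             for i in range(X.shape[1])
             for j in range(i, X.shape[1])]
    return np.column_stack(cols)

X = np.array([[1.0, 2.0], [3.0, 4.0]])
Xh = augment(X)       # columns: 1, x1, x2, x1^2, x1*x2, x2^2
```

Which functions to add is the modelling choice the project leaves to you; the only mechanical requirement is that every feature be a column-wise function of the originals.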

Required outputs of the project are:


  1. Consider the convex quadratic program on a set of $k$ disjoint simplices:

$$ \min \left\{ \, x^T Q x + q^T x \,\colon\, x \geq 0, \ \sum_{i \in I^h} x_i = 1, \ h = 1, \dots, k \, \right\} $$

where $x \in \mathbb{R}^n$, the index sets $I^h$, $h = 1, \dots, k$, form a partition of $\{ 1, \dots, n \}$, and $Q$ is positive semidefinite. Implement a solution algorithm of the class of conditional gradient (Frank-Wolfe type) methods [reference: https://en.wikipedia.org/wiki/Frank%E2%80%93Wolfe_algorithm] using the programming language of your choice (C/C++, Python, Matlab), but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab's or otherwise) if an inner step of the algorithm requires them as a subroutine (for instance, for solving the LP within the algorithm; but do consider developing, or at least using, ad-hoc approaches that exploit the structure of your problem), but your final implementation should not be a single library call. Also, blatant copying from existing material, whether provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project.
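A natural stopping criterion for conditional gradient methods is the Frank-Wolfe duality gap $\nabla f(x)^T (x - s)$, which is free to compute from the oracle output and upper-bounds $f(x) - f^*$ for convex $f$. A sketch on a hypothetical single-simplex instance:

```python
import numpy as np

def fw_gap(Q, q, x, blocks):
    """Frank-Wolfe duality gap g^T (x - s), where g is the gradient of
    x^T Q x + q^T x and s is the vertex returned by the linear
    minimization oracle over the product of simplices."""
    g = 2 * Q @ x + q
    s = np.zeros_like(x)
    for idx in blocks:
        s[idx[np.argmin(g[idx])]] = 1.0
    return float(g @ (x - s))

# hypothetical instance f(x) = ||x||^2 on one 2-dimensional simplex:
# the gap vanishes at the optimum [0.5, 0.5] and is positive at a vertex
blocks = [np.array([0, 1])]
gap_opt = fw_gap(np.eye(2), np.zeros(2), np.array([0.5, 0.5]), blocks)
gap_vertex = fw_gap(np.eye(2), np.zeros(2), np.array([1.0, 0.0]), blocks)
```

Reporting this gap along the iterations is a convenient way to document convergence in the project's experiments.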

Required outputs of the project are:

  1. Solve a sparse linear system of the form $$ \begin{bmatrix} D & E^T\\ E & 0 \end{bmatrix} \begin{bmatrix} x\\y \end{bmatrix} = \begin{bmatrix} b\\c \end{bmatrix} $$

where $D$ is a diagonal matrix and $E$ is the edge-node incidence matrix of a graph. Such systems arise as the KKT systems of certain quadratic flow optimization problems on graphs.

Use the following method for the task:

You may use the programming language of your choice (C/C++, Python, Matlab), but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab's or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, whether provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
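To experiment with the system, the saddle-point matrix can be assembled from an edge list; note that the node rows of an incidence matrix sum to zero, so the unmodified system is singular, and one common remedy is to ground one node by deleting its row. A sketch on a hypothetical 3-node instance (sign and transpose conventions for $E$ vary):

```python
import numpy as np

# Hypothetical 3-node / 3-edge instance.
edges = [(0, 1), (1, 2), (2, 0)]
E = np.zeros((3, len(edges)))
for k, (i, j) in enumerate(edges):
    E[i, k], E[j, k] = 1.0, -1.0
D = np.diag([1.0, 2.0, 3.0])           # diagonal block, one entry per edge

# The node rows of E sum to the zero row, so [[D, E^T], [E, 0]] is singular;
# grounding one node (dropping its row) restores nonsingularity.
E_g = E[:-1, :]
K = np.block([[D, E_g.T], [E_g, np.zeros((2, 2))]])
b, c = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0])
sol = np.linalg.solve(K, np.concatenate([b, c]))
x, y = sol[:3], sol[3:]
```

The dense solve is only a reference check here; the project's point is a sparse method that never forms $K$ as a dense matrix.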

Required outputs of the project are:


  1. Solve a sparse linear system of the form

$$ED^{-1}E^T x = b$$

where $D$ is a diagonal matrix and $E$ is the edge-node incidence matrix of a graph. Such systems arise from solving the KKT systems of certain quadratic flow optimization problems on graphs.

Use the following method for the task:

You may use the programming language of your choice (C/C++, Python, Matlab), but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab's or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, whether provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
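Since $E D^{-1} E^T$ is typically much denser than its factors, iterative methods usually apply it matrix-free as three cheap products rather than forming it. A sketch of such a matvec, checked against the explicit product on a hypothetical small instance:

```python
import numpy as np

def schur_matvec(E, d, x):
    """Matrix-free product (E D^{-1} E^T) x as three cheap, sparse-friendly
    steps; d holds the diagonal of D and the product E D^{-1} E^T
    (a weighted graph Laplacian) is never formed explicitly."""
    return E @ ((E.T @ x) / d)

# hypothetical instance: 4 edges on 3 nodes
edges = [(0, 1), (1, 2), (2, 0), (1, 0)]
E = np.zeros((3, len(edges)))
for k, (i, j) in enumerate(edges):
    E[i, k], E[j, k] = 1.0, -1.0
d = np.array([1.0, 2.0, 3.0, 4.0])
x = np.array([1.0, -2.0, 1.0])
v = schur_matvec(E, d, x)
```

A matvec of this form is exactly what Krylov-type iterative solvers need as their inner operation.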

Required outputs of the project are: