Projects associated with the Machine Learning course
 Participate in the ML cup competition associated with the Machine Learning course
https://elearning.di.unipi.it/mod/folder/view.php?id=3615
with a NN with topology and activation function of your choice (provided it is differentiable). Implement two training algorithms for the NN yourself, using:

a standard momentum descent approach [references: http://www.cs.toronto.edu/~fritz/absps/momentum.pdf]

an algorithm of the class of Conjugate Gradient methods [references: J. Nocedal, S. Wright, Numerical Optimization, http://www.sciencedirect.com/science/article/pii/S0096300313007558]
using the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
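For concreteness, the two update rules can be sketched as follows on a toy quadratic (purely illustrative: the function, step sizes and iteration counts are placeholders, and on a NN loss the CG line search would have to be an Armijo-Wolfe one rather than the exact step used here):

```python
import numpy as np

# Toy strongly convex quadratic f(x) = 0.5 x'Ax - b'x, with grad f(x) = Ax - b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b

# Heavy-ball momentum: x_{k+1} = x_k - eta * grad(x_k) + beta * (x_k - x_{k-1}).
x, x_prev = np.zeros(2), np.zeros(2)
eta, beta = 0.1, 0.9
for _ in range(500):
    x, x_prev = x - eta * grad(x) + beta * (x - x_prev), x

# Nonlinear CG (Polak-Ribiere). The step is exact only because f is quadratic;
# on a NN loss an Armijo-Wolfe line search would replace it.
y = np.zeros(2)
g = grad(y)
d = -g
for _ in range(50):
    alpha = -(g @ d) / (d @ (A @ d))      # exact minimizer of f along d
    y = y + alpha * d
    g_new = grad(y)
    if np.linalg.norm(g_new) < 1e-12:
        break
    beta_pr = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ formula
    d = -g_new + beta_pr * d
    g = g_new

x_star = np.linalg.solve(A, b)            # both x and y approach x_star
```

Swapping `beta_pr` for other formulae (Fletcher-Reeves, Hestenes-Stiefel) is exactly the kind of comparison the experimental section asks for.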
The required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the optimization problem to be solved

the implemented solution methods, with a discussion of all relevant details (stopping criterion employed, line search used, algorithmic parameters and their setting) for both

a summary of the known theoretical convergence results for the approaches and a discussion about whether or not they apply to the problem at hand and why

the description of experiments aimed at finding the best algorithmic parameters (including different CG formulae, if tested) for solving the problem at hand

the description of the behavior of the approaches on the provided data, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time) when compared to each other;

optionally, a comparison with the efficiency and effectiveness of available off-the-shelf tools (factoring in elements like the difference of programming language, if necessary) is appreciated

optionally, a comparison of the utility of the obtained solutions in terms of Machine Learning performance (generalization capabilities of the NN) is also appreciated; note that this is mandatory if the project is used for the ML course, too


The source code of the implemented approach, including any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheets/databases/text files
 Participate in the ML cup competition associated with the Machine Learning course
https://elearning.di.unipi.it/mod/folder/view.php?id=3615
with a NN with topology and activation function of your choice (provided it is differentiable). Implement two training algorithms for the NN yourself, using:

a standard momentum descent approach [references: http://www.cs.toronto.edu/~fritz/absps/momentum.pdf]

an algorithm of the class of limited-memory quasi-Newton methods [references: J. Nocedal, S. Wright, Numerical Optimization, http://www.sciencedirect.com/science/article/pii/0098135495002286, https://arxiv.org/abs/1406.2572]
using the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
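For concreteness, the heart of the limited-memory quasi-Newton approach, the L-BFGS two-loop recursion, can be sketched as follows on a toy quadratic (illustrative only: names and parameters are placeholders, and a real solver needs a Wolfe line search instead of the fixed unit step used here):

```python
import numpy as np

def lbfgs_direction(g, S, Y):
    """Two-loop recursion: returns -H g, with H the L-BFGS inverse Hessian
    approximation built from the stored curvature pairs (s_i, y_i)."""
    q = g.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(S, Y)]
    alphas = []
    for s, y, rho in zip(reversed(S), reversed(Y), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if S:                                 # standard H0 = gamma * I scaling
        s, y = S[-1], Y[-1]
        q *= (s @ y) / (y @ y)
    for (s, y, rho), a in zip(zip(S, Y, rhos), reversed(alphas)):
        beta = rho * (y @ q)
        q += (a - beta) * s
    return -q

# Toy quadratic stand-in for the NN loss: f(x) = 0.5 x'Ax - b'x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b

x = np.zeros(2)
S, Y, m = [], [], 5                       # memory: at most m pairs
for _ in range(30):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:
        break
    x_new = x + lbfgs_direction(g, S, Y)  # unit step; use a Wolfe line search in practice
    S.append(x_new - x)
    Y.append(grad(x_new) - g)
    S, Y = S[-m:], Y[-m:]
    x = x_new
```

Discarding all but the last `m` pairs is what makes the method "limited-memory"; the memory size `m` is a natural algorithmic parameter for the experimental section.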
The required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the optimization problem to be solved

the implemented solution methods, with a discussion of all relevant details (stopping criterion employed, line search used, algorithmic parameters and their setting) for both

a summary of the known theoretical convergence results for the approaches and a discussion about whether or not they apply to the problem at hand and why

the description of experiments aimed at finding the best algorithmic parameters (including different quasi-Newton formulae, if tested) for solving the problem at hand

the description of the behavior of the approaches on the provided data, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time) when compared to each other;

optionally, a comparison with the efficiency and effectiveness of available off-the-shelf tools (factoring in elements like the difference of programming language, if necessary) is appreciated

optionally, a comparison of the utility of the obtained solutions in terms of Machine Learning performance (generalization capabilities of the NN) is also appreciated; note that this is mandatory if the project is used for the ML course, too


The source code of the implemented approach, including any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheets/databases/text files
 Participate in the ML cup competition associated with the Machine Learning course
https://elearning.di.unipi.it/mod/folder/view.php?id=3615
with

a NN with topology and activation function of your choice (provided it is differentiable)

a standard linear regression (min least squares)
Implement two training algorithms for the NN yourself, using:

a standard momentum descent approach [references: http://www.cs.toronto.edu/~fritz/absps/momentum.pdf]

an algorithm of the class of accelerated gradient methods [reference: https://www.cs.cmu.edu/~ggordon/10725F12/slides/09acceleration.pdf, http://www.cs.toronto.edu/~adeandrade/assets/aconntmftc.pdf, https://arxiv.org/pdf/1412.6980.pdf]
and
 a basic version of the linear least squares solver of your choice.
using the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
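For concreteness, a minimal sketch of a Nesterov-style accelerated gradient iteration on a toy quadratic (illustrative only; the function and parameters are placeholders, not part of the assignment):

```python
import numpy as np

# Nesterov's accelerated gradient on a toy smooth quadratic
# f(x) = 0.5 x'Ax - b'x (a stand-in for the NN training loss).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b
L = np.linalg.eigvalsh(A).max()        # Lipschitz constant of the gradient

x = np.zeros(2)
y = np.zeros(2)                        # lookahead (extrapolated) point
t = 1.0
for _ in range(2000):
    x_new = y - grad(y) / L                         # gradient step at y
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    y = x_new + (t - 1.0) / t_new * (x_new - x)     # momentum extrapolation
    x, t = x_new, t_new
```

The key structural difference from plain momentum is that the gradient is evaluated at the extrapolated point `y`, not at the current iterate; the `t`-sequence is one of the classical choices for the momentum coefficient.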
The required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the optimization problem to be solved

the implemented solution methods, with a discussion of all relevant details (stopping criterion employed, line search used, algorithmic parameters and their setting) for both

a summary of the known theoretical convergence results for the approaches and a discussion about whether or not they apply to the problem at hand and why

a discussion of which method among the ones seen during the lectures (normal equations, with which inner solution method, QR, SVD) is expected to be more effective for the linear least squares problem, on the grounds of stability and computational cost

the description of experiments aimed at finding the best algorithmic parameters (including different accelerated gradient formulae, if tested) for solving the problem at hand

the description of the behavior of the approaches on the provided data, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time) when compared to each other;

the comparison of the behavior of the implemented least squares solver with the off-the-shelf implementation available in your programming language, in terms of speed and accuracy

optionally, a comparison with the efficiency and effectiveness of available off-the-shelf tools (factoring in elements like the difference of programming language, if necessary) is appreciated

optionally, a comparison of the utility of the obtained solutions in terms of Machine Learning performance (generalization capabilities of the linear regression and the NN) is also appreciated; note that this is mandatory if the project is used for the ML course, too


The source code of the implemented approach, including any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheets/databases/text files
 Participate in the ML cup competition associated with the Machine Learning course
https://elearning.di.unipi.it/mod/folder/view.php?id=3615
with

a NN with topology and activation function of your choice (provided it is differentiable), but with mandatory L_1 regularization

a standard linear regression (min least squares)
Implement two training algorithms for the NN yourself, using:

a standard momentum descent approach [references: http://www.cs.toronto.edu/~fritz/absps/momentum.pdf]

an algorithm of the class of accelerated gradient methods [reference: https://www.cs.cmu.edu/~ggordon/10725F12/slides/09acceleration.pdf, http://www.cs.toronto.edu/~adeandrade/assets/aconntmftc.pdf, https://arxiv.org/pdf/1412.6980.pdf]
and
 a basic version of the linear least squares solver of your choice
using the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
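For concreteness, the two classical routes to the linear least squares subproblem can be sketched as follows (illustrative only: numpy's factorizations stand in for the QR/Cholesky routines you would implement yourself, and are used here just to show that the two formulations agree):

```python
import numpy as np

# min_w ||Xw - y||_2 solved via (i) the normal equations and (ii) QR.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.standard_normal(50)

# (i) Normal equations X'X w = X'y: cheapest, but it squares the
# condition number of X (you would factor X'X, e.g. by Cholesky).
w_ne = np.linalg.solve(X.T @ X, X.T @ y)

# (ii) Reduced QR, X = QR: solve the triangular system R w = Q'y.
# (np.linalg.solve does not exploit triangularity; a real implementation
# would build Q, R via Householder reflections and use back substitution.)
Q, R = np.linalg.qr(X)
w_qr = np.linalg.solve(R, Q.T @ y)
```

On well-conditioned data the two solutions coincide to machine precision; the stability/cost trade-off between them is exactly the discussion point requested above.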
The required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the optimization problem to be solved

the implemented solution methods, with a discussion of all relevant details (stopping criterion employed, algorithmic parameters and their setting, ...) for both

a summary of the known theoretical convergence results for the approaches and a discussion about whether or not they apply to the problem at hand and why

a discussion of which method among the ones seen during the lectures (normal equations, with which inner solution method, QR, SVD) is expected to be more effective for the linear least squares problem, on the grounds of stability and computational cost

the description of experiments aimed at finding the best algorithmic parameters (including different accelerated gradient formulae, if tested) for solving the problem at hand

the description of the behavior of the approaches on the provided data, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time) when compared to each other;

the comparison of the behavior of the implemented least squares solver with the off-the-shelf implementation available in your programming language, in terms of speed and accuracy

optionally, a comparison with the efficiency and effectiveness of available off-the-shelf tools (factoring in elements like the difference of programming language, if necessary) is appreciated

optionally, a comparison of the utility of the obtained solutions in terms of Machine Learning performance (generalization capabilities of the linear regression and the NN) is also appreciated; note that this is mandatory if the project is used for the ML course, too


The source code of the implemented approach, including any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheets/databases/text files
 Participate in the ML cup competition associated with the Machine Learning course
https://elearning.di.unipi.it/mod/folder/view.php?id=3615
with

a NN with topology and activation function of your choice (provided it is differentiable), but with mandatory L_1 regularization

a standard linear regression (min least squares)
Implement two training algorithms for the NN yourself, using:

a standard momentum descent approach [references: http://www.cs.toronto.edu/~fritz/absps/momentum.pdf]

an algorithm of the class of deflected subgradient methods [reference: http://pages.di.unipi.it/frangio/abstracts.html#MPC16, http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf]
and
 a basic version of the linear least squares solver of your choice
using the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
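For concreteness, a deflected subgradient iteration can be sketched as follows on a small L_1-regularized least squares instance (illustrative only: the deflection coefficient, step-size rule and data are placeholders, and the nonsmooth NN training problem would replace the toy objective):

```python
import numpy as np

# Deflected subgradient method on f(w) = 0.5 ||Xw - y||^2 + lam * ||w||_1,
# a nonsmooth stand-in for the L_1-regularized training problem.
rng = np.random.default_rng(1)
X = rng.standard_normal((30, 5))
y = X @ np.array([1.0, 0.0, -1.0, 0.0, 2.0])
lam = 0.1

f = lambda w: 0.5 * np.sum((X @ w - y) ** 2) + lam * np.abs(w).sum()
subgrad = lambda w: X.T @ (X @ w - y) + lam * np.sign(w)  # one valid subgradient

w = np.zeros(5)
d = np.zeros(5)
best_w, best_f = w.copy(), f(w)
delta = 0.7                                    # deflection coefficient
for k in range(1, 3001):
    d = subgrad(w) + delta * d                 # deflected direction
    w = w - (1.0 / k) * d / (np.linalg.norm(d) + 1e-12)   # diminishing step
    if f(w) < best_f:                          # the method is not monotone:
        best_w, best_f = w.copy(), f(w)        # keep the incumbent
```

The deflection term `delta * d` damps the zig-zagging of the plain subgradient method; `delta` and the step-size rule are exactly the parameters the experimental section asks you to tune.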
The required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the optimization problem to be solved

the implemented solution methods, with a discussion of all relevant details (stopping criterion employed, algorithmic parameters and their setting, ...) for both

a summary of the known theoretical convergence results for the approaches and a discussion about whether or not they apply to the problem at hand and why

a discussion of which method among the ones seen during the lectures (normal equations, with which inner solution method, QR, SVD) is expected to be more effective for the linear least squares problem, on the grounds of stability and computational cost

the description of experiments aimed at finding the best algorithmic parameters (including different deflection/step-size formulae, if tested) for solving the problem at hand

the description of the behavior of the approaches on the provided data, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time) when compared to each other;

the comparison of the behavior of the implemented least squares solver with the off-the-shelf implementation available in your programming language, in terms of speed and accuracy

optionally, a comparison with the efficiency and effectiveness of available off-the-shelf tools (factoring in elements like the difference of programming language, if necessary) is appreciated

optionally, a comparison of the utility of the obtained solutions in terms of Machine Learning performance (generalization capabilities of the linear regression and the NN) is also appreciated; note that this is mandatory if the project is used for the ML course, too


The source code of the implemented approach, including any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheets/databases/text files
 Participate in the ML cup competition associated with the Machine Learning course
https://elearning.di.unipi.it/mod/folder/view.php?id=3615
with a NN with topology and activation function of your choice (provided it is differentiable), but with mandatory L_1 regularization. Implement two training algorithms for the NN yourself, using:

a standard momentum descent approach [references: http://www.cs.toronto.edu/~fritz/absps/momentum.pdf]

an algorithm of the class of bundle methods [reference: http://www.tandfonline.com/doi/abs/10.1080/10556780290027828]
using the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine (for instance, for solving the Master Problem of the bundle method), but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
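For concreteness, the cutting-plane model underlying bundle methods can be sketched in its simplest, unstabilized 1-D (Kelley) form; this is illustrative only: a real bundle method adds a proximal term and solves a small QP master problem, whereas here the 1-D master is solved by enumerating the kinks of the piecewise-linear model.

```python
import numpy as np

# Kelley's cutting-plane scheme in one dimension: the "bundle" is a set of
# subgradient cuts, and the master problem minimizes their pointwise max
# over a box.
f  = lambda x: abs(x - 1.0) + 0.5 * x * x     # convex, nonsmooth at x = 1
sg = lambda x: np.sign(x - 1.0) + x           # one subgradient of f

lo, hi = -5.0, 5.0
cuts = []                                     # bundle of (x_i, f(x_i), g_i)
x = 0.0
for _ in range(60):
    cuts.append((x, f(x), sg(x)))
    model = lambda z: max(fi + gi * (z - xi) for xi, fi, gi in cuts)
    # the model's minimizer over [lo, hi] lies at a box end or where two cuts cross
    cand = [lo, hi]
    for i in range(len(cuts)):
        for j in range(i + 1, len(cuts)):
            xi, fi, gi = cuts[i]
            xj, fj, gj = cuts[j]
            if abs(gi - gj) > 1e-12:
                cand.append(((fj - gj * xj) - (fi - gi * xi)) / (gi - gj))
    x = min((z for z in cand if lo <= z <= hi), key=model)
```

Each iteration adds one cut, so the model is an increasingly tight lower approximation of f; the proximal stabilization and bundle-size management that a full bundle method adds on top are the parameters the experimental section asks about.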
The required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the optimization problem to be solved

the implemented solution methods, with a discussion of all relevant details (form of the master problem, algorithmic parameters and their setting, ...) for both

a summary of the known theoretical convergence results for the approaches and a discussion about whether or not they apply to the problem at hand and why

the description of experiments aimed at finding the best algorithmic parameters (size of the bundle, heuristics for management of the proximal term, ...) for solving the problem at hand

the description of the behavior of the approaches on the provided data, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time) when compared to each other;

optionally, a comparison with the efficiency and effectiveness of available off-the-shelf tools (factoring in elements like the difference of programming language, if necessary) is appreciated

optionally, a comparison of the utility of the obtained solutions in terms of Machine Learning performance (generalization capabilities of the NN) is also appreciated; note that this is mandatory if the project is used for the ML course, too


The source code of the implemented approach, including any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheets/databases/text files
 Participate in the ML cup competition associated with the Machine Learning course
https://elearning.di.unipi.it/mod/folder/view.php?id=3615
with

a NN with topology of your choice, but with a mandatory piecewise-linear activation function (of your choice)

a standard linear regression (min least squares)
Implement two training algorithms for the NN yourself, using:

a standard momentum descent approach [references: http://www.cs.toronto.edu/~fritz/absps/momentum.pdf]

an algorithm of the class of deflected subgradient methods [reference: http://pages.di.unipi.it/frangio/abstracts.html#MPC16, http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf]
and
 a basic version of the linear least squares solver of your choice
using the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
The required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the optimization problem to be solved

the implemented solution methods, with a discussion of all relevant details (stopping criterion employed, algorithmic parameters and their setting, ...) for both

a summary of the known theoretical convergence results for the approaches and a discussion about whether or not they apply to the problem at hand and why

a discussion of which method among the ones seen during the lectures (normal equations, with which inner solution method, QR, SVD) is expected to be more effective for the linear least squares problem, on the grounds of stability and computational cost

the description of experiments aimed at finding the best algorithmic parameters (including different deflection/step-size formulae, if tested) for solving the problem at hand

the description of the behavior of the approaches on the provided data, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time) when compared to each other;

optionally, a comparison with the efficiency and effectiveness of available off-the-shelf tools (factoring in elements like the difference of programming language, if necessary) is appreciated

optionally, a comparison of the utility of the obtained solutions in terms of Machine Learning performance (generalization capabilities of the linear regression and the NN) is also appreciated; note that this is mandatory if the project is used for the ML course, too


The source code of the implemented approach, including any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheets/databases/text files
 Participate in the ML cup competition associated with the Machine Learning course
https://elearning.di.unipi.it/mod/folder/view.php?id=3615
with a NN with topology of your choice, but with a mandatory piecewise-linear activation function (of your choice). Implement two training algorithms for the NN yourself, using:

a standard momentum descent approach [references: http://www.cs.toronto.edu/~fritz/absps/momentum.pdf]

an algorithm of the class of bundle methods [reference: http://www.tandfonline.com/doi/abs/10.1080/10556780290027828]
using the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine (for instance, for solving the Master Problem of the bundle method), but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
The required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the optimization problem to be solved

the implemented solution methods, with a discussion of all relevant details (form of the master problem, algorithmic parameters and their setting, ...) for both

a summary of the known theoretical convergence results for the approaches and a discussion about whether or not they apply to the problem at hand and why

the description of experiments aimed at finding the best algorithmic parameters (size of the bundle, heuristics for management of the proximal term, ...) for solving the problem at hand

the description of the behavior of the approaches on the provided data, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time) when compared to each other;

optionally, a comparison with the efficiency and effectiveness of available off-the-shelf tools (factoring in elements like the difference of programming language, if necessary) is appreciated

optionally, a comparison of the utility of the obtained solutions in terms of Machine Learning performance (generalization capabilities of the NN) is also appreciated; note that this is mandatory if the project is used for the ML course, too


The source code of the implemented approach, including any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheets/databases/text files
 Participate in the ML cup competition associated with the Machine Learning course
https://elearning.di.unipi.it/mod/folder/view.php?id=3615
with a SVR-type approach of your choice (in particular, with one or more kernels of your choice). Implement a training algorithm for the SVR yourself, using an algorithm of the class of smoothed gradient methods [references: https://link.springer.com/article/10.1007/s1010700405525, https://arxiv.org/abs/1008.4000], in the programming language of your choice (C/C++, Python, Matlab) but with no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
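For concreteness, a smoothed-gradient iteration can be sketched as follows on a toy primal linear SVR, with a Huber-style smoothing of the eps-insensitive loss in the spirit of Nesterov's smoothing (illustrative only: the data, smoothing parameter and step size are placeholders, and a kernelized SVR would work on the dual problem instead):

```python
import numpy as np

# Gradient descent on a smoothed primal linear SVR: the eps-insensitive
# loss max(|r| - eps, 0) is replaced by a smooth approximation that is
# quadratic on the band of width mu just outside the tube.
rng = np.random.default_rng(2)
X = rng.standard_normal((80, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.05 * rng.standard_normal(80)

eps, mu, C = 0.1, 0.2, 1.0

def obj_grad(w):
    r = X @ w - y
    a = np.abs(r) - eps                    # excess over the eps-tube
    loss = np.where(a <= 0, 0.0,
                    np.where(a <= mu, a ** 2 / (2 * mu), a - mu / 2))
    dl = np.where(a <= 0, 0.0,
                  np.where(a <= mu, a / mu, 1.0)) * np.sign(r)
    return 0.5 * w @ w + C * loss.sum(), w + C * (X.T @ dl)

w = np.zeros(3)
L = 1.0 + C * np.linalg.norm(X, 2) ** 2 / mu   # gradient Lipschitz bound
obj0, _ = obj_grad(w)
for _ in range(5000):
    obj, g = obj_grad(w)
    w -= g / L                                 # fixed 1/L step: monotone descent
```

Note the trade-off that the theoretical discussion should cover: smaller `mu` means a better approximation of the true loss but a larger Lipschitz constant `L`, hence smaller steps.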
The required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the optimization problem to be solved

the implemented solution method, with a discussion of all relevant details (algorithmic parameters and their setting, ...)

a summary of the known theoretical convergence results for the approach and a discussion about whether or not they apply to the problem at hand and why

the description of experiments aimed at finding the best algorithmic parameters (including different accelerated gradient formulae, if tested) for solving the problem at hand

the description of the behavior of the approach on the provided data, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time);


optionally, a comparison with the efficiency and effectiveness of available off-the-shelf tools (factoring in elements like the difference of programming language, if necessary) is appreciated

optionally, a comparison of the utility of the obtained solutions in terms of Machine Learning performance (generalization capabilities of the SVR) is also appreciated; note that this is mandatory if the project is used for the ML course, too


The source code of the implemented approach, including any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheets/databases/text files
 Participate in the ML cup competition associated with the Machine Learning course
https://elearning.di.unipi.it/mod/folder/view.php?id=3615
with a SVR-type approach of your choice (in particular, with one or more kernels of your choice). Implement a training algorithm for the SVR yourself, using an algorithm of the class of bundle methods [references: http://www.tandfonline.com/doi/abs/10.1080/10556780290027828, https://papers.nips.cc/paper/3230bundlemethodsformachinelearning.pdf], in the programming language of your choice (C/C++, Python, Matlab) but with no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine (for instance, for solving the Master Problem of the bundle method), but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
The required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the optimization problem to be solved

the implemented solution method, with a discussion of all relevant details (form of the master problem, algorithmic parameters and their setting, ...)

a summary of the known theoretical convergence results for the approach and a discussion about whether or not they apply to the problem at hand and why

the description of experiments aimed at finding the best algorithmic parameters (size of the bundle, heuristics for management of the proximal term, ...) for solving the problem at hand

the description of the behavior of the approach on the provided data, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time);

optionally, a comparison with the efficiency and effectiveness of available off-the-shelf tools (factoring in elements like the difference of programming language, if necessary) is appreciated

optionally, a comparison of the utility of the obtained solutions in terms of Machine Learning performance (generalization capabilities of the SVR) is also appreciated; note that this is mandatory if the project is used for the ML course, too


The source code of the implemented approach, including any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheets/databases/text files
 Participate in the ML cup competition associated with the Machine Learning course
https://elearning.di.unipi.it/mod/folder/view.php?id=3615
with a SVR-type approach of your choice (in particular, with one or more kernels of your choice). Implement a training algorithm for the SVR yourself, using an algorithm of the class of active-set methods [references: J. Nocedal, S. Wright, Numerical Optimization, http://www.jmlr.org/papers/volume7/scheinberg06a/scheinberg06a.pdf], in the programming language of your choice (C/C++, Python, Matlab) but with no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine, but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
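For concreteness, the active-set mechanics (free/active split, solve on the free variables, step back to the bound, drop variables) can be sketched on the simplest related problem, nonnegative least squares, in the style of Lawson-Hanson (illustrative only: the SVR dual adds upper bounds and an equality constraint, but the logic is analogous):

```python
import numpy as np

def nnls_active_set(A, b, tol=1e-10, max_iter=100):
    """Lawson-Hanson style active-set method for min ||Ax - b||  s.t.  x >= 0."""
    m, n = A.shape
    x = np.zeros(n)
    P = np.zeros(n, dtype=bool)              # passive (free) variables
    w = A.T @ (b - A @ x)                    # negative gradient: KKT multipliers
    for _ in range(max_iter):
        if P.all() or w[~P].max() <= tol:
            break                            # KKT conditions hold: optimal
        P[np.argmax(np.where(~P, w, -np.inf))] = True    # free the best variable
        while True:
            s = np.zeros(n)
            s[P] = np.linalg.lstsq(A[:, P], b, rcond=None)[0]
            if s[P].min() > tol:
                x = s
                break
            # step back to the first bound that is hit, then re-activate
            mask = P & (s <= tol)
            alpha = np.min(x[mask] / (x[mask] - s[mask] + 1e-15))
            x = x + alpha * (s - x)
            P &= x > tol
        w = A.T @ (b - A @ x)
    return x

x_demo = nnls_active_set(np.eye(3), np.array([1.0, -2.0, 3.0]))
# x_demo = [1, 0, 3]: the component that would go negative stays at its bound
```

The inner `lstsq` call is exactly the kind of linear-algebra subroutine the rules above allow; the linear algebra techniques used there (and how to update factorizations as the active set changes) are a natural discussion point for the report.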
The required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the optimization problem to be solved

the implemented solution method, with a discussion of all relevant details (linear algebra techniques used, algorithmic parameters and their setting, ...)

a summary of the known theoretical convergence results for the approach and a discussion about whether or not they apply to the problem at hand and why

the description of experiments aimed at finding the best algorithmic parameters (management of the active set, ...) for solving the problem at hand

the description of the behavior of the approach on the provided data, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time);

optionally, a comparison with the efficiency and effectiveness of available off-the-shelf tools (factoring in elements like the difference in programming language if necessary) is appreciated

optionally, a comparison of the utility of the obtained solutions in terms of Machine Learning performance (generalization capabilities of the SVR) is also appreciated; note that this is mandatory if the project is used for the ML course, too


The source code of the implemented approach, including any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheets/databases/text files
 Participate in the ML cup competition associated with the Machine Learning course
https://elearning.di.unipi.it/mod/folder/view.php?id=3615
with an SVR-type approach of your choice (in particular, with one or more kernels of your choice). Implement yourself a training algorithm of the SVR using an algorithm of the class of interior-point methods [references: J. Nocedal, S. Wright, Numerical Optimization, http://epubs.siam.org/doi/abs/10.1137/S1052623400374379?journalCode=sjope8], using the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine (for instance, for solving the linear systems arising at each iteration), but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
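As a hedged sketch of the interior-point idea, the following Python snippet applies a primal log-barrier method to a box-constrained QP standing in for the SVR dual (the equality constraint of the actual dual is omitted; all names and parameter values are illustrative):

```python
import numpy as np

def barrier_qp(Q, c, C, mu=1.0, shrink=0.2, tol=1e-8):
    """Primal log-barrier method for  min 1/2 x'Qx + c'x  s.t. 0 <= x <= C.

    The bound constraints are replaced by the barrier term
    -mu * sum(log x_i + log(C - x_i)); the smooth barrier problem is solved
    by damped Newton steps while mu is driven to zero.
    """
    x = np.full(len(c), C / 2.0)                 # strictly interior start
    while mu > tol:
        for _ in range(50):                      # centering (Newton) steps
            g = Q @ x + c - mu / x + mu / (C - x)
            if np.linalg.norm(g) < tol:
                break
            H = Q + np.diag(mu / x**2 + mu / (C - x)**2)
            dx = np.linalg.solve(H, -g)
            t = 1.0
            while np.any(x + t * dx <= 0) or np.any(x + t * dx >= C):
                t *= 0.5                         # damp to stay strictly interior
            x = x + t * dx
        mu *= shrink                             # tighten the barrier
    return x

# Hypothetical 2-variable example: the solver pushes x2 toward its lower bound
x_opt = barrier_qp(np.array([[2.0, 0.0], [0.0, 2.0]]), np.array([-2.0, 2.0]), C=5.0)
```

Note how the iterates remain strictly feasible throughout, approaching the bound only as the barrier parameter mu vanishes; this is the characteristic behavior the report should document.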
The required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the optimization problem to be solved

the implemented solution method, with a discussion of all relevant details (linear algebra techniques used, algorithmic parameters and their setting, ...)

a summary of the known theoretical convergence results for the approach and a discussion about whether or not they apply to the problem at hand and why

the description of experiments aimed at finding the best algorithmic parameters for solving the problem at hand

the description of the behavior of the approach on the provided data, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time);

optionally, a comparison with the efficiency and effectiveness of available off-the-shelf tools (factoring in elements like the difference in programming language if necessary) is appreciated

optionally, a comparison of the utility of the obtained solutions in terms of Machine Learning performance (generalization capabilities of the SVR) is also appreciated; note that this is mandatory if the project is used for the ML course, too


The source code of the implemented approach, including any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheets/databases/text files
 Participate in the ML cup competition associated with the Machine Learning course
https://elearning.di.unipi.it/mod/folder/view.php?id=3615
with an SVR-type approach of your choice (in particular, with one or more kernels of your choice). Implement yourself a training algorithm of the SVR using an algorithm of the class of gradient projection methods [references: J. Nocedal, S. Wright, Numerical Optimization, http://www.tandfonline.com/doi/abs/10.1080/10556780512331318182], using the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine (for instance, for computing the projection onto the feasible set), but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
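To give a flavor of gradient projection, here is a minimal Python sketch on a box-constrained QP (the SVR dual also carries an equality constraint, omitted here for simplicity; the function name and data are illustrative):

```python
import numpy as np

def projected_gradient(Q, c, C, iters=500):
    """Gradient projection for  min 1/2 x'Qx + c'x  s.t. 0 <= x <= C.

    Projecting onto a box is a coordinate-wise clip, which is what makes
    gradient projection cheap per iteration on problems of this shape.
    """
    step = 1.0 / np.linalg.norm(Q, 2)    # 1/L, L = Lipschitz constant of grad
    x = np.zeros(len(c))
    for _ in range(iters):
        x = np.clip(x - step * (Q @ x + c), 0.0, C)
    return x

# Hypothetical 2-variable example; the second coordinate is projected to 0
x_opt = projected_gradient(np.array([[2.0, 0.0], [0.0, 2.0]]), np.array([-2.0, 2.0]), C=5.0)
```

The fixed step 1/L is the simplest provably convergent choice for a convex quadratic; the experiments required above would compare it against line searches or spectral (Barzilai-Borwein-style) step rules.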
The required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the optimization problem to be solved

the implemented solution method, with a discussion of all relevant details (linear algebra techniques used, algorithmic parameters and their setting, ...)

a summary of the known theoretical convergence results for the approach and a discussion about whether or not they apply to the problem at hand and why

the description of experiments aimed at finding the best algorithmic parameters for solving the problem at hand

the description of the behavior of the approach on the provided data, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time);

optionally, a comparison with the efficiency and effectiveness of available off-the-shelf tools (factoring in elements like the difference in programming language if necessary) is appreciated

optionally, a comparison of the utility of the obtained solutions in terms of Machine Learning performance (generalization capabilities of the SVR) is also appreciated; note that this is mandatory if the project is used for the ML course, too


The source code of the implemented approach, including any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheets/databases/text files
 Participate in the ML cup competition associated with the Machine Learning course
https://elearning.di.unipi.it/mod/folder/view.php?id=3615
with an SVR-type approach of your choice (in particular, with one or more kernels of your choice). Implement yourself a training algorithm of the SVR using a Frank-Wolfe-type (conditional gradient) algorithm [references: https://en.wikipedia.org/wiki/Frank%E2%80%93Wolfe_algorithm], using the programming language of your choice (C/C++, Python, Matlab) but no ready-to-use optimization libraries. For the avoidance of doubt, this means that you may use library functions (Matlab ones or otherwise) if an inner step of the algorithm requires them as a subroutine (for instance, for solving the LP within the algorithm, but do consider developing ad-hoc approaches exploiting the structure of your problem), but your final implementation should not be a single library call. Also, blatant copying from existing material, either provided by the teachers or found on the Internet, will be mercilessly crushed. Ask the teachers if you are uncertain about what this means in the context of your project (for instance, if you are using an SVD, whose details were not covered in the lectures, a full implementation will not be required).
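The structure-exploitation hint above can be made concrete: over a box, the Frank-Wolfe linear subproblem has a closed-form vertex solution, so no LP solver is needed at all. A hedged Python sketch on a box-constrained QP (again a simplification of the SVR dual; names and data are illustrative):

```python
import numpy as np

def frank_wolfe(Q, c, C, iters=200, tol=1e-10):
    """Frank-Wolfe (conditional gradient) for  min 1/2 x'Qx + c'x, 0 <= x <= C.

    The LP  min_s g's  over the box is solved in closed form: s_i = C where
    the gradient is negative, s_i = 0 elsewhere.
    """
    x = np.zeros(len(c))
    for _ in range(iters):
        g = Q @ x + c
        s = np.where(g < 0, C, 0.0)          # vertex solving the LP exactly
        d = s - x
        gap = -(g @ d)                       # Frank-Wolfe duality gap
        if gap < tol:
            break
        denom = d @ Q @ d
        gamma = 1.0 if denom <= 0 else min(gap / denom, 1.0)  # exact line search
        x = x + gamma * d
    return x

# Hypothetical 2-variable example; converges to x = (1, 0)
x_opt = frank_wolfe(np.array([[2.0, 0.0], [0.0, 2.0]]), np.array([-2.0, 2.0]), C=5.0)
```

The quantity `gap` is a certified bound on the suboptimality of the current iterate, which makes it a natural stopping criterion to report and plot in the experiments.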
The required outputs of the project are:

A PDF document (LaTeX typesetting advised but not mandatory) describing in detail:

the optimization problem to be solved

the implemented solution method, with a discussion of all relevant details (LP solution techniques, algorithmic parameters and their setting, ...)

a summary of the known theoretical convergence results for the approach and a discussion about whether or not they apply to the problem at hand and why

the description of experiments aimed at finding the best algorithmic parameters for solving the problem at hand

the description of the behavior of the approach on the provided data, evaluating the effectiveness (capability of finding good solutions) and efficiency (convergence rate and running time);

optionally, a comparison with the efficiency and effectiveness of available off-the-shelf tools (factoring in elements like the difference in programming language if necessary) is appreciated

optionally, a comparison of the utility of the obtained solutions in terms of Machine Learning performance (generalization capabilities of the SVR) is also appreciated; note that this is mandatory if the project is used for the ML course, too


The source code of the implemented approach, including any batch or auxiliary files required to run the experiments, properly documented and with README files describing the structure and use of the package

Results of the experiments in spreadsheets/databases/text files