Section outline

  • Coding Projects

    • types of project: a coding project can be either an implementation of a method related to the course or an application of the course concepts in a new domain (e.g. NLP).
    • proposal: a preliminary proposal must be submitted for approval before starting the project; this ensures that the project is suitable for the course. The proposal must include the topic, a brief description of the project, and the resources that you intend to use (software libraries, datasets, GitHub repositories, papers, ...).
    • empirical validation: implementations of ML methods must include an empirical evaluation. Unless the implemented method is very simple, a small, minimal evaluation is sufficient. Projects focusing on the application domain and using off-the-shelf methods should provide a more extensive evaluation. More specific details will be provided based on the project proposal. As a general rule, a good project should include at least an ablation study that highlights the properties of the algorithm (e.g. is the additional regularization actually working? what happens when we change the regularization coefficient?) and a comparison against simple baselines (e.g. reservoir replay vs. a more sophisticated method; a minimal reservoir buffer is sketched after this list).
    • project submission: the student must submit the project as a zip file or a link to a GitHub repository before the exam deadline. A written report is NOT required.
    • project presentation during the oral exam: students with a coding project must briefly describe it in a short presentation (10 min, max. 10 slides or a Jupyter notebook), followed by questions on the results and the codebase. The presentation should describe:
    1. project objectives - brief recap of the method + tooling used
    2. any critical part of the implementation - were there any implementation challenges? explain them and how you solved them
    3. overview of the results - experimental setup, model selection and hyperparameter choices, results
    4. conclusion - did you expect the results? any challenges during the empirical validation?
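
    A hedged illustration of the "simple baseline" mentioned under empirical validation: a minimal sketch of a reservoir-sampling replay buffer (Vitter's Algorithm R). The ReservoirBuffer name and interface are purely illustrative, not course-provided code:

    ```python
    import random

    class ReservoirBuffer:
        """Reservoir-sampling replay buffer (illustrative sketch).

        Keeps a uniform random sample of the stream seen so far:
        the t-th item is kept with probability capacity / t.
        """

        def __init__(self, capacity, seed=0):
            self.capacity = capacity
            self.buffer = []
            self.n_seen = 0
            self.rng = random.Random(seed)

        def add(self, example):
            self.n_seen += 1
            if len(self.buffer) < self.capacity:
                self.buffer.append(example)
            else:
                # Replace a stored item with probability capacity / n_seen.
                j = self.rng.randrange(self.n_seen)
                if j < self.capacity:
                    self.buffer[j] = example

        def sample(self, batch_size):
            # Replay minibatch, drawn without replacement.
            return self.rng.sample(self.buffer, min(batch_size, len(self.buffer)))
    ```

    An ablation in the spirit of the questions above can be as simple as rerunning the same training script over a small grid of regularization coefficients (e.g. 0, 0.1, 1) and reporting the resulting accuracies alongside the baseline.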

    Examples (more will come later):

    • implementation of an online tree ensemble (HOT, SRP, ...)
    • implementation of a meta-learning algorithm not shown in class (e.g. ANIL, Reptile)
    • implementation of a class-incremental method not shown in class (e.g. IL2M, BiC)
    • implementation of a simple online CL method (DER or ER-ACE are good candidates; a minimal replay loop is sketched after this list)
    • implementation of a federated learning method (e.g. FedSGD)
    • implementation of parameter-efficient finetuning for continual learning (LoRA, adapters, ...)
    • topics related to the course content but not necessarily discussed in class can also be accepted (e.g. test-time adaptation, in-context learning, online RL, ...)
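
    Building on the buffer sketch above, a minimal and again purely illustrative PyTorch training step for vanilla experience replay (ER), the simplest relative of DER and ER-ACE, could look like this; all names here are assumptions, not a prescribed API:

    ```python
    import torch
    import torch.nn.functional as F

    def er_training_step(model, optimizer, x, y, buffer, replay_batch=32):
        """One online step: loss on the incoming batch plus a replayed batch."""
        model.train()
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        # Mix in a minibatch sampled uniformly from the replay buffer.
        replayed = buffer.sample(replay_batch)
        if replayed:
            bx = torch.stack([ex for ex, _ in replayed])
            by = torch.tensor([label for _, label in replayed])
            loss = loss + F.cross_entropy(model(bx), by)
        loss.backward()
        optimizer.step()
        # Store the incoming examples for future replay (reservoir sampling).
        for ex, label in zip(x, y):
            buffer.add((ex.detach(), int(label)))
        return loss.item()
    ```

    DER and ER-ACE refine this skeleton (logit distillation and an asymmetric cross-entropy on the incoming batch, respectively), but the replay structure stays the same.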


    Reports

    • type of project: a report is a literature review covering a small topic related to the course.
    • proposal: topic title, brief description, and a preliminary list of papers. The review should cover around 5 papers not discussed in the lectures.
    • style: the report must be a single-column document of 8 to 15 pages.
    • structure: the report must describe the methodology of the chosen papers, identify their key properties and the strong/weak points of each method, and compare them with the literature and with the topics shown in class. Comparing and contrasting the methods and identifying their strengths and weaknesses is the most important part of the report. Example report structure:
    1. Abstract
    2. Introduction
    3. Problem Setting - using the nomenclature seen in class
    4. Methodology - explain the methods in the studied papers
    5. Analysis and Comparison Between Methods - find similarities and differences between them
    6. Strengths and Weaknesses - highlight where the methods might excel or fail, using empirical and theoretical evidence from the papers as well as your intuition from the course
    7. Conclusion - recap the results of the report and highlight any open questions that you find interesting but that the papers leave unanswered (future work)
    • project submission: the report must be submitted before the exam deadline.
    • project presentation during the oral exam: a short presentation (15 min) about the report content. The suggested report structure also works well for the slides.
    Examples (more will come later):
    • a survey on concept drift detection [ref]
    • a review of online learning with linear models or online regression models (models that we have not seen in class)
    • model-based meta-learning [1]
    • meta-learned optimizers [1]
    • contrastive learning for vision
    • a review of class-incremental methods not seen in the lectures [1]
    • continual meta-learning [1,2]
    • continual learning for embedded systems [1,2]
    • federated learning methods [1]
    • federated continual learning [1]
    • parameter-efficient training of large language models [1]
    • topics related to the course content but not necessarily discussed in class can also be accepted (e.g. test-time adaptation, in-context learning, online RL, ...)


    Oral Exam - Excluded Topics

    The oral exam covers the theoretical part of the course. The following topics are excluded from the oral exam:

    • concept drift: CUSUM / Page-Hinkley
    • online classification: Decision trees (e.g. Hoeffding trees)
    • SSL: ExemplarCNN, learning from image patches, BYOL
    • CL regularization: Synaptic Intelligence
    • CL architectural: HAT

    Everything else is part of the oral exam (except for the seminars and the PyTorch code). Feel free to ask questions if you have doubts.

    Validity and Duration of the Project

    In case of a failed oral exam, the project may or may not be kept, depending on its quality; this will be evaluated on a case-by-case basis. As a general rule, projects with a positive evaluation will remain valid until the next exam session, but no later than that.