MACHINE LEARNING M - Z

Academic Year 2025/2026 - Teacher: SIMONE PALAZZO

Expected Learning Outcomes

Knowledge and Understanding

This course provides a basic understanding of Machine Learning techniques and algorithms, with a particular focus on regression, classification, and unsupervised learning models. Students will learn to evaluate model performance using error metrics and validation techniques, addressing topics such as overfitting and the bias-variance tradeoff. The course also explores regularization methods, ensemble algorithms like bagging and boosting, and neural networks.

Applied Knowledge and Understanding

The course includes practical examples and exercises that allow students to apply Machine Learning methods to real-world problems, using industry-standard software tools such as scikit-learn. Students will learn how to design and implement Machine Learning models, manage data loading and preprocessing, and validate performance using standard metrics.
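
As an illustration of this kind of workflow, the sketch below shows a minimal scikit-learn pipeline covering data loading, preprocessing, training, and evaluation. The dataset, model, and preprocessing choices are illustrative assumptions and are not part of the official course material.

    # Minimal scikit-learn workflow sketch: load data, preprocess, train, evaluate.
    # Dataset and model choices here are illustrative, not prescribed course material.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report

    # Load a toy classification dataset bundled with scikit-learn
    X, y = load_breast_cancer(return_X_y=True)

    # Hold out part of the data for evaluation
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y
    )

    # Chain preprocessing (standardization) with a linear classifier
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)

    # Evaluate on the held-out set with precision, recall, and F1-score
    print(classification_report(y_test, model.predict(X_test)))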

Making Judgments

Students will develop the ability to assess the performance of Machine Learning models, identify and mitigate overfitting and bias issues, and choose between different models and optimization techniques based on the context of the problem.

Communication Skills

Students will acquire the ability to effectively communicate the results of their analyses and models, both in written and oral form. They will be able to present the outcomes of their research and projects to both technical and non-technical audiences, using appropriate language and supporting their arguments with relevant data and visualizations.

Learning Skills

The course will encourage students to develop a critical and autonomous approach to learning, fostering curiosity about new technologies and trends in the field of Machine Learning and Deep Learning. Students will be able to continue learning independently, using academic and professional resources to stay updated on the latest innovations and techniques in the sector.

Course Structure

- Lectures, to provide theoretical and methodological knowledge of the subject.

- Practical exercises, to develop problem-solving skills and apply the design methodology.

- Laboratory sessions, to learn and practice the use of the relevant tools.

- If the course is delivered in a hybrid or remote format, modifications to the above may be required.

Required Prerequisites

Preliminary knowledge of programming and of the fundamentals of linear algebra and mathematical analysis is required.

Attendance of Lessons

Strongly recommended

Detailed Course Content

1. Basic concepts of machine learning

  • 1.1. Models and parameters (2 hours)

  • 1.2. Learning paradigms (supervised, unsupervised, self-supervised, reinforcement learning) (2 hours)

  • 1.3. Performance evaluation (precision, recall, F1-score, ROC curve and AUC, MAE, MSE, cross-validation and overfitting, bias-variance trade-off) (4 hours)


2. Supervised learning

  • 2.1. Linear regression (3 hours)

  • 2.2. Regularization (2 hours)

  • 2.3. Linear and non-linear classification (6 hours)

  • 2.4. Support vector machines (3 hours)

  • 2.5. Decision trees, bagging and boosting (6 hours)

  • 2.6. Non-parametric classifiers (3 hours)

  • 2.7. Neural networks (6 hours)


3. Unsupervised learning

  • 3.1. Clustering (3 hours)

  • 3.2. Dimensionality reduction (3 hours)

  • 3.3. Introduction to self-supervised, contrastive, and semi-supervised learning techniques (1 hour)


4. Machine learning laboratory with Python

  • 4.1. Syntax, data types, control structures, classes (4 hours)

  • 4.2. Libraries for machine learning (10 hours)
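
As an indication of the level of the laboratory sessions (item 4.2) and of the evaluation topics of item 1.3, the following is a minimal sketch of k-fold cross-validation with scikit-learn. The synthetic dataset and the decision tree classifier are illustrative assumptions, not prescribed course material.

    # Sketch of cross-validated performance evaluation (cf. items 1.3 and 4.2).
    # The synthetic dataset and the estimator are illustrative assumptions.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_validate
    from sklearn.tree import DecisionTreeClassifier

    # Generate a small synthetic binary classification problem
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    # 5-fold cross-validation with several of the metrics listed in item 1.3
    scores = cross_validate(
        DecisionTreeClassifier(max_depth=4, random_state=0),
        X, y, cv=5,
        scoring=["precision", "recall", "f1", "roc_auc"],
    )

    # Report the mean of each metric across the folds
    for metric in ["precision", "recall", "f1", "roc_auc"]:
        print(metric, scores[f"test_{metric}"].mean())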

Textbook Information

Study material provided by the teachers.

Course Planning

Subjects:

 1. Basic concepts
 2. Python laboratory: basic concepts
 3. Linear regression and optimization
 4. Laboratory on linear regression
 5. Performance evaluation
 6. Laboratory on performance evaluation
 7. Regularization
 8. Laboratory on regularization
 9. Classification
10. Laboratory on classification
11. PCA
12. Laboratory on PCA
13. Unsupervised learning
14. Laboratory on unsupervised learning
15. Decision trees and bagging/boosting
16. Laboratory on decision trees and bagging/boosting
17. Neural networks
18. Laboratory on neural networks

Learning Assessment

Learning Assessment Procedures

The exam for the course consists of a practical final exam, taken on a computer, and an oral final exam.

The practical final exam consists of solving a machine learning problem: loading the data, preprocessing and analyzing the dataset, training regression/classification models, and evaluating performance. The exam will be held on the university's multimedia lab computers.
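
Purely by way of illustration (the actual exam data and tasks will differ), a problem of this kind might resemble the sketch below, which loads a small regression dataset bundled with scikit-learn, preprocesses it, trains a regularized linear model, and reports MAE and MSE.

    # Illustrative sketch of an exam-style workflow: load, preprocess, train, evaluate.
    # The dataset, model, and metrics are assumptions for illustration only.
    from sklearn.datasets import load_diabetes
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import Ridge
    from sklearn.metrics import mean_absolute_error, mean_squared_error

    # Load a small regression dataset and hold out a test split
    X, y = load_diabetes(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    # Standardize the features and fit a regularized linear regression model
    reg = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
    reg.fit(X_train, y_train)

    # Report the regression error metrics covered in the course (MAE and MSE)
    y_pred = reg.predict(X_test)
    print("MAE:", mean_absolute_error(y_test, y_pred))
    print("MSE:", mean_squared_error(y_test, y_pred))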

The oral final exam consists of a discussion of the practical exam (describing design choices, commenting on results, and explaining the implementation) and theoretical questions.

Throughout the course, weekly exercises will be assigned, to be completed at home and submitted by specific deadlines. Submitting all exercises and receiving a positive evaluation on them grants access to the in-itinere (mid-term) exam, which consists of a practical part, similar to the final exam, and a written part with open-ended theoretical questions; it is passed only if both parts are passed. Students who pass the in-itinere exam may complete the course with a reduced practical final exam and a reduced oral final exam, both covering only the topics addressed after the in-itinere exam. In this case, the final grade is the average of the grades obtained in the in-itinere exam and in the reduced final exam.

Examples of frequently asked questions and/or exercises

- Explain the bias-variance trade-off and how it affects the generalization ability of a machine learning model.

- Describe the least squares method and how it is used to estimate the parameters of a linear regression model.

- What is the difference between precision and recall? In which scenarios might one be more important than the other?

- Compare Ridge regression and LASSO, specifying when it would be preferable to use one over the other.

- How does the kernel trick extend the capabilities of SVMs to solve non-linearly separable problems?

- How does the PCA algorithm work for dimensionality reduction, and what are its limitations?

- Explain how a boosting algorithm like AdaBoost works and what its advantages are compared to individual decision trees.

- Explain backpropagation and its role in training neural networks.

- Explain the role of activation functions in a neural network.