Università degli Studi di Napoli "Parthenope"

Teaching schedule

Academic year: 
Belonging course: 
Disciplinary sector: 
Year of study: 
First Semester
Hours of front activity: 


If any foreign students attend the course, it will be held in English.

Course description

The aim of the course is to provide the theoretical and practical foundations of Machine Learning and Deep Learning methods.

Knowledge and ability to understand
The student must know the techniques of calculus, linear algebra, probability and statistics taught in B.Sc. Computer Science courses. In this way he or she will be able to understand the theoretical foundations of machine learning presented in the course.
Finally, the student must be able to read papers written in English, so as to make use of the resources of the university library.

Application Capability
The student will be able to demonstrate skills in the development and analysis of algorithms. Finally, he or she must be able to use the main software libraries to implement the machine learning and deep learning methods presented in the course.

Autonomy of Judgement
The student must be able to evaluate independently the efficacy and the efficiency of a machine learning or deep learning application in a real-world domain. He or she should have adequate autonomy and critical judgement in using library resources.

Communication Skills
The student must be able to write a technical report in English and to prepare a presentation in English as well.

Ability to learn
At the end of the course, the student must have acquired a self-learning capability, namely the ability to use autonomously both library resources and those offered by Google Scholar or ResearchGate.


Basics of Object-Oriented Programming, Calculus, Linear Algebra, Probability and Statistics, Numerical Calculus


Machine Learning (part I)

Introduction to the Course
Artificial Intelligence vs. Machine Learning
Taxonomy of Machine Learning
Bayesian Decision Theory (6h)
Neural Networks: Perceptron, Multi-Layer Perceptron (2h)
Hopfield Network, Boltzmann machine, Restricted Boltzmann machine (2h)
Statistical Learning Theory: Bias-Variance Dilemma, VC dimension, Model Selection (Validation Set, Cross-validation, AIC, BIC), Minimum Description Length (4h)
Curse of Dimensionality, Dimensionality Reduction, Intrinsic Dimensionality of Data, Principal Component Analysis (PCA), Multi-Dimensional Scaling (MDS): Sammon Algorithm (8h)
Kernel Methods: SVM, SVM for Regression, One-Class SVM, Kernel PCA (8h)
Clustering: Hierarchical Clustering, Partitional Clustering, Expectation-maximization (EM) Algorithm, K-means, Gaussian Mixtures (8h)
Ensemble Methods: Bagging, Boosting, AdaBoost, Decision Trees, CART, Random Forests (8h)
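As a flavor of the laboratory work in the clustering unit, K-means can be sketched in a few lines of NumPy; this is a minimal illustration (function names and test data are ours), not the course implementation:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal K-means: alternate assignment and centroid-update steps."""
    rng = np.random.default_rng(seed)
    # Initialize centroids by picking k distinct data points at random.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid moves to the mean of its assigned points.
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break  # converged: assignments no longer change
        centroids = new_centroids
    return labels, centroids

# Two well-separated Gaussian blobs: K-means should recover them.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
labels, centroids = kmeans(X, k=2)
```

In the course, the same algorithm is studied as a special case of the EM algorithm for Gaussian mixtures with hard assignments.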

Machine Learning (part II)

Introduction to the Course
Artificial Intelligence vs. Deep Learning
Deep Learning vs. Machine Learning
Taxonomy of Deep Learning
Foundations of Neural Networks: Biological Neuron vs. Artificial Neuron, Hebb's rule (2 h)
Self-organizing Neural Networks: Oja's and Sanger's rules, Neural Networks for Principal and Independent Component Analysis (2 h)
Neural networks based on competition mechanisms: Kohonen's Maps and Self Organizing Maps, Adaptive Resonance Theory (2 h)
Supervised Neural Networks: Single-Layer Networks, Fisher's linear discriminant, Multi-Layer Perceptron (MLP), MLP and universal approximation properties, Kolmogorov theorem, Back-propagation algorithm, MLP vs. Radial Basis Functions, Error functions, MLP and the Vapnik–Chervonenkis dimension (8 h)
Supervised Neural Networks and optimization algorithms: Gradient descent, conjugate gradient, scaled conjugate gradient, Newton's method, Levenberg-Marquardt algorithm, constrained optimization (4 h)
Pre-processing and extraction of features: Whitening, criteria for selecting features (2 h)
Learning and generalization: bias-variance dilemma, regularization, neural networks committee, mixture of experts, cross-validation (4 h)
Deep Neural Networks: Architectures, learning algorithms, Convolutional Neural Networks, Recurrent and Recursive Neural Networks, Echo State Networks, Long Short-Term Memory (16 h)
Methodologies for Deep Learning: Autoencoders, Sparse Coding, Dictionary Learning, Representation Learning, Generative Models, Generative Adversarial Networks (8 h)
Validation methods: Confusion matrix and indices, ROC curve, statistical significance (2 h)
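To connect the optimization unit with practice, plain gradient descent (the simplest of the algorithms listed above) can be sketched on a convex quadratic; the objective function and step size here are illustrative choices of ours, not part of the syllabus:

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, n_steps=200):
    """Plain gradient descent: repeatedly step against the gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x - lr * grad(x)
    return x

# Convex quadratic f(x, y) = (x - 3)^2 + (y + 1)^2, minimum at (3, -1).
grad = lambda x: 2 * (x - np.array([3.0, -1.0]))
x_min = gradient_descent(grad, x0=[0.0, 0.0])
```

For training neural networks the same loop is applied to the error function, with the gradient supplied by back-propagation; the course then refines it into conjugate-gradient and second-order variants.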

The course aims to provide the theoretical and practical foundations of Machine Learning and Deep Learning.

Teaching Methods

Teaching is delivered through lectures and laboratory sessions, together with seminars organized by the students themselves on topics of their interest. In e-learning mode, it will be possible to use video lessons recorded by the students themselves in class.


T. Hastie, R. Tibshirani, J. Friedman, “The Elements of Statistical Learning: Data Mining, Inference, and Prediction”, 2nd Edition, 2008, Springer, ISBN: 978-0387848570
M. Gori, “Machine Learning: A Constraint-Based Approach”, 2017, Morgan Kauffman, ISBN: 978-0081006597
F. Camastra, A. Vinciarelli, “Machine Learning for Audio, Image and Video Analysis: Theory and Applications”, 2nd Edition, 2016, Springer Verlag, ISBN: 978-1447168409
J. Shawe-Taylor, N. Cristianini, “Kernel Methods for Pattern Analysis”, 2004, Cambridge University Press, ISBN: 978-0521813976
R.O. Duda, P.E. Hart, D.G. Stork, "Pattern Classification", 2nd Edition, 2000, Wiley and Sons
C.M. Bishop, "Pattern Recognition and Machine Learning", 2006, Springer
I. Goodfellow, Y. Bengio, A. Courville, "Deep Learning", 2016, MIT Press
A. Géron, "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow", 2019, O'Reilly

Learning assessment

In order to pass the exam, the student must prepare a short dissertation, written in English, on a machine learning or deep learning topic.
In the oral exam, the student will discuss the dissertation (50% of the mark) and must show that he or she has learnt the machine learning and deep learning notions covered during the course (50% of the mark).

More information