8th MBC2 Workshop on
Models and Learning in Clustering and Classification
Catania, 25-28 August 2026
Dipartimento di Economia e Impresa, Università di Catania (Italy)
Probably Approximately Correct (PAC) learning is a theoretical framework that addresses the problem of learning a function from a set of samples in a way that is both probably correct and approximately correct. In simpler terms, PAC learning formalizes the conditions under which a learning algorithm can be expected to perform well on new, unseen data after being trained on a finite set of examples.
PAC learning is concerned with the feasibility of learning in a probabilistic sense. It asks whether there exists an algorithm that, given enough examples, will find a hypothesis that is approximately correct with high probability. The "probably" aspect refers to the confidence level of the algorithm, while the "approximately correct" aspect refers to the accuracy of the hypothesis.
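In symbols, the two quantifiers above can be written as follows (a standard textbook formulation of the realizable setting, given here for orientation; it is not taken from the tutorial materials):

```latex
% A concept class \mathcal{C} is PAC-learnable if there is an algorithm A
% and a sample size m(\varepsilon, \delta) such that, for every target
% c \in \mathcal{C}, every distribution D, and every
% \varepsilon, \delta \in (0, 1), the hypothesis h_S = A(S) satisfies
\Pr_{S \sim D^{m}}\big[\, \mathrm{err}_{D}(h_S) \le \varepsilon \,\big] \;\ge\; 1 - \delta .
% "Probably" is the confidence 1 - \delta over the random draw of the
% sample S; "approximately correct" is the error tolerance \varepsilon.
```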
The tutorial is scheduled for 25 August 2026, from 17:00 to 19:30, and 26 August 2026, from 9:30 to 12:00.
Confirmed Lecturers
Ata Kaban
Pascal Germain
Tutorial registration fee:
| Day | Time | Activity | Venue |
|---|---|---|---|
| 25 Aug, 2026 | 16:30-17:00 | Registration | Dept. of Economics and Business (Corso Italia, 55) - Room 1 |
| | 17:00-19:30 | Lecture 1 - Ata Kaban | |
| 26 Aug, 2026 | 09:30-10:00 | Welcome Coffee | Camplus Catania (Via Monsignor Ventimiglia, 184) |
| | 10:00-12:30 | Lecture 2 - Pascal Germain | |
Contents and references
Lecture 1: PAC learning and random projections (Ata Kaban)
In the first hour we begin with the high-dimensional probability tools that underpin both PAC learning and random projections. In both contexts, we develop uniform concentration inequalities for finite classes, showing that high-probability control of the maximum deviation over a family of random quantities is the key to both statistical generalisation and geometry-preserving dimensionality reduction.
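To make the geometry-preserving side concrete, here is a minimal sketch of a Gaussian random projection in the spirit of the Johnson-Lindenstrauss lemma; all dimensions and names are illustrative choices, not part of the lecture materials. Pairwise squared distances between a finite set of points survive the projection up to a small multiplicative distortion, uniformly over all pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

# n points in high dimension d, projected down to k dimensions by a random
# Gaussian matrix scaled by 1/sqrt(k).  For a finite point set, all pairwise
# distances are preserved up to a (1 +/- eps) factor with high probability.
n, d, k = 50, 5_000, 1_000
X = rng.standard_normal((n, d))
R = rng.standard_normal((k, d)) / np.sqrt(k)
Y = X @ R.T  # projected points, shape (n, k)

def pairwise_sq_dists(Z):
    """Matrix of squared Euclidean distances between the rows of Z."""
    G = Z @ Z.T
    sq = np.diag(G)
    return sq[:, None] + sq[None, :] - 2.0 * G

D_hi = pairwise_sq_dists(X)
D_lo = pairwise_sq_dists(Y)
mask = ~np.eye(n, dtype=bool)            # ignore the zero diagonal
ratios = D_lo[mask] / D_hi[mask]          # distortion per pair
print(ratios.min(), ratios.max())         # concentrated around 1
```

The uniform control over all n(n-1)/2 pairs is exactly a union bound over a finite family of concentrated random quantities, mirroring the finite-class argument in generalisation bounds.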
In the second hour, we extend this viewpoint to infinite classes by introducing complexity measures that replace counting arguments, such as covering numbers and Rademacher complexity. We conclude with a brief discussion of implications for learning from randomly projected data and related modern perspectives.
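As a toy illustration of one such complexity measure (an illustrative sketch, not lecture material), the empirical Rademacher complexity of a finite class can be estimated by Monte Carlo and compared against Massart's finite-class bound:

```python
import numpy as np

rng = np.random.default_rng(1)

# A finite hypothesis class represented by its prediction vectors on a fixed
# sample of size m: each row of H is one hypothesis' outputs in {-1, +1}.
m, num_h = 200, 32
H = rng.choice([-1.0, 1.0], size=(num_h, m))

# Monte Carlo estimate of the empirical Rademacher complexity
#   R_hat = E_sigma [ max_h (1/m) * sum_i sigma_i * h(x_i) ]
# where the sigma_i are i.i.d. uniform {-1, +1} signs.
T = 2000
sigmas = rng.choice([-1.0, 1.0], size=(T, m))
est = np.mean(np.max(sigmas @ H.T, axis=1)) / m

# Massart's lemma bounds it by sqrt(2 ln|H| / m) for a finite class.
bound = np.sqrt(2.0 * np.log(num_h) / m)
print(est, bound)
```

The estimate sits below the bound, and both shrink as the sample size m grows, which is what drives the generalisation guarantees.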
References
Lecture 2 - PAC-Bayesian Learning (Pascal Germain)
In the first hour, I will present the foundations of PAC-Bayesian learning for classification and regression problems and show how this framework reconciles the Probably Approximately Correct (PAC) framework with Bayesian inference.
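One classical bound in this spirit is the McAllester-style PAC-Bayesian theorem, stated here for orientation only (notation and constants vary across the literature): for a prior P fixed before seeing the sample and any posterior Q over hypotheses,

```latex
% With probability at least 1 - \delta over an i.i.d. sample S of size m,
% simultaneously for all posteriors Q:
R(Q) \;\le\; \widehat{R}_S(Q) \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{m}}{\delta}}{2m}} ,
% where R(Q) and \widehat{R}_S(Q) are the expected true and empirical risks
% of the Q-weighted (Gibbs) classifier, and KL is the Kullback-Leibler
% divergence between posterior and prior.
```

The KL term plays the role of the prior-to-posterior "distance", which is what connects the PAC guarantee to Bayesian inference.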
The second hour will survey several recent uses of the PAC-Bayesian framework to design "self-certified" learning algorithms, with applications to neural networks, domain adaptation, meta-learning, and generative methods.
References
For more information: team.mbc2@unict.it
Last update: 27 April 2026.