7 May 2025, 14:00 - 15:30
Rényi Institute, Main Hall (Nagyterem)
Event type: seminar
Organized by: the Institute
Probability Theory Seminar

Description

In this presentation, we will focus on a method for constructing prediction sets in a federated learning setting where only one round of communication between the agents and the server is allowed (one-shot). More precisely, by defining a particular estimator called the quantile-of-quantiles, we will prove that for any distribution it is possible to find an algorithm that produces marginally (and training-conditionally) valid prediction sets. We will also prove upper bounds on the coverage of all proposed algorithms when the nonconformity scores are almost surely distinct. For algorithms with training-conditional guarantees, these bounds are of the same order of magnitude as those of the centralized case. Remarkably, this implies that the one-shot federated learning setting entails no significant loss compared to the centralized case. Finally, over a wide range of experiments, we will show that we obtain prediction sets whose coverage and length are very similar to those obtained in a centralized setting, making our method particularly well-suited to conformal prediction in a one-shot federated learning setting.
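
To make the one-shot protocol concrete, below is a minimal Python sketch of a quantile-of-quantiles aggregation. It is an illustrative simplification, not the exact estimator or the specific quantile levels analyzed in the talk: each agent sends a single empirical quantile of its nonconformity scores to the server, which takes a quantile of those quantiles and uses the result as the threshold defining the prediction set. The levels local_level and server_level and all helper names are placeholders chosen for illustration.

import numpy as np

def quantile_of_quantiles(scores_per_agent, local_level, server_level):
    # Each agent computes one empirical quantile of its own nonconformity
    # scores and sends only that number to the server (a single
    # communication round, i.e. the one-shot setting).
    local_quantiles = [np.quantile(s, local_level, method="higher")
                       for s in scores_per_agent]
    # The server aggregates by taking a quantile of the received quantiles.
    return np.quantile(local_quantiles, server_level, method="higher")

def prediction_set(candidate_scores, threshold):
    # Keep every candidate label whose nonconformity score does not
    # exceed the aggregated threshold.
    return [label for label, score in candidate_scores.items()
            if score <= threshold]

# Toy usage with synthetic absolute-residual scores on 5 agents
# (levels 0.9/0.9 are arbitrary placeholders).
rng = np.random.default_rng(0)
agent_scores = [np.abs(rng.normal(size=200)) for _ in range(5)]
threshold = quantile_of_quantiles(agent_scores, local_level=0.9, server_level=0.9)
print(prediction_set({"y1": 0.3, "y2": 1.1, "y3": 2.5}, threshold))

In the centralized analogue, the server would instead see all nonconformity scores and take a single empirical quantile; the results presented in the talk quantify how little coverage is lost by replacing that with the two-level aggregation above.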