Key concepts
Ensembles improve with convex loss functions.
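One way to see why (a minimal sketch, assuming the ensemble aggregates by averaging member predictions and that the loss is convex in the prediction) is Jensen's inequality:

```latex
% Jensen's inequality for the averaged prediction of n members (sketch):
\[
  \ell\!\left(\frac{1}{n}\sum_{i=1}^{n} f_i(x),\; y\right)
  \;\le\;
  \frac{1}{n}\sum_{i=1}^{n} \ell\bigl(f_i(x),\, y\bigr)
  \qquad \text{for a loss } \ell \text{ convex in its first argument,}
\]
% i.e. the loss of the averaged prediction is never worse than the
% average loss of the individual ensemble members.
```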
Abstract
The article examines how the performance of ensemble methods depends on the type of loss function used. It discusses how ensembles can improve with convex loss functions and deteriorate with nonconvex ones. The study is illustrated through medical prediction and movie rating experiments, showing the impact of different ensemble sizes on accuracy and cross-entropy. Various aggregation schemes are analyzed, emphasizing the importance of averaging predictions for ensemble success.
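As a concrete, hypothetical illustration of the averaging scheme and the two metrics mentioned above, the sketch below averages the class-probability outputs of several models and evaluates accuracy and cross-entropy; the arrays are made-up placeholders, not the paper's data.

```python
# Hypothetical illustration of averaging member predictions (not the paper's code).
import numpy as np

# Made-up class-probability outputs of 3 models on 4 examples (2 classes).
member_probs = np.array([
    [[0.7, 0.3], [0.4, 0.6], [0.9, 0.1], [0.2, 0.8]],   # model 1
    [[0.6, 0.4], [0.3, 0.7], [0.8, 0.2], [0.4, 0.6]],   # model 2
    [[0.8, 0.2], [0.5, 0.5], [0.7, 0.3], [0.3, 0.7]],   # model 3
])
y_true = np.array([0, 1, 0, 1])                          # made-up labels

ensemble_probs = member_probs.mean(axis=0)               # average the predictions
accuracy = np.mean(ensemble_probs.argmax(axis=1) == y_true)
cross_entropy = -np.mean(np.log(ensemble_probs[np.arange(len(y_true)), y_true]))

print(f"ensemble accuracy: {accuracy:.2f}, cross-entropy: {cross_entropy:.3f}")
```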
Statistics
In this setting, we show that ensembles are getting better all the time if, and only if, the considered loss function is convex.
For convex loss functions and exchangeable predictions (Section 3), the expected loss is a non-increasing function of the number of ensemble members (Theorem 2).
For nonconvex loss functions and independent and identically distributed (i.i.d.) predictions (Section 4), we show that good ensembles keep getting better asymptotically, while bad ensembles keep getting worse.
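Both regimes above can be checked numerically. The following is a minimal Monte Carlo sketch (with hypothetical Beta-distributed member probabilities, not the paper's experiments): i.i.d. probability predictions are averaged, and the expected loss of the averaged prediction is tracked as the ensemble grows, once under cross-entropy (convex) and once under the 0-1 loss (nonconvex).

```python
# Monte Carlo sketch of expected loss vs. ensemble size (hypothetical setup,
# not the paper's code): members output i.i.d. probabilities for a positive
# example (true label y = 1), and the ensemble averages them.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 20_000          # Monte Carlo repetitions
max_members = 20           # largest ensemble size considered

# Hypothetical member predictions drawn from Beta distributions:
good = rng.beta(6, 4, size=(n_trials, max_members))   # mean 0.6 -> "good" members
bad = rng.beta(4, 6, size=(n_trials, max_members))    # mean 0.4 -> "bad" members

def expected_losses(preds):
    """Expected cross-entropy (convex) and 0-1 loss (nonconvex) of the
    averaged prediction, for every ensemble size from 1 to max_members."""
    ce, zero_one = [], []
    for n in range(1, preds.shape[1] + 1):
        p_bar = preds[:, :n].mean(axis=1)        # average the first n predictions
        ce.append(np.mean(-np.log(p_bar)))       # cross-entropy for y = 1
        zero_one.append(np.mean(p_bar < 0.5))    # misclassification for y = 1
    return np.array(ce), np.array(zero_one)

ce_good, err_good = expected_losses(good)
ce_bad, err_bad = expected_losses(bad)

# Convex loss: expected cross-entropy should not increase with ensemble size.
print("cross-entropy (good members):", np.round(ce_good[[0, 4, 19]], 3))
# Nonconvex loss: good ensembles improve, bad ensembles deteriorate as n grows.
print("0-1 loss (good members):     ", np.round(err_good[[0, 4, 19]], 3))
print("0-1 loss (bad members):      ", np.round(err_bad[[0, 4, 19]], 3))
```

With these hypothetical distributions, the printed cross-entropy should decrease as the ensemble grows, while the 0-1 loss should shrink toward 0 for the good members and grow toward 1 for the bad ones.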
Quotes
"We illustrate our results on a medical prediction problem (diagnosing melanomas using neural nets) and a “wisdom of crowds” experiment (guessing the ratings of upcoming movies)." - Pierre-Alexandre Mattei and Damien Garreau.
"Monotonicity beyond ensembles." - Authors.
"The empirical successes of ensembles seem to indicate that the more models are being aggregated, the better the ensemble is." - Grinsztajn et al.