Optimal Universal Predictors Parameterized by Rényi Divergence
The authors introduce a new class of universal predictors, called α-NML, that interpolates between well-known predictors such as mixture estimators and the Normalized Maximum Likelihood (NML) estimator. The α-NML predictors are shown to be optimal under a new regret measure based on Rényi divergence, which can be interpreted as a middle ground between average-case and worst-case regret.
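To make the interpolation concrete, here is a minimal numerical sketch for a Bernoulli model family. It assumes the α-NML predictor takes the form of a normalized α-generalized mean of the likelihoods under a prior (a natural reading of the construction, not a verbatim transcription of the paper); the grid prior, function name, and parameter choices are illustrative assumptions.

```python
import numpy as np
from itertools import product

def alpha_nml(alpha, n, thetas):
    """Assumed α-NML distribution over binary sequences of length n,
    for the Bernoulli family {p_theta} with a uniform prior on `thetas`.

    Each sequence x^n gets score (mean_theta p_theta(x^n)^alpha)^(1/alpha),
    an α-generalized mean of likelihoods, then scores are normalized.
    """
    seqs = list(product([0, 1], repeat=n))
    scores = []
    for s in seqs:
        k = sum(s)  # number of ones in the sequence
        # Bernoulli likelihood: p_theta(x^n) = theta^k * (1 - theta)^(n - k)
        lik = thetas**k * (1 - thetas)**(n - k)
        # generalized mean of order alpha under the uniform prior
        scores.append(np.mean(lik**alpha)**(1.0 / alpha))
    scores = np.array(scores)
    return dict(zip(seqs, scores / scores.sum()))
```

With α = 1 the score reduces to the plain prior average of likelihoods, i.e. the Bayes mixture predictor; as α grows, the generalized mean approaches the maximum of the likelihood over the prior's support, recovering an NML-style normalized maximum likelihood on the grid. This matches the abstract's claim that α-NML interpolates between mixture estimators and NML.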