Learning Optimization Algorithms with Provable Generalization Guarantees and Convergence Trade-offs
The authors present a framework for learning optimization algorithms that comes with provable generalization guarantees, in the form of PAC-Bayesian bounds, together with an explicit trade-off between the strength of the convergence guarantee and the convergence speed — in contrast to the worst-case analyses typical of classical optimization theory.
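As a toy sketch of what "learning an optimization algorithm" means in this setting (an illustration only, not the paper's method — the function names and the quadratic problem family are invented here): parameterize an update rule, here a single gradient-descent step size, and choose its parameter to minimize average loss over a distribution of training problem instances. Generalization then asks how well the learned rule performs on fresh instances from the same distribution, which is exactly what a PAC-Bayesian bound would control.

```python
import random

def run_gd(a, alpha, x0=1.0, steps=20):
    """Run gradient descent on f(x) = a*x^2/2 and return the final loss."""
    x = x0
    for _ in range(steps):
        x -= alpha * a * x  # gradient of a*x^2/2 is a*x
    return 0.5 * a * x * x

def learn_step_size(train_curvatures, candidates):
    """Pick the step size with the lowest average final loss on training problems."""
    def avg_loss(alpha):
        return sum(run_gd(a, alpha) for a in train_curvatures) / len(train_curvatures)
    return min(candidates, key=avg_loss)

random.seed(0)
# Problem distribution: quadratics with curvature a drawn uniformly from [0.5, 2].
train = [random.uniform(0.5, 2.0) for _ in range(50)]  # training instances
test = [random.uniform(0.5, 2.0) for _ in range(50)]   # unseen instances

candidates = [0.05 * k for k in range(1, 30)]          # step sizes 0.05 ... 1.45
alpha_star = learn_step_size(train, candidates)

# The learned step size should still converge on unseen problems.
test_loss = sum(run_gd(a, alpha_star) for a in test) / len(test)
print(alpha_star, test_loss)
```

The aggressive candidates (e.g. 1.45) diverge on high-curvature instances, so the learned step size implicitly trades raw speed on easy problems for a convergence guarantee across the whole distribution — the same tension the paper makes explicit.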