Core Concepts
Orthogonal Bootstrap is a new method that reduces the computational cost of Bootstrap resampling for simulating input uncertainty, while preserving the accuracy and statistical guarantees of the original Bootstrap approach.
Abstract
The paper introduces Orthogonal Bootstrap, a method for efficiently simulating input uncertainty via Bootstrap resampling. The key insights are:
Bootstrap resampling can be computationally expensive, especially when the sample size is large, because achieving reasonable accuracy requires a large number of Monte Carlo replications, each of which re-evaluates the statistic of interest on a full resample.
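To make the cost concrete, here is a minimal sketch of standard Bootstrap variance estimation. The statistic (the squared sample mean), the sample size, and the replication count are illustrative choices, not taken from the paper; the point is that the statistic is recomputed B times.

```python
# Standard Bootstrap: estimate Var[T] for a statistic T by recomputing T
# on B independent resamples. Cost grows linearly in B.
import numpy as np

rng = np.random.default_rng(0)

def statistic(x):
    # An illustrative smooth nonlinear functional: the squared sample mean.
    return np.mean(x) ** 2

n = 500
data = rng.normal(loc=1.0, scale=1.0, size=n)

B = 2000  # Monte Carlo replications; accuracy improves only slowly in B
reps = np.empty(B)
for b in range(B):
    resample = rng.choice(data, size=n, replace=True)  # sample with replacement
    reps[b] = statistic(resample)

boot_var = reps.var(ddof=1)  # Bootstrap estimate of Var[statistic]
print(boot_var)
```

Every one of the B loop iterations pays the full cost of evaluating the statistic, which is what Orthogonal Bootstrap aims to reduce.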
Orthogonal Bootstrap decomposes the target being simulated into two parts: a non-orthogonal part, which has a closed-form expression known as the Infinitesimal Jackknife, and an orthogonal part, which is easier to simulate.
By separately treating the non-orthogonal and orthogonal parts, Orthogonal Bootstrap can significantly reduce the number of required Monte Carlo replications compared to the standard Bootstrap method, while maintaining the same accuracy and statistical guarantees.
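The decomposition can be sketched in the style of a control variate: the linear (Infinitesimal Jackknife) part of the bootstrapped statistic has a closed-form variance, so Monte Carlo effort is spent only on the small orthogonal remainder. This is a heavily simplified illustration of the idea, not the paper's exact estimator; the statistic (squared sample mean), its influence function, and all sizes are illustrative assumptions.

```python
# Sketch of the Orthogonal Bootstrap idea for variance estimation:
# closed-form variance for the linear part + a small simulation for the rest.
import numpy as np

rng = np.random.default_rng(1)

n = 500
data = rng.normal(loc=1.0, scale=1.0, size=n)
mu = data.mean()

def statistic(x):
    return np.mean(x) ** 2

# Influence function of T(P) = mean(P)^2 at the empirical distribution:
# IF(x) = 2 * mu * (x - mu). Its Bootstrap variance is available in
# closed form (the Infinitesimal Jackknife part).
IF = 2.0 * mu * (data - mu)
var_ij = np.sum(IF ** 2) / n ** 2  # closed-form non-orthogonal part

B = 50  # far fewer replications than standard Bootstrap would need
T = np.empty(B)  # full bootstrapped statistic
L = np.empty(B)  # its linear (Infinitesimal Jackknife) approximation
for b in range(B):
    counts = rng.multinomial(n, np.full(n, 1.0 / n))  # resample weights
    T[b] = statistic(np.repeat(data, counts))
    L[b] = statistic(data) + np.dot(IF, counts - 1) / n

# Only the orthogonal correction T - L is simulated; since it is small,
# its Monte Carlo error is small even with few replications.
var_orth = var_ij + (T.var(ddof=1) - L.var(ddof=1))
print(var_ij, var_orth)
```

Because T and L are evaluated on the same resamples and differ only by a higher-order remainder, the simulated correction has far less Monte Carlo noise than simulating T alone.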
Theoretically, the authors show that the simulation error of Orthogonal Bootstrap scales as O(1/n^(2+α)), compared to O(1/n^(1+α)) for standard Bootstrap, where n is the sample size and α is a parameter tied to the number of Monte Carlo replications.
Empirically, the authors demonstrate that Orthogonal Bootstrap outperforms standard Bootstrap in both debiasing and variance estimation tasks when the number of Monte Carlo replications is limited.