The paper addresses the problem of independence testing: determining whether two random variables X and Y are statistically independent. Traditional methods often make strong parametric assumptions, which can limit their applicability.
The authors propose a scheme for selecting the kernels used in an independence test based on the Hilbert-Schmidt Independence Criterion (HSIC). The key idea is to choose the kernels by maximizing an estimate of the asymptotic test power, which the authors prove approximately maximizes the true power of the test.
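To make the underlying statistic concrete, here is a minimal numpy sketch of the standard (biased) V-statistic estimate of HSIC with Gaussian kernels. The function names and the fixed bandwidths are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def rbf_kernel(X, bandwidth):
    """Gaussian (RBF) Gram matrix from pairwise squared distances."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * bandwidth**2))

def hsic_biased(X, Y, bw_x=1.0, bw_y=1.0):
    """Biased V-statistic estimate of HSIC:
    trace(K H L H) / n^2, with H the centering matrix."""
    n = X.shape[0]
    K = rbf_kernel(X, bw_x)
    L = rbf_kernel(Y, bw_y)
    H = np.eye(n) - np.ones((n, n)) / n  # centers the Gram matrices
    return np.trace(K @ H @ L @ H) / n**2

# Illustrative check: a dependent pair should score higher than an independent one.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
y_dep = x + 0.1 * rng.normal(size=(200, 1))   # strongly dependent on x
y_ind = rng.normal(size=(200, 1))             # independent of x
```

The statistic is nonnegative and near zero for independent samples; the kernel choice (here, the bandwidths) is exactly what the paper's power-maximization scheme is designed to select.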
Specifically, the authors use deep kernels parameterized by neural networks, which can capture complex dependencies between X and Y. They show that optimizing the parameters of these deep kernels to maximize the estimated test power leads to much more powerful tests compared to standard kernel choices, especially in high-dimensional settings.
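The paper optimizes deep-kernel parameters against a principled estimate of asymptotic test power. As a simplified stand-in, the sketch below selects a Gaussian bandwidth (rather than neural-network weights) on one half of the data using a crude power proxy, the statistic divided by its spread under permutation, then runs a permutation test on the held-out half so selection does not invalidate the test's level. The proxy and all function names are assumptions for illustration, not the authors' criterion:

```python
import numpy as np

def gauss_hsic(X, Y, bw):
    """Biased HSIC estimate with one shared Gaussian bandwidth (illustrative)."""
    def gram(Z):
        sq = np.sum(Z**2, axis=1)
        return np.exp(-(sq[:, None] + sq[None, :] - 2 * Z @ Z.T) / (2 * bw**2))
    n = len(X)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(gram(X) @ H @ gram(Y) @ H) / n**2

def power_proxy(X, Y, bw, n_perm=20, seed=0):
    """Crude power surrogate: how far the statistic sits above its null spread.
    A simplified stand-in for the paper's asymptotic-power estimate."""
    rng = np.random.default_rng(seed)
    stat = gauss_hsic(X, Y, bw)
    null = [gauss_hsic(X, Y[rng.permutation(len(Y))], bw) for _ in range(n_perm)]
    return (stat - np.mean(null)) / (np.std(null) + 1e-12)

def split_select_and_test(X, Y, bandwidths, n_perm=200, seed=0):
    """Select the kernel on one half, permutation-test on the other half."""
    rng = np.random.default_rng(seed)
    n = len(X) // 2
    Xs, Ys, Xt, Yt = X[:n], Y[:n], X[n:], Y[n:]
    bw = max(bandwidths, key=lambda b: power_proxy(Xs, Ys, b))
    stat = gauss_hsic(Xt, Yt, bw)
    null = [gauss_hsic(Xt, Yt[rng.permutation(len(Yt))], bw)
            for _ in range(n_perm)]
    pval = (1 + sum(s >= stat for s in null)) / (1 + n_perm)
    return bw, pval
```

In the paper, the selection step trains a neural network defining the kernel by gradient ascent on the estimated power; the data-splitting structure shown here is the same.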
The authors provide theoretical analysis showing that the learned deep kernels generalize well and lead to powerful tests asymptotically. They also demonstrate the effectiveness of their approach through extensive experiments on synthetic datasets, where the deep kernel-based tests significantly outperform various baselines.
by Nathaniel Xu... at arxiv.org, 09-12-2024
https://arxiv.org/pdf/2409.06890.pdf