Core Concepts
This work provides new approximation guarantees for adaptive combinatorial maximization that subsume and strengthen previous results. It introduces a new policy parameter, the maximal gain ratio, which is less restrictive than the greedy approximation ratio and can therefore yield stronger guarantees. The guarantees support non-greedy policies, utility functions that are only nearly adaptive submodular, and both maximization under a cardinality constraint and minimum-cost coverage objectives.
Abstract
The paper studies adaptive combinatorial maximization, a core challenge in machine learning with applications in active learning and other domains. In this problem, elements with hidden states are selected sequentially, and past observations inform each subsequent selection. The goal is to obtain high utility while selecting few elements.
The authors provide new comprehensive approximation guarantees that subsume and strengthen previous results. They introduce a new policy parameter, the maximal gain ratio, which is shown to be strictly less restrictive than the greedy approximation ratio used in prior work. The guarantees support policies that are not necessarily greedy, utility functions that are not necessarily adaptive submodular, and both maximization under a cardinality constraint and minimum-cost coverage objectives.
The authors also provide an improved approximation guarantee for a modified prior, which is crucial for obtaining active learning guarantees that do not depend on the smallest probability in the prior.
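To make the setting concrete, the following is a minimal sketch of a greedy adaptive selection policy under a cardinality constraint, in the spirit of the policies the guarantees apply to. The stochastic-coverage utility, the prior, and all identifiers here are illustrative assumptions, not the paper's actual model: each element covers a set of items only if its hidden state turns out favorable, and the policy picks the element with the largest expected marginal coverage given what has been observed so far.

```python
def adaptive_greedy(ground_set, coverage, prior, true_state, k):
    """Greedy adaptive policy for a toy stochastic-coverage utility.

    ground_set: iterable of element names.
    coverage:   dict element -> set of items it covers if it "succeeds".
    prior:      dict element -> probability that the element succeeds
                (the policy only knows this prior, not the true state).
    true_state: dict element -> bool, the hidden state revealed only
                after the element is selected (illustrative assumption).
    k:          cardinality constraint (budget on selections).

    Returns the selected elements (in order) and the items covered.
    """
    covered = set()
    chosen = []
    remaining = set(ground_set)
    for _ in range(k):
        # Expected marginal gain of e given observations so far.
        def expected_gain(e):
            return prior[e] * len(coverage[e] - covered)

        best = max(remaining, key=expected_gain)
        if expected_gain(best) == 0:
            break  # no element can add expected utility
        chosen.append(best)
        remaining.remove(best)
        if true_state[best]:  # hidden state observed after selection
            covered |= coverage[best]
    return chosen, covered


# Tiny deterministic example (all values hypothetical):
coverage = {"a": {1, 2}, "b": {2, 3}, "c": {3}}
prior = {"a": 0.9, "b": 0.5, "c": 0.8}
true_state = {"a": True, "b": False, "c": True}
chosen, covered = adaptive_greedy("abc", coverage, prior, true_state, k=2)
```

In the example, the policy first picks "a" (expected gain 1.8), observes its success, and then prefers "c" over "b" because "b"'s only uncovered item has lower expected gain under the prior. The maximal gain ratio discussed above relaxes exactly this kind of per-step greedy optimality requirement.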
Stats
No key metrics or figures are reported in support of the authors' main arguments.
Quotes
No striking quotes support the authors' main arguments.