Core Concepts
This work introduces a novel formulation of multi-task submodular optimization that achieves local distributional robustness within a neighborhood of a reference distribution, which assigns an importance score to each task.
Summary
The authors approach multi-task submodular optimization from the perspective of local distributional robustness. They add to the standard multi-task objective a regularization term based on the relative entropy to a reference distribution, and show that the resulting problem is equivalent to the maximization of another submodular function. This equivalence allows efficient optimization with standard greedy selection methods.
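The paper's exact formulation is not reproduced in this summary, but a standard identity (the Gibbs variational principle) illustrates why relative-entropy regularization can yield an equivalent closed-form objective. The symbols below are illustrative assumptions, not the authors' notation: $f_i$ is the submodular objective of task $i$, $q$ the reference distribution over tasks, $\mu > 0$ the regularization strength, and $\Delta$ the probability simplex:

$$
\min_{p \in \Delta} \; \sum_i p_i f_i(S) + \frac{1}{\mu}\, D_{\mathrm{KL}}(p \,\|\, q)
\;=\; -\frac{1}{\mu} \log \sum_i q_i \, e^{-\mu f_i(S)}
$$

The right-hand side is a soft minimum over task values: as $\mu \to 0$ it approaches the $q$-weighted average of the $f_i(S)$, and as $\mu \to \infty$ it approaches $\min_i f_i(S)$, trading off average performance against worst-case robustness. Under this illustrative identity, the "another submodular function" mentioned above would be this soft-min objective.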
The key highlights and insights are:
- The authors introduce a novel formulation that incorporates a reference distribution to assign importance scores to each task, and a regularization term based on the relative entropy to this reference distribution.
- They demonstrate that this novel formulation is equivalent to the maximization of another submodular function, which can be efficiently optimized using standard greedy methods.
- The authors analyze different statistical distances, such as the L-infinity norm and the relative entropy, as the regularization term, and establish their theoretical properties.
- They propose an application of the relative entropy-regularized approach to the problem of online submodular optimization, where the goal is to reuse the same subset of elements over multiple time steps.
- The authors validate their theoretical results through numerical experiments, including a sensor selection problem for a low Earth orbit satellite constellation and an image summarization task using neural networks.
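The greedy selection mentioned above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the soft-min closed form, the coverage-style task objectives, and all names (`soft_min`, `greedy`, `mu`, `q`) are assumptions introduced here for the example.

```python
import math

def soft_min(fs, q, mu, S):
    """Illustrative relative-entropy-regularized objective (assumed form):
    g(S) = -(1/mu) * log( sum_i q_i * exp(-mu * f_i(S)) ).
    Interpolates between the q-weighted average of the task values
    (mu -> 0) and their minimum (mu -> infinity)."""
    return -(1.0 / mu) * math.log(
        sum(qi * math.exp(-mu * f(S)) for f, qi in zip(fs, q))
    )

def greedy(fs, q, mu, ground, k):
    """Standard greedy selection: repeatedly add the element with the
    largest marginal gain of the regularized objective."""
    S = set()
    for _ in range(k):
        base = soft_min(fs, q, mu, S)
        # sorted() makes tie-breaking deterministic across runs
        best = max(sorted(ground - S),
                   key=lambda e: soft_min(fs, q, mu, S | {e}) - base)
        S.add(best)
    return S

# Toy example: two coverage tasks over a 5-element ground set.
f1 = lambda S: len(S & {0, 1})     # task 1 is covered by elements 0 and 1
f2 = lambda S: len(S & {2, 3, 4})  # task 2 is covered by elements 2, 3, 4
selected = greedy([f1, f2], q=[0.5, 0.5], mu=1.0,
                  ground={0, 1, 2, 3, 4}, k=2)
print(selected)  # picks one element per task: {0, 2}
```

With a uniform reference distribution and moderate `mu`, the soft-min penalizes neglecting either task, so the greedy picks spread coverage across tasks rather than saturating one of them.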