Core Concepts
This paper presents the first (1+ε)-approximation algorithm for the min-sum subset convolution problem, running in time independent of the largest input value M.
Abstract
The paper studies the min-sum subset convolution problem, a fundamental tool in parameterized algorithms whose exact evaluation has a prohibitively expensive running time of O(3^n). Previous work has addressed this by embedding the min-sum semiring into the sum-product ring, but this introduces a dependence on the largest input value M in the running time.
The authors propose a (1+ε)-approximation algorithm for min-sum subset convolution that runs in Õ(2^(3n/2)/√ε) time, independent of M. This is achieved by:
1. Providing an exact algorithm for the min-max subset convolution, running in Õ(2^(3n/2)) time; this generalizes Kosaraju's algorithm for min-max sequence convolution.
2. Establishing an equivalence between exact min-max subset convolution and (1+ε)-approximate min-sum subset convolution, extending the framework of Bringmann et al.
3. Designing an improved (1+ε)-approximation algorithm for min-sum subset convolution by adapting the techniques of Bringmann et al. to the subset convolution setting.
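The min-max subset convolution used in step one can be pinned down by its definition. The sketch below is a naive O(3^n) evaluation for illustration only, not the paper's Õ(2^(3n/2)) algorithm; the function name and list-based encoding are assumptions made for this example.

```python
def min_max_subset_convolution(f, g):
    # Naive evaluation of h[S] = min over T ⊆ S of max(f[T], g[S \ T]).
    # f and g are lists of length 2^n indexed by subset bitmask.
    # Enumerating all submasks T of every S costs O(3^n) in total.
    n_states = len(f)
    h = [float("inf")] * n_states
    for s in range(n_states):
        t = s
        while True:
            h[s] = min(h[s], max(f[t], g[s ^ t]))
            if t == 0:
                break
            t = (t - 1) & s  # next submask of s
    return h
```

The `(t - 1) & s` step is the standard submask-enumeration trick; it visits every submask of `s` exactly once, including the empty set.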
The authors then show how this improved approximation algorithm can be used to obtain (1+ε)-approximation schemes for several NP-hard problems that rely on min-sum subset convolution, such as minimum-cost k-coloring and prize-collecting Steiner tree, with running times independent of M.
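One intuition for why approximation can shed the dependence on M (a hedged illustration, not the paper's actual reduction): rounding every positive input up to the nearest power of (1+ε) reduces the number of distinct values from up to M to O(log_{1+ε} M), while any min-plus combination of rounded inputs exceeds the true optimum by at most a (1+ε) factor. The helper below is hypothetical and assumes positive integer inputs.

```python
def round_up(x, eps):
    # Hypothetical helper: round x (assumed >= 1) up to the
    # nearest power of (1 + eps). The result v satisfies
    # x <= v < (1 + eps) * x, so sums of rounded values
    # over-estimate the true sum by at most a (1 + eps) factor.
    v = 1.0
    while v < x:
        v *= 1.0 + eps
    return v
```

Since each rounded value lies in [x, (1+ε)x), the minimum over sums of two rounded entries is sandwiched between the exact min-sum value and (1+ε) times it.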
Stats
The naïve algorithm for min-sum subset convolution takes O(3^n) time.
The fastest known algorithm for min-sum subset convolution runs in Õ(2^n · M) time, where M is the largest input value.
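The O(3^n) naive baseline cited above can be sketched directly (illustrative only; names and encoding are assumptions made here): for each set S, scan every partition of S into T and S \ T.

```python
def min_sum_subset_convolution(f, g):
    # Naive evaluation of h[S] = min over T ⊆ S of f[T] + g[S \ T].
    # f and g are lists of length 2^n indexed by subset bitmask.
    # Summed over all S, submask enumeration costs O(3^n).
    n_states = len(f)
    h = [float("inf")] * n_states
    for s in range(n_states):
        t = s
        while True:
            h[s] = min(h[s], f[t] + g[s ^ t])
            if t == 0:
                break
            t = (t - 1) & s  # next submask of s
    return h
```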
Quotes
"Is there a faster-than-naïve (1 + ε)-approximation algorithm for the min-sum subset convolution problem with running time independent of M?"
"Are there faster-than-naïve (1 + ε)-approximation schemes for convolution-like NP-hard problems with running time independent of M?"