Core Concepts
We present an O(1)-round fully-scalable deterministic algorithm in the massively parallel computation (MPC) model for computing the min-plus multiplication of subunit-Monge matrices. This result enables an O(log n)-round fully-scalable MPC algorithm for solving the exact longest increasing subsequence (LIS) problem, substantially improving the previously known O(log^4 n)-round algorithm.
Abstract
The paper presents an efficient massively parallel algorithm for computing the min-plus multiplication of subunit-Monge matrices, a fundamental operation with applications to the longest increasing subsequence (LIS) problem.
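For concreteness, the min-plus (tropical) product of two n x n matrices A and B is the matrix C with C[i][j] = min_k (A[i][k] + B[k][j]), and a matrix is Monge if every pair of rows i < i' and columns j < j' satisfies A[i][j] + A[i'][j'] <= A[i][j'] + A[i'][j]; Monge matrices are closed under min-plus multiplication. Below is a minimal Python sketch of the explicit operation, for exposition only (cubic time, dense storage; the function names are ours, and the paper works with implicit O(n)-size representations instead):

```python
def min_plus_product(A, B):
    """Min-plus product: C[i][j] = min over k of A[i][k] + B[k][j]."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_monge(A):
    """Monge condition checked on every adjacent 2x2 submatrix, which
    implies it for all 2x2 submatrices."""
    n = len(A)
    return all(A[i][j] + A[i + 1][j + 1] <= A[i][j + 1] + A[i + 1][j]
               for i in range(n - 1) for j in range(n - 1))

A = [[0, 1, 3],
     [1, 1, 2],
     [3, 2, 1]]
assert is_monge(A) and is_monge(min_plus_product(A, A))  # closure property
```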
The key highlights are:
The authors devise an O(1)-round fully-scalable deterministic MPC algorithm for computing the min-plus product of implicitly represented subunit-Monge matrices, substantially improving upon the previous best O(log^2 n)-round algorithm.
Using this result, the authors derive an O(log n)-round fully-scalable MPC algorithm for solving the exact LIS problem, significantly improving on the previous O(log^4 n)-round algorithm (a classical sequential baseline for LIS is sketched after these highlights for reference).
The authors also show how their techniques can be applied to solve the semi-local LIS and longest common subsequence (LCS) problems in O(log n) rounds in the fully-scalable MPC model.
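For reference, the LIS problem asks for the length of a longest strictly increasing subsequence. The classical sequential solution, patience sorting with binary search, runs in O(n log n) time; the sketch below is that textbook baseline, not the paper's MPC algorithm.

```python
from bisect import bisect_left

def lis_length(seq):
    """Classical O(n log n) LIS via patience sorting: tails[k] holds the
    smallest possible tail of an increasing subsequence of length k + 1."""
    tails = []
    for x in seq:
        pos = bisect_left(tails, x)
        if pos == len(tails):
            tails.append(x)  # extend the longest subsequence found so far
        else:
            tails[pos] = x   # keep each tail as small as possible
    return len(tails)

assert lis_length([3, 1, 4, 1, 5, 9, 2, 6]) == 4  # e.g. 1, 4, 5, 9
```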
The key technical innovations include a novel decomposition of the subunit-Monge matrix multiplication into smaller subproblems whose results can be combined efficiently, together with careful exploitation of the monotonicity properties of subunit-Monge matrices.
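To make both ingredients concrete: a subunit-Monge matrix is the distribution matrix of a subpermutation matrix (at most one nonzero per row and per column), so it can be stored implicitly as a list of its O(n) nonzeros, and the Monge property forces the minimizing index in each entry of a min-plus product to move monotonically, which enables divide-and-conquer. The sequential Python sketch below illustrates these two ideas under those assumptions; the representation and function names are illustrative, and it does not attempt the paper's actual O(1)-round MPC decomposition.

```python
def eval_implicit(perm, i, j):
    """Entry (i, j) of the distribution matrix of subpermutation `perm`
    (perm[r] is the column of row r's nonzero, or None): the number of
    nonzeros (r, c) with r >= i and c < j."""
    return sum(1 for r, c in enumerate(perm)
               if r >= i and c is not None and c < j)

def monotone_minima(nrows, ncols, f):
    """Row-minima positions of an implicitly given Monge matrix f(i, k):
    the argmin column is non-decreasing in i, so divide and conquer on
    rows shrinks the column range, using O((nrows + ncols) log nrows)
    evaluations instead of nrows * ncols."""
    argmin = [0] * nrows
    def solve(r_lo, r_hi, c_lo, c_hi):
        if r_lo > r_hi:
            return
        mid = (r_lo + r_hi) // 2
        best = min(range(c_lo, c_hi + 1), key=lambda k: f(mid, k))
        argmin[mid] = best
        solve(r_lo, mid - 1, c_lo, best)  # argmins above mid lie at or left of best
        solve(mid + 1, r_hi, best, c_hi)  # argmins below mid lie at or right of best
    solve(0, nrows - 1, 0, ncols - 1)
    return argmin

def min_plus_column(perm_a, perm_b, j, n):
    """Column j of the min-plus product of two implicitly represented
    subunit-Monge matrices: f(i, k) = A(i, k) + B(k, j) is Monge in
    (i, k), so monotone minima finds all n + 1 entries cheaply."""
    f = lambda i, k: eval_implicit(perm_a, i, k) + eval_implicit(perm_b, k, j)
    ks = monotone_minima(n + 1, n + 1, f)
    return [f(i, ks[i]) for i in range(n + 1)]
```

Each call to eval_implicit scans the whole subpermutation for clarity; the point of the sketch is only that monotonicity collapses the search over k, which is the kind of structure the paper's decomposition exploits.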
Overall, the paper presents a major advancement in massively parallel algorithms for fundamental problems like LIS, with broad implications across computer science.