Core Concepts
Efficient algorithms with theoretical guarantees can recover the underlying graph structure, i.e., the permutation mapping tensor indices to the vertices, for one-dimensional tensor networks in the tensor ring and tensor train formats.
Abstract
This work studies the problem of recovering the underlying graph structure, i.e., the permutation mapping tensor indices to vertices, for one-dimensional tensor networks in the tensor ring (TR) and tensor train (TT) formats.
The key highlights are:
- The TR and TT formats represent a high-order tensor with a group of 3-tensors arranged as a loop or a path, respectively. The underlying graph structure, specified by a permutation of the tensor indices, is often unknown and must be recovered.
- The authors propose polynomial-time algorithms that recover the underlying permutation for the TR and TT formats by comparing matricization ranks after downsampling the tensor, with complexity O(d log d) for a dth-order tensor.
- The algorithms are proved to recover the correct permutation almost surely when tensor entries are observed without noise, and their robustness against observational noise is also established.
- The theoretical results, including complexity bounds, almost-sure correctness in the noiseless case, and robustness guarantees, are validated through numerical experiments.
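The rank-comparison idea behind such algorithms can be illustrated with a small numerical sketch (a toy order-4 TT example in numpy, not the authors' implementation; the mode size `n`, TT-rank `r`, and permutation below are made up for illustration):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, r = 4, 2  # mode size and TT-rank (illustrative values, not from the paper)

# Build an order-4 tensor in the TT (tensor train) format:
# 3-tensor cores contracted along a path.
cores = [rng.standard_normal((1, n, r)),
         rng.standard_normal((r, n, r)),
         rng.standard_normal((r, n, r)),
         rng.standard_normal((r, n, 1))]
T = np.einsum('aib,bjc,ckd,dle->ijkl', *cores)

# Hide the index order behind a permutation; recovering this permutation
# is the problem the paper addresses.
perm = [2, 0, 3, 1]  # position p of T_perm carries true mode perm[p]
T_perm = np.transpose(T, perm)

# Rank test: matricize with a pair of modes as rows and the rest as columns.
# A pair forming a prefix or suffix of the TRUE mode ordering crosses a
# single TT bond, so its matricization rank is at most r; other pairs
# generically yield a larger rank. Comparing these ranks reveals adjacency.
ranks = {}
for pair in combinations(range(4), 2):
    rest = [m for m in range(4) if m not in pair]
    M = np.transpose(T_perm, list(pair) + rest).reshape(n * n, n * n)
    ranks[pair] = np.linalg.matrix_rank(M)

print(ranks)
```

A full recovery procedure would use such rank comparisons (on downsampled tensors, to reach the stated complexity) to sort the indices into their true order; the sketch above only verifies the rank gap that the comparison relies on.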