Core Concepts
The authors present a novel linear-time algorithm for undercomplete symmetric tensor decomposition that is robust to noise in the input tensor, together with a smoothed analysis of the associated condition number.
Abstract
The paper introduces an efficient algorithm for symmetric tensor decomposition, focusing on the undercomplete case in which the rank r of the tensor is at most the ambient dimension n. The algorithm runs in linear time, handles noise in the input tensor, and comes with an analysis of the associated condition number, including a smoothed analysis.
The core of the work is a randomized algorithm that addresses both noise tolerance and computational efficiency in the undercomplete setting. Since inputs in applications are typically known only approximately, this robustness is what makes the result usable in practice.
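To fix notation, the following minimal numpy sketch sets up the problem the paper solves: a symmetric order-3 tensor T = a_1 ⊗ a_1 ⊗ a_1 + … + a_r ⊗ a_r ⊗ a_r with r ≤ n, from which approximations of the components a_i must be recovered. The dimensions, random components, and residual check are illustrative choices only, not part of the paper's algorithm.

```python
# Minimal setup of the undercomplete symmetric decomposition problem (r <= n).
# Plain numpy, illustrative only; `n`, `r` and the random components are arbitrary.
import numpy as np

n, r = 8, 3                                  # ambient dimension n, rank r <= n
rng = np.random.default_rng(0)
A = rng.standard_normal((r, n)) + 1j * rng.standard_normal((r, n))   # rows a_1..a_r in C^n

# Symmetric order-3 tensor T = sum_i a_i ⊗ a_i ⊗ a_i, an element of C^n ⊗ C^n ⊗ C^n.
T = np.einsum('ia,ib,ic->abc', A, A, A)

# A decomposition algorithm receives T (or a nearby T') and must output vectors
# close to the a_i; the Frobenius-norm residual below is one way to sanity-check
# a candidate output A_hat.
def residual(T, A_hat):
    return np.linalg.norm(T - np.einsum('ia,ib,ic->abc', A_hat, A_hat, A_hat))

print(residual(T, A))                        # 0.0 for the exact components
```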
Key points include:
- Introduction to symmetric tensor decompositions.
- Novel algorithm for undercomplete tensor decomposition.
- Robustness against noise in input tensors.
- Linear time complexity and smoothed analysis of condition numbers.
Stats
Our main result is a reduction to the complete case (r = n) treated in our previous work [KS23b].
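The line above only names the reduction; as a hedged illustration of what a reduction from rank r < n to the complete case can look like, the sketch below projects T onto an r-dimensional subspace recovered from a flattening, solves a complete r × r × r instance there, and lifts the result back. The SVD-based span recovery and the complete_solver callable are assumptions made for this sketch, not the actual subroutines of [KS23b] or of Algorithm 10.

```python
# Hypothetical sketch of a reduction from the undercomplete case (r < n) to the
# complete case (r = n). Assumptions, not the paper's actual subroutines:
#   * the span of the components a_1, ..., a_r can be recovered from the column
#     space of a flattening of T (computed here with an SVD);
#   * `complete_solver` is any routine for the complete case, returning its r
#     recovered vectors as the rows of an r x r array.
import numpy as np

def reduce_to_complete(T, r, complete_solver):
    n = T.shape[0]
    flat = T.reshape(n, n * n)                 # mode-1 flattening, an n x n^2 matrix
    U, _, _ = np.linalg.svd(flat, full_matrices=False)
    W = U[:, :r]                               # orthonormal basis of the component span
    # Change of basis: S = sum_i (W^H a_i)^{⊗3} is an r x r x r "complete" instance.
    S = np.einsum('abc,aj,bk,cl->jkl', T, W.conj(), W.conj(), W.conj())
    B = complete_solver(S)                     # rows approximate b_i = W^H a_i in C^r
    return B @ W.T                             # lift back: a_i = W b_i, rows in C^n
```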
The algorithm requires O(n^3 + T_MM(r) log^2(rB/ε)) arithmetic operations, where T_MM(r) denotes the cost of multiplying two r × r matrices.
Let T' ∈ C^n ⊗ C^n ⊗ C^n be such that ||T − T'|| ≤ δ ∈ (0, 1/((nB)^C r^C log^4(rB/ε))).
Then on input T', a desired accuracy parameter ε, and some estimate B ≥ κ(T), Algorithm 10 outputs an ε-approximate solution to the tensor decomposition problem for T with probability at least (1 − 13/r^2) · (1 − (5/(4r) + 1/(4C^2 C_W^(r^(3/2))))).
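To make the noise model in these two statements concrete: the input is a perturbation T' of T with ||T − T'|| at most δ, and the algorithm additionally receives the target accuracy ε and an upper bound B on the condition number κ(T). The sketch below only illustrates that interface; perturb is a plain numpy helper and algorithm_10 is a hypothetical name, not the paper's code.

```python
# Concrete reading of the noise model above: build T' with ||T - T'||_F <= delta.
# `algorithm_10` in the commented call is a hypothetical stand-in for the paper's
# Algorithm 10, not an implementation of it.
import numpy as np

def perturb(T, delta, rng):
    """Return a tensor T' with ||T - T'||_F <= delta."""
    E = rng.standard_normal(T.shape) + 1j * rng.standard_normal(T.shape)
    return T + (delta / np.linalg.norm(E)) * E

# Intended interface, per the statement above (requires an actual solver):
#   T_prime = perturb(T, delta, np.random.default_rng(1))
#   components = algorithm_10(T_prime, eps, B)   # eps-approximate a_1, ..., a_r
```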
Quotes
"The main idea is that numerically stable subroutines can be composed in a numerically stable way under certain conditions."
"Our main result is a robust randomized linear time algorithm for ε-approximate tensor decomposition."