Core Concepts
The authors propose a probabilistic bound on the round-off error of a Cholesky-decomposition-based linear least squares solver implemented in low-precision arithmetic. This bound is much closer to the error observed in practice than existing worst-case numerical bounds.
Abstract
The authors address the problem of efficiently solving the linear least squares (LS) problem using Cholesky decomposition in low-precision arithmetic. The key points are:
Existing numerical bounds on the accuracy of the Cholesky decomposition are too conservative, leading to an overestimate of the required precision.
The authors propose a probabilistic bound for the round-off error in the Cholesky decomposition, which is much closer to the practical error observed in simulations.
The authors make several assumptions to derive the probabilistic bound, including modeling the round-off errors as random Gaussian matrices and considering input matrices drawn from the RANDSVD ensemble.
The derived bound shows that the round-off error in the LS solution is proportional to the condition number of the channel matrix H, rather than to the matrix size N as in the existing numerical bounds.
Simulation results, for both random RANDSVD matrices and realistic channel matrices from the QuaDRiGa model, demonstrate that the proposed bound accurately predicts the observed round-off errors, allowing a more efficient selection of the required arithmetic precision.
The authors conclude that when the round-off error is lower than other dominant errors (e.g., channel estimation error), the bitwidth in the Cholesky decomposition can be reduced without performance loss.
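The setting above can be sketched in a few lines of NumPy: solve the LS problem via the normal equations and Cholesky factorization, once in a low precision (float32 here as a stand-in for the paper's low-precision arithmetic) and once in float64 as a reference, on a RANDSVD-style matrix with a prescribed condition number. This is an illustrative sanity check only, not the authors' derivation; the matrix sizes, condition number, and the rough comparison against kappa(H) times the unit round-off are assumptions for the demo.

```python
import numpy as np

def ls_via_cholesky(H, y, dtype):
    """Solve min ||Hx - y||_2 via the normal equations G = H^T H and its
    Cholesky factor, with all arithmetic carried out in the given precision."""
    H = H.astype(dtype)
    y = y.astype(dtype)
    G = H.T @ H                      # Gram matrix
    b = H.T @ y
    L = np.linalg.cholesky(G)        # G = L L^T
    z = np.linalg.solve(L, b)        # forward substitution
    return np.linalg.solve(L.T, z)   # back substitution

# RANDSVD-style test matrix: random orthogonal factors, prescribed singular values
rng = np.random.default_rng(0)
N, cond = 64, 1e3                             # illustrative size and condition number
U, _ = np.linalg.qr(rng.standard_normal((N, N)))
V, _ = np.linalg.qr(rng.standard_normal((N, N)))
s = np.geomspace(1.0, 1.0 / cond, N)          # singular values from 1 down to 1/cond
H = U @ np.diag(s) @ V.T

x_true = rng.standard_normal(N)
y = H @ x_true

x64 = ls_via_cholesky(H, y, np.float64)       # high-precision reference
x32 = ls_via_cholesky(H, y, np.float32)       # low-precision solution
rel_err = np.linalg.norm(x32 - x64) / np.linalg.norm(x64)

u32 = np.finfo(np.float32).eps                # unit round-off of the low precision
print(f"observed rel. error: {rel_err:.2e}")
print(f"kappa(H) * u:        {cond * u32:.2e}")
```

The observed error typically lands near kappa(H) times the unit round-off rather than at the worst-case level driven by N, which is the qualitative point of the paper's probabilistic bound.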
Quotes
"Existing bounds of Cholesky decomposition accuracy mostly employ numerical analysis to derive an upper bound of the resulting round-off error. Such bound is derived for the worst case and much exceeds practical values."
"Our results may help to predict the minimum required precision for the arithmetic operations, involved in linear LS, which do not yet lead to the loss of performance."