Core Concepts
The authors investigate efficient methods for low-rank matrix approximation via a truncated SVD, with a focus on GPU implementations. They compare the performance of the randomized SVD and a blocked variant of the Lanczos algorithm.
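For reference, the standard definition behind both methods (textbook material, not specific to this paper): a truncated SVD of order r keeps only the r largest singular values and the corresponding singular vectors, and by the Eckart–Young theorem the result is the best rank-r approximation of the matrix.

```latex
% Rank-r truncated SVD of A \in \mathbb{R}^{m \times n} (standard definition).
A \;\approx\; A_r \;=\; U_r \Sigma_r V_r^{T},
\qquad
U_r \in \mathbb{R}^{m \times r},\quad
\Sigma_r = \operatorname{diag}(\sigma_1,\dots,\sigma_r),\quad
V_r \in \mathbb{R}^{n \times r},
% where \sigma_1 \ge \dots \ge \sigma_r are the r largest singular values.
% Eckart--Young: A_r is optimal among all matrices of rank at most r,
\min_{\operatorname{rank}(B)\le r} \|A - B\|_2 \;=\; \|A - A_r\|_2 \;=\; \sigma_{r+1}.
```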
Summary
The content discusses the computation of a truncated SVD for low-rank matrix approximation, with an emphasis on GPU optimization. It compares the randomized SVD and a blocked variant of the Lanczos algorithm on sparse matrices arising in real-world applications as well as synthetic dense test matrices, and the experiments reveal a performance advantage for the block Lanczos algorithm when both methods target the same approximation accuracy. The paper thereby offers insight into dimensionality reduction techniques in data science and their practical applications.
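To make the comparison concrete, the following is a minimal NumPy sketch of the basic randomized SVD: sketch the range of the matrix with a Gaussian test matrix, optionally run a few power iterations, then take a small dense SVD of the projected matrix. This is a CPU illustration of the general technique, not the authors' optimized GPU implementation; the matrix sizes, target rank, oversampling, and iteration count are illustrative assumptions.

```python
import numpy as np

def randomized_svd(A, r, oversample=10, n_iter=2, seed=None):
    """Basic randomized truncated SVD (illustrative sketch, CPU/NumPy only).

    A          : (m, n) dense array
    r          : target rank of the approximation
    oversample : extra sketch columns beyond r to improve accuracy
    n_iter     : power iterations, useful when singular values decay slowly
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(r + oversample, min(m, n))

    # Sketch the range of A with a Gaussian test matrix.
    Omega = rng.standard_normal((n, k))
    Y = A @ Omega
    # Optional power iterations sharpen the captured subspace.
    for _ in range(n_iter):
        Y = A @ (A.T @ Y)
    # Orthonormal basis Q for the sampled range of A.
    Q, _ = np.linalg.qr(Y)

    # Project A onto the subspace and take a small dense SVD.
    B = Q.T @ A                                  # (k, n)
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub                                   # lift back to R^m

    return U[:, :r], s[:r], Vt[:r, :]

# Illustrative usage on a synthetic dense test matrix of (numerical) low rank.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Rank-20 matrix plus a little noise, so a rank-20 truncation is accurate.
    A = rng.standard_normal((2000, 20)) @ rng.standard_normal((20, 300))
    A += 1e-3 * rng.standard_normal(A.shape)
    U, s, Vt = randomized_svd(A, r=20, seed=0)
    err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
    print(f"relative Frobenius-norm error of the rank-20 truncation: {err:.2e}")
```

The dominant costs are large matrix products and a tall-and-skinny QR factorization, which is what makes this method a natural candidate for GPU acceleration.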
Statistics
For this purpose, we develop and optimize GPU implementations for the randomized SVD and a blocked variant of the Lanczos approach.
Furthermore, the experiments with several sparse matrices arising in representative real-world applications and synthetic dense test matrices reveal a performance advantage of the block Lanczos algorithm when targeting the same approximation accuracy.
In many applications, we are interested in obtaining a truncated SVD of a certain order r.
We address the efficient computation of low-rank matrix approximations via the computation of a truncated SVD, with a special focus on numerical reliability and high performance.
The conventional methods for computing the SVD are quite expensive in terms of floating point arithmetic operations (flops).
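For the sparse setting mentioned above, a Lanczos-type truncated SVD is available off the shelf and can serve as a point of reference: the sketch below uses SciPy's svds, whose default ARPACK backend runs an implicitly restarted Lanczos iteration, rather than the blocked GPU variant developed in the paper. The test matrix, its density, and the truncation order r are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

# Illustrative sparse test matrix (random, not one of the paper's real-world
# problems) and an illustrative truncation order r.
A = sp.random(10_000, 5_000, density=1e-3, format="csr", random_state=0)
r = 50

# svds computes the r largest singular triplets; its default ARPACK backend
# is an implicitly restarted Lanczos iteration, i.e. a non-blocked, CPU-side
# relative of the block Lanczos approach discussed above.
U, s, Vt = svds(A, k=r)
order = np.argsort(s)[::-1]      # sort triplets by descending singular value
U, s, Vt = U[:, order], s[order], Vt[order, :]

# Because the computed triplets are (numerically) the exact leading ones,
# ||A - U diag(s) Vt||_F^2 = ||A||_F^2 - sum_i s_i^2.
frob_A = sp.linalg.norm(A)
err = np.sqrt(max(frob_A**2 - np.sum(s**2), 0.0)) / frob_A
print(f"rank-{r} relative Frobenius-norm error: {err:.2e}")
```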
Quotes
"The experiments with several sparse matrices arising in representative real-world applications and synthetic dense test matrices reveal a performance advantage of the block Lanczos algorithm when targeting the same approximation accuracy."
"Furthermore, we complete the experimental analysis of the methods with a detailed performance evaluation on an NVIDIA Ampere A100 graphics processor."