The authors observe that matrix factorization (MF) is a widely used collaborative filtering algorithm for recommendation systems, but its computational cost grows dramatically with the number of users and items. Existing works accelerate MF by adding computational resources or by using parallel systems, both of which incur high costs.
The authors first observe that the decomposed feature matrices exhibit fine-grained structured sparsity: some latent vectors contain more insignificant (near-zero) elements than others. This fine-grained sparsity causes unnecessary computations during both matrix multiplication and latent factor updates, increasing the training time.
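A minimal sketch of how this per-row sparsity could be measured, assuming a synthetic factor matrix and a hypothetical significance threshold (neither is taken from the paper):

```python
import numpy as np

# Hypothetical user factor matrix: each row is one user's latent vector.
rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(1000, 64))
P[rng.random(P.shape) < 0.5] = 0.0  # inject many near-zero entries

# Count "insignificant" elements per row: entries whose magnitude falls
# below an (assumed) significance threshold.
threshold = 1e-3
insignificant = (np.abs(P) < threshold).sum(axis=1)

# Fine-grained structured sparsity shows up as these counts varying
# from one latent vector (row) to the next.
spread = insignificant.max() - insignificant.min()
```

Rows with a high count contribute many wasted multiply-accumulates in a dense implementation, which is the inefficiency the paper targets.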
To address this, the authors propose two key methods:
Feature matrix rearrangement: They rearrange the feature matrices based on joint sparsity, making latent vectors with smaller indices denser than those with larger indices. This minimizes the error introduced by the subsequent pruning step.
Dynamic pruning: They dynamically prune insignificant latent factors during both matrix multiplication and latent factor updates, guided by the per-user/per-item sparsity of the latent vectors. This accelerates training.
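A toy sketch of one plausible reading of the two steps (all names, thresholds, and sizes are assumed for illustration and are not the paper's): the latent dimensions of both factor matrices are reordered by the same joint-density permutation, so dense dimensions get the smallest indices, and prediction then truncates each dot product at the last significant dimension of the user's vector.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, k = 100, 80, 32
P = rng.normal(scale=0.1, size=(n_users, k))  # user factors
Q = rng.normal(scale=0.1, size=(n_items, k))  # item factors
P[rng.random(P.shape) < 0.6] = 0.0
Q[rng.random(Q.shape) < 0.6] = 0.0
threshold = 1e-3  # assumed significance threshold

# Step 1 (rearrangement sketch): order latent dimensions by their joint
# density across P and Q; applying the same permutation to both matrices
# leaves every dot product p_u . q_i unchanged.
density = (np.abs(P) >= threshold).sum(axis=0) + (np.abs(Q) >= threshold).sum(axis=0)
order = np.argsort(-density)  # densest dimension first
P, Q = P[:, order], Q[:, order]

# Step 2 (dynamic pruning sketch): per user, keep only the prefix up to
# that user's last significant dimension and truncate the dot product.
def effective_len(v, threshold=1e-3):
    nz = np.nonzero(np.abs(v) >= threshold)[0]
    return int(nz[-1]) + 1 if nz.size else 0

def pruned_predict(u, i):
    L = effective_len(P[u])
    return float(P[u, :L] @ Q[i, :L])

full = float(P[0] @ Q[0])      # exact prediction
approx = pruned_predict(0, 0)  # pruned prediction (small error)
```

Because the rearrangement pushes dense dimensions to the front, the per-user effective length tends to be short, so the truncated products skip most of the insignificant work while staying close to the exact values.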
The experiments show that the proposed methods achieve 1.2x-1.65x speedups, with at most a 20.08% increase in error, compared to the conventional MF training process. The authors also demonstrate that the methods remain applicable across different training configurations, such as the optimizer, optimization strategy, and initialization method.
Source: by Yining Wu, Sh... at arxiv.org, 04-09-2024
https://arxiv.org/pdf/2404.04265.pdf