This paper presents a preliminary study on using GPUs (Graphics Processing Units) to accelerate computation in three simulation optimization tasks: mean-variance portfolio optimization, the multi-product newsvendor problem, and a binary classification problem.
The key highlights are:
A review of GPU architecture and its advantages for parallel processing of large-scale matrix and vector operations, as well as for concurrent sampling when estimating objective values or gradients.
GPU-accelerated implementations of the Frank-Wolfe algorithm for the first two tasks and of a stochastic quasi-Newton algorithm for the binary classification problem.
Numerical experiments demonstrating that the GPU implementation runs 3 to 6 times faster than the CPU-based implementation with comparable solution accuracy, and that the relative benefit of the GPU grows with problem scale.
Limitations of the study include reliance on third-party GPU acceleration packages, not isolating the GPU's contribution at each computational stage, and a focus restricted to gradient-based methods.
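The paper's exact implementation is not reproduced here; the following is a minimal sketch of a Frank-Wolfe iteration over the probability simplex (the natural feasible set for portfolio weights), assuming a deterministic gradient oracle. Function and variable names are illustrative, not taken from the paper. With CuPy or JAX arrays in place of NumPy, the same operations run on a GPU.

```python
import numpy as np

def frank_wolfe_simplex(grad_fn, d, n_iters=500):
    """Frank-Wolfe over the probability simplex {w >= 0, sum(w) = 1}.

    The linear subproblem over the simplex has a closed form: put all
    mass on the coordinate with the smallest gradient component. This
    keeps every iterate feasible without a projection step.
    """
    w = np.full(d, 1.0 / d)        # feasible starting point
    for t in range(n_iters):
        g = grad_fn(w)
        s = np.zeros(d)
        s[np.argmin(g)] = 1.0      # simplex vertex minimizing <g, s>
        gamma = 2.0 / (t + 2.0)    # standard diminishing step size
        w = (1.0 - gamma) * w + gamma * s
    return w

# Illustrative use: minimize the pure-variance objective w' C w for a
# diagonal covariance; the iterate converges toward [0.8, 0.2], putting
# more weight on the lower-variance asset.
C = np.diag([1.0, 4.0])
w_star = frank_wolfe_simplex(lambda w: 2.0 * C @ w, d=2)
```

Because the linear subproblem avoids projections, each iteration is dominated by the gradient evaluation, which is exactly the part that batched matrix arithmetic on a GPU speeds up.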
Overall, the results suggest that leveraging the parallel processing power of GPUs can significantly improve the efficiency of simulation optimization algorithms, especially for large-scale problems.
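The concurrent-sampling pattern mentioned in the highlights can be illustrated with a sketch (not the paper's code): estimating a mean-variance objective and its gradient from one large batch of simulated return scenarios, expressed entirely as matrix operations. The names and the `lam` risk-aversion parameter are illustrative assumptions; swapping NumPy for CuPy or JAX device arrays moves the whole batch onto the GPU.

```python
import numpy as np

def mc_objective_and_gradient(w, mu, cov, n_samples=100_000, lam=1.0, seed=0):
    """Estimate a mean-variance objective and its gradient by Monte Carlo.

    All n_samples return scenarios are generated and reduced in batched
    matrix operations -- the pattern that maps well onto a GPU.
    Illustrative sketch only; lam is an assumed risk-aversion weight.
    """
    rng = np.random.default_rng(seed)
    # (n_samples, d) matrix of simulated asset returns, one big batch.
    returns = rng.multivariate_normal(mu, cov, size=n_samples)
    port = returns @ w                       # portfolio return per scenario
    # Sample-average objective: minimize -mean + lam * variance.
    obj = -port.mean() + lam * port.var()
    # Gradient of the sample-average objective w.r.t. the weights w.
    centered = returns - returns.mean(axis=0)
    grad = (-returns.mean(axis=0)
            + lam * 2.0 * (centered * (port - port.mean())[:, None]).mean(axis=0))
    return obj, grad
```

Generating the scenarios as one array and reducing them with a single matrix-vector product is what lets the GPU's many cores work concurrently, rather than looping over samples one at a time.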
Key takeaways from a paper by Jinghai He, H... on arxiv.org, 04-19-2024. https://arxiv.org/pdf/2404.11631.pdf