Core Concepts
Leveraging the parallel processing capabilities of GPUs can significantly accelerate the computation of simulation optimization algorithms while maintaining similar solution accuracy.
Abstract
This paper presents a preliminary study on using GPUs (Graphics Processing Units) to accelerate computation for three simulation optimization tasks: mean-variance portfolio optimization, the multi-product newsvendor problem, and a binary classification problem.
The key highlights are:
- The GPU architecture and its advantages for parallel processing of large-scale matrix and vector operations, as well as concurrent sampling for estimating objective values or gradients.
- Implementation of the Frank-Wolfe algorithm for the first two tasks and a stochastic quasi-Newton algorithm for the binary classification problem, with GPU-based acceleration.
- Numerical experiments demonstrating that the GPU implementation runs 3 to 6 times faster than the CPU-based implementation while achieving similar solution accuracy, with the relative benefit of the GPU growing as the problem scale increases.
- Limitations of the study, including reliance on third-party GPU acceleration packages, limited analysis of how much GPUs contribute at each computational stage, and a focus on gradient-based methods only.
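The paper's own code is not reproduced in this summary, but the combination of the two highlighted ideas (Frank-Wolfe iterations driven by batched sampling) can be sketched for the mean-variance task. The sketch below assumes a long-only, fully invested portfolio (the probability simplex) and a penalized objective f(w) = λ·wᵀΣw − μᵀw; the function name, parameters, and step-size rule are illustrative, and numpy's vectorized sampling stands in for the GPU's concurrent sampling.

```python
import numpy as np

def frank_wolfe_portfolio(mu, Sigma, lam=1.0, n_samples=10_000, n_iters=50, seed=0):
    """Frank-Wolfe on the probability simplex for f(w) = lam*w'Sigma w - mu'w.

    The gradient is estimated from a batch of simulated return vectors; this
    batched sampling is the step a GPU parallelizes (e.g., via a drop-in GPU
    array library), with numpy used here as a CPU stand-in.
    """
    rng = np.random.default_rng(seed)
    d = len(mu)
    w = np.full(d, 1.0 / d)  # start at the center of the simplex
    for t in range(n_iters):
        # Draw the whole batch of returns r ~ N(mu, Sigma) in one vectorized call.
        r = rng.multivariate_normal(mu, Sigma, size=n_samples)
        # Sample-average gradient of lam*w'Sigma w - mu'w:
        # Sigma w is estimated as mean over samples of r_c * (r_c . w).
        r_c = r - r.mean(axis=0)
        grad = 2.0 * lam * (r_c * (r_c @ w)[:, None]).mean(axis=0) - r.mean(axis=0)
        # Linear minimization oracle on the simplex: the best vertex e_i.
        s = np.zeros(d)
        s[np.argmin(grad)] = 1.0
        gamma = 2.0 / (t + 2.0)  # standard Frank-Wolfe step size
        w = (1.0 - gamma) * w + gamma * s
    return w
```

Because the linear minimization oracle on the simplex is just an argmin over the gradient, each iteration's cost is dominated by the batched sampling and the matrix-vector products, which is exactly the workload the paper offloads to the GPU.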
Overall, the results suggest that leveraging the parallel processing power of GPUs can significantly improve the efficiency of simulation optimization algorithms, especially for large-scale problems.
Stats
The GPU implementation runs 3 to 6 times faster than the CPU-based implementation across the three simulation optimization tasks.
The relative squared error (RSE) between the objective values of the GPU and CPU implementations stays within 2-3% across iteration steps, indicating similar solution accuracy.
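The summary does not state the paper's exact RSE formula; one common definition, assumed here for illustration, is the squared difference of the two objective values normalized by the squared reference (CPU) value:

```python
def relative_squared_error(f_gpu, f_cpu):
    # Common definition (assumed): (f_gpu - f_cpu)^2 / f_cpu^2.
    # A 2% relative deviation in the objective gives RSE = 0.02**2 = 4e-4.
    return (f_gpu - f_cpu) ** 2 / f_cpu ** 2
```

Under this definition, comparing objective values of 102.0 (GPU) and 100.0 (CPU) yields an RSE of 4e-4, i.e., a 2% relative deviation.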
Quotes
"Leveraging the parallel processing capabilities of GPUs can significantly accelerate the computation of simulation optimization algorithms while maintaining similar solution accuracy."
"The relative benefit of GPU increases as the problem scale grows."