Predicting Optimal Tuning Parameters for GPU Compute Kernels using Deep Sequence-to-Sequence Models
A sequence-to-sequence deep learning model can accurately predict the optimal tuning parameters for GPU compute kernels by translating input tensor descriptors into the corresponding kernel parameter configurations.
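The translation framing above can be sketched as follows: a tensor descriptor is serialized into a token sequence for a sequence-to-sequence model to consume, and the model's predicted output sequence is parsed back into a kernel tuning configuration. This is a minimal illustrative sketch; all field and parameter names (`N`, `C`, `tile_m`, `unroll`, etc.) are assumptions for illustration, not the actual encoding used by the model.

```python
# Illustrative sketch (assumed encoding, not the actual scheme):
# serialize a tensor descriptor into a token sequence, and parse a
# predicted token sequence back into a kernel parameter configuration.

def encode_descriptor(desc):
    """Flatten a tensor descriptor dict into a token sequence."""
    tokens = []
    for key, value in sorted(desc.items()):
        tokens.append(key)
        tokens.append(str(value))
    return tokens

def decode_parameters(tokens):
    """Parse a predicted token sequence into a kernel config dict."""
    config = {}
    for key, value in zip(tokens[0::2], tokens[1::2]):
        config[key] = int(value) if value.isdigit() else value
    return config

# Hypothetical convolution tensor descriptor.
descriptor = {"N": 64, "C": 128, "H": 56, "W": 56, "dtype": "fp16"}
src_seq = encode_descriptor(descriptor)

# Stand-in for the seq2seq model's predicted output token sequence.
predicted = ["tile_m", "128", "tile_n", "64", "unroll", "4"]
config = decode_parameters(predicted)
print(config)  # {'tile_m': 128, 'tile_n': 64, 'unroll': 4}
```

A trained model would replace the hard-coded `predicted` sequence, mapping `src_seq` to parameter tokens one step at a time.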