Evaluating the Potential and Limitations of Parallel Deep Learning Inference on Heterogeneous Mobile Processors
Executing deep learning inference in parallel across heterogeneous mobile processors has the potential to accelerate on-device intelligence, but its practical effectiveness is limited by unsupported operators, processor fallbacks, and the need to balance resource utilization against overall system performance.