The paper presents the design of HiHGNN, a high-performance accelerator for Heterogeneous Graph Neural Networks (HGNNs).
Key highlights:
Characterization of HGNN models on GPUs reveals that different execution stages exhibit diverse execution bounds, leading to unbalanced utilization across hardware components.
Proposed a bound-aware stage-fusion methodology, including a novel programming model and hardware datapath, to fuse and pipeline the execution of stages with different bounds (the typical stage structure is illustrated in the first sketch after this list).
Designed an independency-aware parallel execution scheme to exploit the high degree of inter-semantic-graph parallelism, involving scale-up optimization and workload-aware scheduling.
Proposed a similarity-aware execution scheduling strategy to maximize the reuse of intermediate results across the processing of different semantic graphs (see the second sketch below).
Compared to state-of-the-art software frameworks running on GPUs, HiHGNN achieves average speedups of 40.0× and 8.3× as well as energy reductions of 99.59% and 99.74%, respectively.
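To make the stage and parallelism terminology concrete, below is a minimal Python sketch of the stage structure that metapath-based HGNNs typically exhibit: a dense, compute-bound feature projection, a sparse, memory-bound neighbor aggregation, and a final semantic fusion, with each semantic graph processed independently until fusion. All names here (project, aggregate, fuse, hgnn_layer, semantic_graphs) are illustrative assumptions, not the paper's programming model.

```python
# Minimal sketch of a metapath-based HGNN layer, assuming a HAN-style model.
# Names and the mean-based fusion are illustrative, not HiHGNN's API.
import numpy as np

def project(features, weight):
    # Feature projection: dense GEMM, typically compute-bound.
    return features @ weight

def aggregate(adj, projected):
    # Neighbor aggregation: sparse gather/reduce, typically memory-bound.
    return adj @ projected

def fuse(per_semantic_embeddings):
    # Semantic fusion: combine embeddings from all semantic graphs
    # (mean here as a stand-in for learned semantic attention).
    return np.mean(np.stack(per_semantic_embeddings), axis=0)

def hgnn_layer(semantic_graphs, features, weights):
    # Each semantic graph is processed independently up to fusion; this
    # independence is the inter-semantic-graph parallelism the accelerator
    # exploits, while the contrast between the compute-bound projection and
    # the memory-bound aggregation is what bound-aware stage fusion pipelines.
    partial = [aggregate(adj, project(features, w))
               for adj, w in zip(semantic_graphs, weights)]
    return fuse(partial)

# Toy usage: two semantic graphs over 4 nodes with 8-dim features.
rng = np.random.default_rng(0)
adjs = [rng.integers(0, 2, (4, 4)).astype(float) for _ in range(2)]
feats = rng.standard_normal((4, 8))
ws = [rng.standard_normal((8, 16)) for _ in range(2)]
out = hgnn_layer(adjs, feats, ws)   # shape (4, 16)
```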
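The reuse idea behind similarity-aware scheduling can likewise be sketched as an ordering problem: process semantic graphs that need overlapping intermediate results back to back, so those results can be kept on hand rather than recomputed or refetched. The Jaccard similarity metric and the greedy ordering below are assumptions for illustration, not HiHGNN's actual scheduling algorithm.

```python
# Hedged sketch: order semantic graphs so consecutive ones share as many
# required intermediates (e.g., projected features of a node type) as possible.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def schedule_by_similarity(required_types):
    # required_types: {semantic_graph_id: set of node types whose projected
    # features that semantic graph needs}
    remaining = set(required_types)
    if not remaining:
        return []
    order = [remaining.pop()]          # arbitrary starting semantic graph
    while remaining:
        last = order[-1]
        # Greedily pick the semantic graph sharing the most intermediates
        # with the one just scheduled.
        nxt = max(remaining,
                  key=lambda g: jaccard(required_types[last],
                                        required_types[g]))
        order.append(nxt)
        remaining.remove(nxt)
    return order

# Example with metapaths over author (A), paper (P), subject (S) node types:
# APA and APS overlap heavily, so they end up adjacent in the schedule.
metapaths = {"APA": {"A", "P"}, "PSP": {"P", "S"}, "APS": {"A", "P", "S"}}
print(schedule_by_similarity(metapaths))
```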
Key takeaways from Runzhen Xue et al., arxiv.org, 04-29-2024: https://arxiv.org/pdf/2307.12765.pdf