FLOWERFORMER: Empowering Neural Architecture Encoding with Flow-aware Graph Transformer
Core Concepts
FLOWERFORMER introduces a powerful graph transformer that incorporates information flows within neural architectures, outperforming existing methods in various domains.
Abstract
The success of a neural network architecture is closely tied to the specific task and dataset it targets.
To avoid costly full training, efforts have been made to predict an architecture's performance in advance.
Because neural architectures are naturally represented as graphs, graph-based methods are effective for learning their representations.
FLOWERFORMER combines bidirectional asynchronous message passing with global attention built on flow-based masking for enhanced representation learning.
Extensive experiments show the superiority of FLOWERFORMER over existing methods.
It excels on computer vision, graph neural network, and automatic speech recognition architectures.
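The "flow-based masking" mentioned above restricts global attention to node pairs connected by the architecture's information flow. The paper's exact masking scheme is not reproduced here; the sketch below illustrates one plausible reading, where a node may attend to another only if a directed path links them in the architecture DAG. The function name `flow_mask` and the reachability-based rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def flow_mask(edges, n):
    """Boolean attention mask for a DAG with n nodes.

    mask[i, j] is True when j is reachable from i along directed
    edges (or i == j), i.e., attention follows the information flow.
    Hypothetical sketch; the paper's actual masking may differ.
    """
    reach = np.eye(n, dtype=bool)          # every node reaches itself
    adj = np.zeros((n, n), dtype=bool)
    for u, v in edges:
        adj[u, v] = True
    # Propagate reachability until a fixed point (transitive closure).
    changed = True
    while changed:
        new = reach | ((reach.astype(int) @ adj.astype(int)) > 0)
        changed = not np.array_equal(new, reach)
        reach = new
    return reach

# Toy 4-node architecture DAG: 0 -> 1 -> 3 and 0 -> 2 -> 3.
mask = flow_mask([(0, 1), (1, 3), (0, 2), (2, 3)], 4)
```

Under this reading, node 0 may attend to node 3 (a path exists), but the parallel branches 1 and 2 cannot attend to each other, since no information flows between them.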
Stats
Neural architecture encoding has gained considerable attention due to its significant role in downstream tasks such as performance prediction. (Abstract)
FLOWERFORMER outperforms six baseline neural encoding methods by a substantial margin across three benchmark datasets in the computer vision domain. (Introduction)
FLOWERFORMER achieves performance gains of up to 4.41% in Kendall's Tau over baseline models on graph neural network and automatic speech recognition architectures. (Contributions)
Quotes
"FLOWERFORMER consists of two key components: bidirectional asynchronous message passing and global attention built on flow-based masking."
"Our extensive experiments demonstrate the superiority of FLOWERFORMER over existing neural encoding methods."