This paper proposes a novel Reinforcement Learning (RL)-based approach to the problem of Neural Architecture Search (NAS). The key idea is to frame NAS as a graph search problem, where each node represents a neural network architecture and edges represent relations between architectures. The RL agent is then trained to navigate this graph and find high-performing architectures.
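To make the framing concrete, here is a minimal sketch of graph search over architectures: an epsilon-greedy agent walks a graph whose nodes are architectures and whose edges connect architectures one mutation apart. The bit-string encoding, the single-bit mutation operator, and the `accuracy` stand-in for a tabular benchmark query are illustrative assumptions, not details taken from the paper.

```python
import random

def neighbors(arch):
    """All architectures one edit away: flip one bit of a bit-string encoding."""
    return [arch[:i] + (1 - arch[i],) + arch[i + 1:] for i in range(len(arch))]

def accuracy(arch):
    """Stand-in for a tabular benchmark query (e.g. a NAS-Bench-101 lookup)."""
    return sum(arch) / len(arch) + random.gauss(0, 0.01)

def rl_graph_search(start, budget, epsilon=0.2):
    """Epsilon-greedy walk over the architecture graph under a query budget."""
    value = {}                                   # value estimates for visited nodes
    best = (accuracy(start), start)
    current = start
    for _ in range(budget - 1):                  # one query already spent on start
        nbrs = neighbors(current)
        if random.random() < epsilon:
            current = random.choice(nbrs)        # explore a random neighbor
        else:                                    # exploit the best-known neighbor
            current = max(nbrs, key=lambda a: value.get(a, 0.0))
        reward = accuracy(current)               # one benchmark query
        value[current] = max(value.get(current, 0.0), reward)
        best = max(best, (reward, current))
    return best

print(rl_graph_search(start=(0,) * 8, budget=100))
```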
The authors evaluate their RL-based NAS agent on two established benchmarks: NAS-Bench-101 and NAS-Bench-301. They compare the performance of their agent against several strong baselines, including random search, random walks, and local search.
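Two of the baselines named above admit very short reference implementations; the sketch below reuses the hypothetical `neighbors()` and `accuracy()` helpers from the previous snippet (a random walk is simply the `epsilon=1.0` special case of `rl_graph_search`). The real benchmarks' query interfaces differ; this only illustrates the search logic.

```python
def random_search(arch_length, budget):
    """Sample architectures uniformly at random; keep the best one queried."""
    draws = (tuple(random.randint(0, 1) for _ in range(arch_length))
             for _ in range(budget))
    return max((accuracy(a), a) for a in draws)

def local_search(start, budget):
    """Greedy hill-climbing: move to the best neighbor until none improves."""
    best_acc, current, queries = accuracy(start), start, 1
    while queries < budget:
        scored = []
        for n in neighbors(current):
            if queries >= budget:
                break
            scored.append((accuracy(n), n))
            queries += 1
        top_acc, top = max(scored)
        if top_acc <= best_acc:                  # local optimum: stop early
            return best_acc, current
        best_acc, current = top_acc, top
    return best_acc, current
```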
On NAS-Bench-101, the authors find that their RL agent scales well, exploring the search space efficiently and finding high-performing architectures, especially under low query budgets. However, it is less robust to hyperparameter changes than local search.
On the larger and more complex NAS-Bench-301 benchmark, the RL agent outperforms the baselines, demonstrating that it can effectively navigate large search spaces. This highlights the approach's potential for practical NAS applications.
The key contributions of this work are:
- Framing NAS as a graph search problem, with architectures as nodes, and training an RL agent to navigate the resulting graph.
- An empirical comparison against random search, random walks, and local search on NAS-Bench-101 and NAS-Bench-301.
- Evidence that the RL agent performs well under low query budgets and navigates large search spaces effectively, while being less robust to hyperparameter changes than local search.
Source: https://arxiv.org/pdf/2410.01431.pdf