Key Idea
A novel multi-expert LLM architecture (MEV-LLM) that integrates multiple LLMs, each fine-tuned on a dataset categorized by design complexity, to improve the quality of generated Verilog code.
Abstract
The paper introduces a multi-expert LLM architecture (MEV-LLM) for Verilog code generation. The key aspects are:
MEV-LLM integrates multiple LLMs, each fine-tuned on a dataset categorized by design complexity level (basic, intermediate, advanced, expert). This allows more targeted learning for each complexity level.
A complexity classifier LLM is used to first determine the complexity level of the input design, and then the appropriate expert LLM is selected to generate the Verilog code.
A diverse dataset is developed, with each entry annotated with fine-grained descriptions and coarse-grained complexity labels, to facilitate effective fine-tuning of the expert models.
Experiments show that the MEV-LLM approach improves Verilog code generation by up to 23.9% in the pass@k metric compared to state-of-the-art approaches like CodeGen-Verilog and GEMMA.
The quality of the dataset is crucial, as experiments with an erroneous dataset show a significant drop in performance.
The proposed MEV-LLM architecture and the categorized dataset represent a significant advancement in automating hardware design through machine learning.
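The classify-then-route flow described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the length-based classifier and the `ExpertLLM` class are stand-ins for the complexity-classifier LLM and the fine-tuned expert models.

```python
# Hypothetical sketch of MEV-LLM-style routing: a classifier assigns a
# complexity level, then the matching expert model generates Verilog.
COMPLEXITY_LEVELS = ["basic", "intermediate", "advanced", "expert"]

def classify_complexity(description: str) -> str:
    """Stand-in for the classifier LLM: a trivial word-count heuristic,
    used only to make the sketch runnable."""
    n = len(description.split())
    if n < 20:
        return "basic"
    if n < 50:
        return "intermediate"
    if n < 100:
        return "advanced"
    return "expert"

class ExpertLLM:
    """Placeholder for an LLM fine-tuned on one complexity level."""
    def __init__(self, level: str):
        self.level = level

    def generate(self, description: str) -> str:
        # A real expert would return generated Verilog code.
        return f"// Verilog from {self.level}-level expert for: {description}"

EXPERTS = {level: ExpertLLM(level) for level in COMPLEXITY_LEVELS}

def mev_llm_generate(description: str) -> str:
    # Route the design description to the matching expert.
    level = classify_complexity(description)
    return EXPERTS[level].generate(description)

print(mev_llm_generate("4-bit ripple carry adder"))
```

The design choice the sketch captures is that each expert sees only prompts of one complexity level, so fine-tuning is more targeted than training a single monolithic model.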
Statistics
The percentage of generated Verilog outputs that are syntactically and functionally correct improves by up to 23.9% using the pass@k metric compared to state-of-the-art approaches.
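For reference, pass@k is commonly computed with the unbiased estimator of Chen et al. (2021): given n generated samples of which c pass the tests, it estimates the probability that at least one of k drawn samples is correct. A minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn from n generations (c of them correct) passes."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# 20 generations, 5 functionally correct, k = 1:
print(pass_at_k(20, 5, 1))  # → 0.25
```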
Quotes
"The proposed multi-expert LLM architecture is depicted in Fig. 1."
"Empirical evidence from experiments highlights notable improvements in terms of the percentage of generated Verilog outputs that are syntactically and functionally correct."