Key Concepts
MATADOR is an automated tool that optimizes Tsetlin Machine models for efficient SoC-FPGA inference at the edge.
Abstract
The paper introduces MATADOR, an automated boolean-to-silicon tool for optimizing Tsetlin Machine (TM) models on SoC-FPGAs. It addresses the challenges of ML inference at the edge by exploiting the logic propositions and sparsity inherent in TM models. MATADOR accelerators outperform state-of-the-art implementations in speed, resource efficiency, and power consumption. The design methodology centers on bandwidth-driven data partitioning and hard-coded clause blocks to improve compute-unit efficiency. Through automation, MATADOR streamlines model training, RTL design generation, synthesis, and deployment onto SoC-FPGAs. Evaluation against the FINN flow demonstrates superior performance across multiple datasets.
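The key property the abstract relies on can be illustrated with a minimal sketch of Tsetlin Machine inference: each trained clause is just a conjunction (AND) over a sparse set of included literals, so it collapses to a small fixed boolean expression that maps naturally onto FPGA logic. The function and variable names below are illustrative, not MATADOR's actual API or RTL.

```python
def eval_clause(included_literals, x):
    """Evaluate one conjunctive TM clause over a boolean input vector x.

    included_literals: list of (index, negated) pairs selected in training.
    Excluded literals (the sparsity MATADOR exploits) simply do not appear,
    so each clause reduces to a small AND of a few inputs.
    """
    return all((not x[i]) if negated else x[i]
               for i, negated in included_literals)


def classify(pos_clauses, neg_clauses, x):
    """Class decision = votes of positive clauses minus negative clauses."""
    score = (sum(eval_clause(c, x) for c in pos_clauses)
             - sum(eval_clause(c, x) for c in neg_clauses))
    return score >= 0


# Hypothetical trained model: two positive clauses, one negative clause
# over three boolean features.
pos = [[(0, False), (1, True)],   # x0 AND NOT x1
       [(2, False)]]              # x2
neg = [[(1, False)]]              # x1
print(classify(pos, neg, [True, False, True]))  # → True
```

Because every clause is a fixed AND of literals after training, a tool can hard-code each one as a dedicated logic block and share common subexpressions between blocks, which is the intuition behind MATADOR's hard-coded clause blocks.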
Statistics
MATADOR accelerator designs are up to 13.4x faster than Quantized and Binary Deep Neural Network implementations.
MATADOR accelerators are up to 7x more resource-frugal than state-of-the-art alternatives.
MATADOR designs are up to 2x more power efficient than other implementations.
Quotes
"The benefits of MATADOR based accelerators arise from three interlinked points."
"MATADOR offers a middle ground between speed and logic footprint compared to BNN implementations."
"MATADOR leverages logic sharing between HCB blocks for enhanced efficiency."