Core Concepts
Constructing diverse weak models and selecting the most suitable one for classification improves DNN inference on microcontrollers.
Abstract
The paper introduces DiTMoS, a novel DNN training and inference framework focusing on model diversity.
DiTMoS utilizes a selector-classifiers architecture to improve accuracy by selecting the best classifier for each input sample.
Strategies include diverse training data splitting, adversarial selector-classifiers training, and heterogeneous feature aggregation.
Experimental results show up to a 13.4% accuracy improvement over baselines across three datasets.
An ablation study confirms the effectiveness of key components in DiTMoS.
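The selector-classifiers routing described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the function names, routing rule, and classifiers below are hypothetical stand-ins for the small DNNs that DiTMoS actually trains.

```python
# Hypothetical sketch of DiTMoS-style routing at inference time.
# In the real system, the selector and classifiers are small trained DNNs.

def selector(x):
    # Toy selector: route by the input's sign.
    # DiTMoS's selector is a DNN that scores classifiers per sample.
    return 0 if x >= 0 else 1

def classifier_a(x):
    # Weak classifier specialized on non-negative inputs.
    return "pos-class" if x > 1 else "neutral"

def classifier_b(x):
    # Weak classifier specialized on negative inputs.
    return "neg-class" if x < -1 else "neutral"

CLASSIFIERS = [classifier_a, classifier_b]

def predict(x):
    # The selector routes each input to one weak classifier,
    # so only one small model runs per sample (MCU-friendly).
    return CLASSIFIERS[selector(x)](x)
```

Because only the selector and one chosen classifier execute per sample, peak memory and latency stay close to that of a single weak model, which is what makes the approach viable on microcontrollers.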
Stats
Current approaches focus on compressing large, accurate DNN models into smaller ones.
DiTMoS instead constructs weak but diverse models and selects the most suitable one for each classification, improving DNN inference on microcontrollers.
Quotes
"DiTMoS achieves up to 13.4% accuracy improvement compared to the best baseline."
"We propose DiTMoS, a hierarchical selector-classifiers architecture, where the selector routes each input sample to the appropriate classifier for classification."