ADC-Less Hybrid Analog-Digital Compute in Memory Accelerator for Deep Learning Workloads
Core Concepts
ADC-Less Hybrid Analog-Digital CiM accelerator enhances efficiency and energy savings in deep learning workloads.
Abstract
Analog CiM accelerators are efficient for Deep Neural Network (DNN) workloads.
ADCs introduce significant power and area overhead in CiM architectures.
Extreme low-precision quantization eliminates the need for ADCs.
HCiM proposes a hybrid approach combining analog CiM crossbars with a digital array.
Sparsity in ternary quantization leads to energy savings.
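As a minimal sketch of why ternary quantization produces exploitable sparsity, the example below maps weights to {-1, 0, +1}; the threshold value and function names are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

# Hedged sketch: ternary quantization maps each weight to {-1, 0, +1}.
# The threshold here is an illustrative assumption. Zero weights create
# sparsity the hardware can exploit (e.g., by clock-gating idle units).
def ternarize(w, threshold=0.05):
    """Quantize real-valued weights to {-1, 0, +1} with a fixed threshold."""
    q = np.zeros_like(w, dtype=np.int8)
    q[w > threshold] = 1
    q[w < -threshold] = -1
    return q

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=1000)
q = ternarize(w)
sparsity = float(np.mean(q == 0))  # fraction of zeros -> skippable operations
```

With a Gaussian weight distribution, a sizable fraction of weights falls below the threshold and becomes exactly zero; the multiply-accumulate work for those weights can be skipped entirely, which is the source of the energy savings.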
HCiM achieves significant energy reductions compared to baseline architectures.
Introduction
Analog CiM accelerators reduce data movement between memory and processing units.
Traditional hardware faces efficiency barriers like the "memory wall."
Background and Challenges
Analog CiM accelerators perform matrix-vector multiplication (MVM) operations using a weight-stationary dataflow.
Partial-sum quantization reduces the precision of partial sums, relaxing ADC requirements.
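To illustrate where partial sums arise, the sketch below splits a crossbar MVM into row tiles and quantizes each tile's analog partial sum to a single bit (its sign), mimicking an ADC-less sense-amplifier readout; the tile size and the 1-bit choice are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def crossbar_mvm(W, x, tile_rows=64):
    """Tiled MVM: each crossbar tile produces an analog partial sum that is
    quantized to 1 bit (its sign) before digital accumulation (illustrative)."""
    out = np.zeros(W.shape[0])
    for start in range(0, W.shape[1], tile_rows):
        psum = W[:, start:start + tile_rows] @ x[start:start + tile_rows]
        out += np.sign(psum)  # extreme partial-sum quantization: sign only
    return out

rng = np.random.default_rng(1)
W = rng.choice([-1, 0, 1], size=(4, 128))  # ternary weights on the crossbar
x = rng.choice([0, 1], size=128)           # binary input activations
y = crossbar_mvm(W, x)
```

Because each tile contributes only a sign, the digital accumulator needs only a few bits per tile, which is what removes the need for a high-precision ADC at the crossbar output.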
Algorithm Hardware Co-design
Quantization-aware training reduces the complexity of scale-factor processing.
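As one concrete way training can simplify scale-factor processing: if quantization-aware training constrains each layer's scale factor to a power of two (an assumption for illustration, not a claim about HCiM's exact method), the digital array can apply it with an arithmetic shift instead of a full multiplier.

```python
def apply_scale_pow2(acc, shift):
    """Rescale an integer partial-sum accumulator by 2**-shift via a right
    shift, avoiding a digital multiplier (illustrative assumption)."""
    return acc >> shift

# A power-of-two scale of 1/8 becomes a shift by 3 bits: 37 // 8 == 4.
scaled = apply_scale_pow2(37, 3)
```

A shift costs far less energy and area in the digital array than a general integer multiplier, which is why constraining scale factors during training pays off in hardware.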
HCiM architecture combines analog CiM crossbars with a digital array for efficient MVM operations.
Evaluation
ResNet models are trained on the CIFAR-10 dataset at varying precision levels.
HCiM shows lower energy consumption and latency compared to baseline architectures.
Conclusion
HCiM offers a novel approach to reduce ADC costs in analog CiM accelerators.
Sparsity control and clock gating contribute to energy savings without compromising accuracy.
Source: arxiv.org (HCiM)
Statistics
ADCs alone consume as much as 60% of the energy and occupy nearly 80% of the area in CiM accelerators.
HCiM achieves energy reductions of up to 28× and 12× compared to analog CiM baseline architectures using 7-bit and 4-bit ADCs, respectively.