This summary describes the design and implementation of a spin-orbit torque MRAM (SOT-MRAM)-based probabilistic binary neural network (PBNN) system for efficient, noise-tolerant neural network computing. Key highlights:
The PBNN algorithm encodes each weight as a random binary bit, so the output of the probabilistic vector-matrix multiplication (PVMM) follows a normal distribution. Compared with a conventional binary neural network, this preserves more input detail under the same limited hardware resources.
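The source provides no code; as a rough illustration of how such a PVMM layer behaves, the minimal NumPy sketch below samples each binary weight from a Bernoulli distribution whose probability stands in for the trained weight value, so the bit-count output concentrates (approximately normally, by the central limit theorem) around the expected product. The function name `pvmm`, the random probability matrix, and the binarization threshold are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def pvmm(x_bin, p_weight, cycles=10, rng=rng):
    """Probabilistic vector-matrix multiplication (sketch).

    x_bin    : binary input vector, shape (n_in,)
    p_weight : per-weight switching probabilities in [0, 1], shape (n_in, n_out)
    cycles   : number of sampling cycles; each cycle draws a fresh random
               binary weight matrix W ~ Bernoulli(p_weight)
    Returns per-cycle bit-count outputs, shape (cycles, n_out). Summing many
    random bits makes each output approximately normal around x_bin @ p_weight.
    """
    outs = []
    for _ in range(cycles):
        w_bin = (rng.random(p_weight.shape) < p_weight).astype(np.int32)
        outs.append(x_bin @ w_bin)          # popcount-style accumulation
    return np.stack(outs)

# toy example: 64 binary inputs, 8 output neurons
x = rng.integers(0, 2, size=64)
p = rng.random((64, 8))                     # hypothetical trained probabilities
y_cycles = pvmm(x, p, cycles=10)
y_mean = y_cycles.mean(axis=0)              # expectation is roughly x @ p
y_bin = (y_mean > x.sum() / 2).astype(int)  # example binarization threshold
print(y_mean, y_bin)
```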
The SOT-MRAM device exhibits a controllable switching probability, which is used to generate the random weight matrix. The proposed compute-in-memory (CIM) architecture performs the PVMM and binarization operations concurrently.
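How the weight probabilities map onto device write conditions cannot be reproduced from this summary; the sketch below assumes a generic sigmoid-shaped switching-probability curve versus normalized write current (a common first-order model, not the paper's measured characteristic) and inverts it to pick the bias that programs a cell to a target probability.

```python
import numpy as np

def switching_probability(i_write, i_c=1.0, sharpness=8.0):
    """Hypothetical sigmoid model of SOT-MRAM switching probability versus
    normalized write current; the real curve comes from the paper's device
    characterization, not this expression."""
    return 1.0 / (1.0 + np.exp(-sharpness * (i_write - i_c)))

def current_for_probability(p, i_c=1.0, sharpness=8.0):
    """Invert the model: choose the write current that biases a cell to
    switch with probability p, so a programmed array of cells realizes
    the random binary weight matrix."""
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return i_c + np.log(p / (1 - p)) / sharpness

# program a 4x4 block of cells to target probabilities
targets = np.array([[0.1, 0.5, 0.9, 0.3]] * 4)
currents = current_for_probability(targets)
print(np.round(switching_probability(currents), 3))  # recovers the targets
```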
Simulation results show the PBNN system achieves 97.78% classification accuracy on the MNIST dataset with 10 sampling cycles, while reducing the number of bit-level computations by 6.9x compared to a full-precision LeNet-5 network. The PBNN also exhibits high noise tolerance, maintaining over 90% accuracy even with 50% weight variation.
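As a hedged illustration of the two knobs these results refer to, the snippet below averages PVMM outputs over several sampling cycles and injects relative Gaussian perturbations into the switching probabilities to mimic weight variation; the variation model and all numbers are placeholders, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def with_variation(p_weight, sigma, rng=rng):
    """Perturb each switching probability with relative Gaussian noise
    (an illustrative variation model, not the paper's) and clip to [0, 1]."""
    noisy = p_weight * (1.0 + sigma * rng.standard_normal(p_weight.shape))
    return np.clip(noisy, 0.0, 1.0)

def sampled_output(x_bin, p_weight, cycles, rng=rng):
    """Average PVMM bit counts over several sampling cycles; more cycles
    tighten the estimate of the expected output x_bin @ p_weight."""
    w = rng.random((cycles, *p_weight.shape)) < p_weight
    return (x_bin @ w.astype(np.int32)).mean(axis=0)

x = rng.integers(0, 2, size=64)
p = rng.random((64, 8))
nominal = sampled_output(x, p, cycles=10)
noisy = sampled_output(x, with_variation(p, sigma=0.5), cycles=10)
print(np.abs(nominal - noisy).mean())  # average drift under 50% variation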
Hardware implementation details are provided, including SOT-MRAM device characterization, CIM circuit design, and end-to-end system simulation. The analysis examines the trade-off between accuracy, sampling cycles, and power consumption to identify an optimal operating point.
In summary, the SOT-MRAM-based PBNN system presents a compelling framework for designing reliable and efficient neural networks tailored to low-power edge computing applications.