Core Concepts
Optimizing deep neural network architectures for resource-constrained edge environments while maintaining high accuracy.
Summary
This paper proposes optimizing Deep Neural Networks (DNNs) to improve hardware utilization and enable on-device training in resource-constrained edge environments. Efficient parameter reduction strategies are applied to Xception to shrink the model without sacrificing accuracy, decreasing memory usage during training. Two experiments, Caltech-101 image classification and PCB defect detection, show that the optimized model outperforms Xception and lightweight baseline models in test accuracy, memory usage, and training and inference times. Transfer learning also benefits, with decreased memory usage. The optimized model architecture achieves Pareto optimality, balancing the competing objectives of high accuracy and low memory utilization.
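The summary does not spell out which parameter reduction strategies the paper uses. As a rough illustration of the kind of saving available in an Xception-style architecture, the sketch below compares the parameter count of a standard convolution against a depthwise separable convolution (Xception's core building block); the layer sizes (3×3 kernel, 128→256 channels) are arbitrary examples, not taken from the paper.

```python
def conv_params(k, c_in, c_out):
    # Standard 2-D convolution: one k x k x c_in filter per output channel.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depthwise separable convolution: a k x k depthwise filter per input
    # channel, followed by a 1x1 pointwise convolution mixing channels.
    return k * k * c_in + c_in * c_out

std = conv_params(3, 128, 256)            # 294912 parameters
sep = separable_conv_params(3, 128, 256)  # 33920 parameters
print(std, sep, round(std / sep, 1))      # roughly 8.7x fewer parameters
```

Fewer parameters means smaller weight tensors and smaller gradients, which is what drives the reduced training-time memory footprint reported above.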
Statistics
Our model achieves a higher test accuracy of 76.21% on Caltech-101, compared to Xception's 75.89%.
Our model's average memory usage is 847.9 MB, compared to Xception's 874.6 MB.
Among the lightweight baseline models, MobileNetV2 has the lowest average memory usage, at 849.4 MB.
On PCB defect detection, our model achieves the best test accuracy of 90.30%, compared to Xception's 88.10%.
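The Pareto-optimality claim can be read as a dominance check over (accuracy, memory) pairs: a model is Pareto-optimal if no other model is at least as good on both objectives and strictly better on one. A minimal sketch using the Caltech-101 numbers above; MobileNetV2's accuracy is a hypothetical placeholder, since the summary reports only its memory usage.

```python
def dominates(a, b):
    # a dominates b if a is at least as accurate and uses no more memory,
    # and is strictly better on at least one of the two objectives.
    acc_a, mem_a = a
    acc_b, mem_b = b
    return acc_a >= acc_b and mem_a <= mem_b and (acc_a > acc_b or mem_a < mem_b)

def pareto_front(models):
    # Return the names of models not dominated by any other model.
    return [name for name, obj in models.items()
            if not any(dominates(other, obj)
                       for oname, other in models.items() if oname != name)]

models = {                               # (test accuracy %, avg memory MB)
    "ours":        (76.21, 847.9),       # reported
    "Xception":    (75.89, 874.6),       # reported
    "MobileNetV2": (71.00, 849.4),       # accuracy hypothetical; memory reported
}

print(pareto_front(models))
```

With these numbers the optimized model dominates both baselines (higher accuracy and lower memory), so it alone sits on the Pareto front.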
Quotes
"Our optimized model architecture satisfies both accuracy and low memory utilization objectives."
"Transfer learning shows benefits with decreased memory usage."
"The results demonstrate improved performance over original Xception and lightweight models."