Core Concepts
Catastrophic overfitting (CO) can be harnessed to improve model performance by exploiting the salient feature-activation differences it induces and by adding random noise to inputs at evaluation time.
Key Findings
"CO occurs with salient feature activation differences."
"Models trained stably with regularization terms exhibit superior performance."
"Adding random noise to inputs during evaluation helps CO-affected models achieve optimal accuracy."
Quotes
"CO can be attributed to the feature coverage induced by specific pathways."
"Models suffering from CO can attain optimal classification accuracy by adding random noise."
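The noise-at-evaluation idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper `noisy_accuracy`, the linear toy classifier, and all parameter values (noise bound `eps`, number of noisy samples `n_samples`) are hypothetical choices made for the example. The sketch averages logits over several uniformly-noised copies of each input before taking the argmax.

```python
import numpy as np

def noisy_accuracy(predict_logits, X, y, eps=0.1, n_samples=8, seed=0):
    """Classification accuracy when random noise is added at evaluation.

    Averages logits over `n_samples` copies of each input perturbed by
    uniform noise in [-eps, eps], then scores the argmax predictions.
    (Hypothetical helper; eps and n_samples are illustrative values.)
    """
    rng = np.random.default_rng(seed)
    logits = sum(
        predict_logits(X + rng.uniform(-eps, eps, size=X.shape))
        for _ in range(n_samples)
    )
    return float((logits.argmax(axis=1) == y).mean())

# Toy stand-in for a trained (possibly CO-affected) classifier:
# a random linear map from 5 features to 3 classes.
rng = np.random.default_rng(1)
W = rng.normal(size=(5, 3))
X = rng.normal(size=(64, 5))
y = (X @ W).argmax(axis=1)  # labels the clean linear model predicts correctly

acc = noisy_accuracy(lambda Z: Z @ W, X, y, eps=0.05)
```

With a small noise bound, the averaged noisy logits stay close to the clean logits, so accuracy on this toy model remains near 1.0; for a real CO-affected network the claim is that noise helps rather than merely not hurting.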