Core Concepts
Discovering new loss functions that outperform cross-entropy in large-scale CNNs.
Abstract
The article explores Neural Loss Function Evolution for large-scale image-classifier convolutional neural networks. It introduces a new search space and a surrogate function for finding loss functions that outperform cross-entropy. After an evolutionary search followed by elimination protocols, three new loss functions, NeuroLoss1, NeuroLoss2, and NeuroLoss3, were discovered that outperform cross-entropy across a range of architectures and datasets.
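The key practical point is that a loss function is just a scalar-valued callable on predictions and targets, so an evolved loss can replace cross-entropy without touching the rest of the training loop. A minimal sketch (the `evaluate` helper and the toy batch are illustrative, not from the paper):

```python
import math

def cross_entropy(probs, label):
    # Standard cross-entropy for one example: -log p(true class).
    return -math.log(probs[label])

def mean_loss(loss_fn, batch):
    # The training loop only needs a callable; an evolved loss such as
    # NeuroLoss1 would be passed here in place of cross_entropy.
    return sum(loss_fn(p, y) for p, y in batch) / len(batch)

batch = [([0.7, 0.2, 0.1], 0), ([0.1, 0.8, 0.1], 1)]
print(round(mean_loss(cross_entropy, batch), 4))  # 0.2899
```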
Stats
"NeuroLoss1 achieved a mean test accuracy of 95.952%."
"NeuroLoss2 surpassed cross-entropy with a mean test accuracy of 95.906%."
"NeuroLoss3 showed a mean test accuracy of 95.816%."
Quotes
"We propose a derivative of the NASNet search space specifically for loss functions."
"Three new loss functions, called NeuroLoss1, NeuroLoss2, and NeuroLoss3 were discovered."
"The final loss functions were transferred across multiple architectures and datasets."
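A NASNet-style search space for loss functions treats each candidate as an expression graph built from primitive operations over the prediction and the target. The sketch below uses a hypothetical primitive set for illustration; the paper's actual operator list and graph structure may differ:

```python
import math

# Hypothetical primitive operations (assumed, not the paper's exact set).
UNARY = {
    "neg": lambda a: -a,
    "log": lambda a: math.log(max(a, 1e-12)),  # clamp for numerical safety
}
BINARY = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}

def evaluate(node, y, p):
    # Recursively evaluate an expression tree over target y and prediction p.
    if node == "y":
        return y
    if node == "p":
        return p
    op, *args = node
    if op in UNARY:
        return UNARY[op](evaluate(args[0], y, p))
    return BINARY[op](evaluate(args[0], y, p), evaluate(args[1], y, p))

# Per-class cross-entropy, -y * log(p), encoded as a tree in this space:
ce_tree = ("neg", ("mul", "y", ("log", "p")))
print(round(evaluate(ce_tree, 1.0, 0.7), 4))  # 0.3567, i.e. -ln(0.7)
```

Evolution then proceeds by mutating such trees (swapping operators or subtrees) and scoring candidates with the surrogate function before full training.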