The paper revives Densely Connected Convolutional Networks (DenseNets) and demonstrates their previously overlooked potential. Through a comprehensive pilot study, the authors show that feature concatenation can surpass the additive shortcuts used in prevalent architectures such as ResNets.
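The core distinction can be sketched in a few lines of plain Python. This is a hypothetical illustration, not the paper's code: toy "feature maps" are flat lists of channel values, `layer` stands in for a convolution, and the growth rate of 4 channels is an assumed value.

```python
# Hypothetical sketch contrasting additive vs. concatenative shortcuts.
# Feature maps are modeled as flat lists of channel values.

def layer(channels):
    # Stand-in for a conv layer: emits a fixed number of new channels
    # (4 here, an assumed "growth rate").
    mean = sum(channels) / len(channels)
    return [mean] * 4

def residual_block(x):
    # Additive shortcut (ResNet-style): the output width must equal the
    # input width, so new features are naively tiled to match len(x).
    f = layer(x)
    f = (f * ((len(x) + len(f) - 1) // len(f)))[:len(x)]
    return [a + b for a, b in zip(x, f)]

def dense_block(x):
    # Concatenative shortcut (DenseNet-style): input features are kept
    # verbatim and new channels are appended, so width grows each layer.
    return x + layer(x)

x = [1.0] * 8                    # 8 input channels
print(len(residual_block(x)))    # width stays constant: 8
print(len(dense_block(x)))       # width grows by the growth rate: 12
```

The key point the paper builds on: addition overwrites information by summing features in place, while concatenation preserves earlier features verbatim and lets later layers reuse them directly.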
The authors then modernize DenseNet with a more memory-efficient design, discarding ineffective components and refining the architectural and block designs, while preserving the essence of dense connectivity via concatenation. The resulting architecture, dubbed Revitalized DenseNet (RDNet), ultimately exceeds the performance of strong modern architectures such as Swin Transformer, ConvNeXt, and DeiT-III on ImageNet-1K. RDNet also exhibits competitive performance on downstream tasks such as ADE20K semantic segmentation and COCO object detection/instance segmentation.
Notably, RDNet does not exhibit slowdown or degradation as the input size increases, unlike width-oriented networks that struggle with larger intermediate tensors. The authors provide empirical analyses that shed light on the unique benefits of concatenation over additive shortcuts.
Key Insights Distilled From
by Donghyun Kim... at arxiv.org 03-29-2024
https://arxiv.org/pdf/2403.19588.pdf