Bibliographic Information: Beaglehole, D., Mitliagkas, I., & Agarwala, A. (2024). Feature learning as alignment: A structural property of gradient descent in non-linear neural networks. Transactions on Machine Learning Research.
Research Objective: This paper investigates the mechanism behind the Neural Feature Ansatz (NFA), aiming to explain the observed correlation between the Neural Feature Matrix (NFM) and the Average Gradient Outer Product (AGOP) in trained neural networks.
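For reference, the NFA can be stated compactly as follows; this is a sketch in standard notation (layer index ℓ, sample count n, network output f) introduced here rather than copied from the paper, and some formulations allow a matrix power on the AGOP:

```latex
\[
  \mathrm{NFM}_\ell = W_\ell^\top W_\ell ,
  \qquad
  \mathrm{AGOP}_\ell = \frac{1}{n}\sum_{i=1}^{n}
      \nabla_{x_\ell} f(x_i)\,\nabla_{x_\ell} f(x_i)^\top ,
\]
\[
  \text{NFA:}\ \ \mathrm{NFM}_\ell \propto \mathrm{AGOP}_\ell ,
  \qquad
  \mathrm{NFC}_\ell =
    \frac{\langle \mathrm{NFM}_\ell,\ \mathrm{AGOP}_\ell\rangle_F}
         {\|\mathrm{NFM}_\ell\|_F\,\|\mathrm{AGOP}_\ell\|_F} .
\]
```

Here x_ℓ denotes the input to layer ℓ, and the Neural Feature Correlation (NFC) is the cosine similarity between the two matrices; the NFA holds to the extent that this correlation is close to 1.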
Methodology: The authors analyze the NFA by decomposing the AGOP and relating it to the pre-activation tangent kernel (PTK). They introduce the centered Neural Feature Correlation (C-NFC) to isolate the alignment between weight changes and the PTK. They analyze the C-NFC dynamics theoretically under gradient flow, particularly at early training times, in high-dimensional settings such as data distributed uniformly on the sphere and the linear co-scaling regime. They also propose Speed Limited Optimization (SLO), a layer-wise gradient normalization scheme, to increase the C-NFC and thereby promote the NFA.
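To make these quantities concrete, the following is a minimal, self-contained sketch (not the authors' implementation) of how the uncentered NFC and the centered NFC could be estimated for the first layer of a small fully connected network; the weights are centered at their initialization, one natural reading of "weight changes" above, and all names are chosen for illustration:

```python
# Illustrative sketch (not the authors' code): estimating the uncentered NFC
# and the centered NFC (C-NFC) for the first layer of a small ReLU network.
import torch

def cosine_sim(A, B):
    """Cosine similarity under the Frobenius inner product."""
    return (A * B).sum() / (A.norm() * B.norm())

torch.manual_seed(0)
n, d, h = 256, 10, 64
X = torch.randn(n, d)
y = (X[:, 0] * X[:, 1]).unsqueeze(1)        # simple nonlinear target

W1 = (torch.randn(h, d) / d ** 0.5).requires_grad_()
W2 = (torch.randn(1, h) / h ** 0.5).requires_grad_()
W1_init = W1.detach().clone()               # kept to center the weights later

def f(x):
    return torch.relu(x @ W1.T) @ W2.T      # two-layer ReLU network

opt = torch.optim.SGD([W1, W2], lr=0.1)
for _ in range(500):                        # brief gradient-descent training
    opt.zero_grad()
    ((f(X) - y) ** 2).mean().backward()
    opt.step()

# AGOP of the network w.r.t. the first layer's input (here, the raw input x).
Xg = X.clone().requires_grad_(True)
grads = torch.autograd.grad(f(Xg).sum(), Xg)[0]   # per-sample input gradients
agop = grads.T @ grads / n

W1_final = W1.detach()
nfm_uncentered = W1_final.T @ W1_final                         # W^T W
nfm_centered = (W1_final - W1_init).T @ (W1_final - W1_init)   # (W - W0)^T (W - W0)

print("uncentered NFC:", cosine_sim(nfm_uncentered, agop).item())
print("centered NFC:  ", cosine_sim(nfm_centered, agop).item())
```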
Key Findings: The study shows that the NFA arises from alignment between the left singular structure of the weight matrices and the pre-activation tangent features at each layer. This alignment is driven by the interaction of the weight changes induced by stochastic gradient descent (SGD) with the pre-activation features. The C-NFC, a metric quantifying this alignment, is high early in training and largely determines the final uncentered NFC (UC-NFC). The authors also demonstrate that manipulating the data distribution predictably alters the C-NFC.
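A short derivation makes the alignment statement concrete (a sketch in notation introduced here, with h_ℓ = W_ℓ x_ℓ the pre-activation of layer ℓ, so the chain rule gives ∇_{x_ℓ} f = W_ℓ^⊤ ∇_{h_ℓ} f):

```latex
\[
  \mathrm{AGOP}_\ell
  = \frac{1}{n}\sum_{i=1}^{n} \nabla_{x_\ell} f(x_i)\,\nabla_{x_\ell} f(x_i)^\top
  = W_\ell^\top
    \Big(\frac{1}{n}\sum_{i=1}^{n}
         \nabla_{h_\ell} f(x_i)\,\nabla_{h_\ell} f(x_i)^\top\Big)\, W_\ell .
\]
```

In this form the NFA compares W_ℓ^⊤ W_ℓ with W_ℓ^⊤ (·) W_ℓ, so it holds, roughly, when the left singular directions of W_ℓ align with the eigenvectors of the bracketed pre-activation tangent second moment; this is the alignment the paper attributes to the weight updates.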
Main Conclusions: The research establishes the NFA as a structural property of gradient descent in neural networks. It highlights the role of weight-PTK alignment in feature learning and provides a theoretical framework for understanding the emergence of the NFA. The proposed SLO method demonstrates the potential for designing optimization techniques that explicitly promote feature learning by maximizing the C-NFC.
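As a rough illustration of what a layer-wise gradient normalization rule can look like, here is a minimal sketch based only on the description above, not the paper's exact SLO update; the function name and arguments are hypothetical:

```python
# Sketch of a layer-wise gradient normalization step (an assumed reading of
# "Speed Limited Optimization"; the paper's actual rule may differ in detail).
import torch

def slo_style_step(params, lr=0.1, eps=1e-12):
    """Gradient step in which each parameter tensor's gradient is rescaled to
    unit Frobenius norm, so every layer moves at the same speed."""
    with torch.no_grad():
        for p in params:
            if p.grad is None:
                continue
            p -= lr * p.grad / (p.grad.norm() + eps)
```

After a standard `loss.backward()`, calling `slo_style_step(model.parameters())` would replace the usual optimizer step.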
Significance: This work significantly contributes to the theoretical understanding of feature learning in neural networks. By elucidating the mechanism behind the NFA, it offers valuable insights into the inner workings of deep learning models.
Limitations and Future Research: The study primarily focuses on fully connected networks. Further research could explore the applicability of these findings to other architectures like convolutional neural networks and recurrent neural networks. Investigating the interplay between C-NFC and generalization error, and extending the analysis beyond early training dynamics are promising avenues for future work.