Efficient Relative Positional Encodings in Transformers via Learned Fourier Transforms
FourierLearner-Transformers (FLTs) efficiently incorporate a wide range of relative positional encoding mechanisms into Transformer models, enabling linear-complexity attention while maintaining strong performance across diverse tasks and data modalities.
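The linear-complexity claim rests on a factorization trick: a relative positional function f(i - j) expressed as a sum of cosines of learned frequencies decomposes, via the identity cos(a - b) = cos(a)cos(b) + sin(a)sin(b), into an inner product of per-position feature maps, so the RPE mask never has to be materialized quadratically. Below is a toy NumPy sketch of that factorization under stated assumptions — the names `omega`, `K`, and the uniform averaging over frequencies are illustrative choices, not the paper's actual parameterization or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
L, K = 6, 8  # toy sequence length and number of learned frequencies
omega = rng.normal(size=K)  # stand-in for learned Fourier frequencies

pos = np.arange(L)
# Per-position feature map phi(i) = [cos(omega_k i), sin(omega_k i)] / sqrt(K)
phi = np.concatenate([np.cos(np.outer(pos, omega)),
                      np.sin(np.outer(pos, omega))], axis=1) / np.sqrt(K)

# Low-rank form: f(i - j) recovered as an inner product phi(i) . phi(j)
mask_lowrank = phi @ phi.T

# Direct evaluation of the same function: f(r) = (1/K) sum_k cos(omega_k r)
diff = pos[:, None] - pos[None, :]
mask_direct = np.cos(diff[..., None] * omega).mean(axis=-1)

# The two agree, so the relative-position mask is exactly low-rank here,
# which is what lets it fold into linear-attention feature maps.
assert np.allclose(mask_lowrank, mask_direct)
```

Because the mask factors into per-position features, it can be absorbed into the query/key feature maps of a linear-attention mechanism rather than applied as an explicit L-by-L bias.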