Efficient document embeddings are crucial for many NLP tasks, and self-contrastive learning with a Bregman divergence objective can improve representation quality for long documents.
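To make the named objective concrete, here is a minimal, hypothetical sketch of a self-contrastive loss built on a generic Bregman divergence. The functions `bregman_divergence`, `sq_norm`, and `self_contrastive_loss` are illustrative names, not the paper's implementation; choosing the convex generator phi(x) = ||x||^2 recovers the squared Euclidean distance as a special case.

```python
import math

def bregman_divergence(x, y, phi, grad_phi):
    # D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>
    inner = sum(g * (xi - yi) for g, xi, yi in zip(grad_phi(y), x, y))
    return phi(x) - phi(y) - inner

# Generator phi(x) = ||x||^2; its Bregman divergence is squared Euclidean distance.
def sq_norm(v):
    return sum(vi * vi for vi in v)

def grad_sq_norm(v):
    return [2.0 * vi for vi in v]

def self_contrastive_loss(anchor, positive, negatives, temperature=1.0):
    # InfoNCE-style objective where similarity is the negative Bregman divergence:
    # the anchor is pulled toward its own augmented view (self-contrastive positive)
    # and pushed away from other documents' embeddings (negatives).
    pos = math.exp(-bregman_divergence(anchor, positive, sq_norm, grad_sq_norm)
                   / temperature)
    negs = sum(
        math.exp(-bregman_divergence(anchor, n, sq_norm, grad_sq_norm) / temperature)
        for n in negatives
    )
    return -math.log(pos / (pos + negs))
```

Swapping in a different convex generator (e.g., negative entropy, which yields a KL-type divergence) changes the geometry of the embedding space without changing the training loop.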