Sparse pre-training of biomedical language models improves both computational efficiency and downstream accuracy, achieving state-of-the-art results on standard benchmarks.
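A core primitive behind sparse pre-training is zeroing out low-magnitude weights so that only a small fraction participate in each update. The sketch below is a minimal, hypothetical illustration of magnitude-based masking in NumPy; the function name `magnitude_mask` and the 90% sparsity level are assumptions for illustration, not the specific method evaluated here.

```python
import numpy as np


def magnitude_mask(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Binary mask that keeps the largest-magnitude weights.

    Zeroes out the `sparsity` fraction of entries with the smallest
    absolute values -- a common primitive in sparse training schemes.
    """
    k = int(sparsity * weights.size)
    if k == 0:
        return np.ones(weights.shape, dtype=bool)
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.abs(weights) > threshold


# Toy weight matrix standing in for one layer of a language model.
rng = np.random.default_rng(0)
w = rng.normal(size=(128, 128))
mask = magnitude_mask(w, sparsity=0.9)
w_sparse = w * mask  # pruned weights are exactly zero
print(f"fraction of weights zeroed: {1 - mask.mean():.2f}")
```

In an actual sparse pre-training loop, a mask like this would be reapplied after each gradient step (and possibly regrown) so the model trains under the sparsity constraint rather than being pruned once after training.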