Scaling Biological Representation Learning for Cell Microscopy Using a 1.9 Billion-Parameter Vision Transformer
This paper introduces MAE-G/8, a 1.9 billion-parameter Vision Transformer trained as a masked autoencoder on a curated dataset of 16 million cell microscopy images. The model sets state-of-the-art results in replicate consistency and in the recall of known gene-gene relationships, demonstrating that scaling model and dataset size substantially improves biological representation learning for cell microscopy.
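As a rough illustration of the model family named in the title, the sketch below implements a minimal masked-autoencoder (MAE) pretraining forward pass for a Vision Transformer, assuming the "G/8" suffix denotes a ViT-Giant backbone with 8x8-pixel patches. All widths, depths, the mask ratio, and the 3-channel input are illustrative placeholders kept small enough to run, not the paper's configuration; details such as decoder positional embeddings, token un-shuffling, and the reconstruction loss are omitted.

```python
# Minimal MAE-style sketch, assuming "G/8" means a ViT-Giant backbone with
# 8x8-pixel patches. Dimensions are deliberately small placeholders; the
# paper's ViT-G scale (~1.9B parameters) would use far larger width and depth.
# A 3-channel input is assumed for simplicity; microscopy images may have more.
import torch
import torch.nn as nn


class MAESketch(nn.Module):
    def __init__(self, img_size=256, patch=8, dim=256, depth=4, heads=8,
                 dec_dim=128, dec_depth=2, mask_ratio=0.75):
        super().__init__()
        self.num_patches = (img_size // patch) ** 2
        self.mask_ratio = mask_ratio
        # Patch embedding: each 8x8 patch becomes one token of width `dim`.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        enc_layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, depth)
        # Lightweight decoder predicts raw pixels of the masked patches.
        self.dec_embed = nn.Linear(dim, dec_dim)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dec_dim))
        dec_layer = nn.TransformerEncoderLayer(dec_dim, 8, dec_dim * 4, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, dec_depth)
        self.head = nn.Linear(dec_dim, patch * patch * 3)

    def forward(self, imgs):
        b = imgs.shape[0]
        tokens = self.patch_embed(imgs).flatten(2).transpose(1, 2) + self.pos
        # Keep a random subset of patches; the encoder only sees visible tokens.
        keep = int(self.num_patches * (1 - self.mask_ratio))
        order = torch.rand(b, self.num_patches, device=imgs.device).argsort(dim=1)
        keep_idx = order[:, :keep].unsqueeze(-1).expand(-1, -1, tokens.shape[-1])
        latent = self.encoder(torch.gather(tokens, 1, keep_idx))
        # Decoder sees encoded visible tokens plus mask tokens for hidden patches.
        dec_in = torch.cat(
            [self.dec_embed(latent),
             self.mask_token.expand(b, self.num_patches - keep, -1)], dim=1)
        return self.head(self.decoder(dec_in))  # (b, num_patches, patch*patch*3)


if __name__ == "__main__":
    model = MAESketch()
    preds = model(torch.randn(2, 3, 256, 256))
    print(preds.shape)  # torch.Size([2, 1024, 192])
```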