Temporal Masked Autoencoders (T-MAE) improve representation learning on sparse point clouds by incorporating historical frames into self-supervised masked-autoencoder pre-training.
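To make the idea concrete, here is a minimal NumPy sketch of temporal masked autoencoding on a point cloud: a large fraction of the current frame's points are hidden, the "encoder" sees only the visible points plus the full historical frame, and the reconstruction loss is computed on the masked points. All names are illustrative, and the mean-pooling encoder/decoder is a toy stand-in for a learned backbone, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mask(num_tokens: int, mask_ratio: float, rng) -> np.ndarray:
    """Return a boolean mask: True = masked (hidden from the encoder)."""
    num_masked = int(num_tokens * mask_ratio)
    idx = rng.permutation(num_tokens)
    mask = np.zeros(num_tokens, dtype=bool)
    mask[idx[:num_masked]] = True
    return mask

# Toy frames: N points with xyz coordinates each.
N = 64
prev_frame = rng.normal(size=(N, 3))  # historical frame (fully visible)
curr_frame = rng.normal(size=(N, 3))  # current frame (partially masked)

mask = random_mask(N, mask_ratio=0.75, rng=rng)
visible = curr_frame[~mask]

# Toy "encoder": pool the visible current points together with all
# historical points into one global feature (stand-in for a real network).
context = np.concatenate([visible, prev_frame], axis=0)
global_feat = context.mean(axis=0)

# Toy "decoder": predict every masked point as the global feature,
# then score reconstruction only on the masked points (the MAE objective).
pred = np.tile(global_feat, (mask.sum(), 1))
loss = float(np.mean((pred - curr_frame[mask]) ** 2))
```

The key property this sketch keeps from the MAE recipe is that the loss is taken only over masked points, while the historical frame supplies extra context that a single sparse frame lacks.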