Large language models generate text with significantly fewer grounding acts than humans do, indicating a fundamental gap in how they establish common ground.