Decreasing isotropy in contextualized language model representations tends to improve performance on downstream tasks, while increasing isotropy hampers it.
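Isotropy here can be quantified with the partition-function measure of Mu and Viswanath (2018): the ratio of the minimum to the maximum of Z(c) = Σ_w exp(c·w), evaluated at the unit eigenvectors c of WᵀW, where scores near 1 indicate a nearly isotropic embedding space and scores near 0 a highly anisotropic one. A minimal sketch, using NumPy and toy Gaussian data (the function name and synthetic embeddings are illustrative, not from any particular model):

```python
import numpy as np

def isotropy_score(W):
    """Approximate isotropy of an embedding matrix W (n_vectors x dim)
    via the partition-function measure of Mu & Viswanath (2018):
    min_c Z(c) / max_c Z(c), with Z(c) = sum_w exp(c . w) evaluated at
    the eigenvectors c of W^T W.  ~1 means isotropic, ~0 anisotropic."""
    _, V = np.linalg.eigh(W.T @ W)    # unit eigenvectors as columns of V
    Z = np.exp(W @ V).sum(axis=0)     # partition function at each eigenvector
    return Z.min() / Z.max()

rng = np.random.default_rng(0)
isotropic = rng.normal(size=(1000, 32))   # directions roughly uniform on the sphere
anisotropic = isotropic + 4.0             # shared offset creates one dominant direction

print(isotropy_score(isotropic))          # closer to 1
print(isotropy_score(anisotropic))        # much closer to 0
```

Post-processing steps that increase or decrease this score (e.g. mean removal, projecting out or amplifying dominant principal components) are the usual levers for manipulating isotropy in such studies.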