Certain consecutive layers in large language models have minimal impact on hidden states, allowing for effective layer pruning without significant performance degradation.
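One common way to operationalize this idea is to score a block of consecutive layers by how similar its output hidden state is to its input: a block that barely rotates the representation is a pruning candidate. The sketch below illustrates this with NumPy on simulated hidden states; the function names (`block_influence`, `prunable_block`), the cosine-based score, and the toy data are all illustrative assumptions, not the exact procedure of any particular paper.

```python
import numpy as np

def block_influence(h_in, h_out):
    # Cosine-similarity-based score: a block whose output stays close
    # to its input has low influence and is a pruning candidate.
    num = np.sum(h_in * h_out, axis=-1)
    den = np.linalg.norm(h_in, axis=-1) * np.linalg.norm(h_out, axis=-1)
    return 1.0 - np.mean(num / den)  # mean cosine distance over tokens

def prunable_block(hidden_states, n_prune):
    """Return the start index of the n_prune consecutive layers whose
    removal least perturbs the hidden state, i.e. whose collective
    input and output are most similar (hypothetical helper)."""
    L = len(hidden_states) - 1  # hidden_states[l] is the input to layer l
    best_start, best_score = 0, float("inf")
    for start in range(L - n_prune + 1):
        score = block_influence(hidden_states[start],
                                hidden_states[start + n_prune])
        if score < best_score:
            best_start, best_score = start, score
    return best_start, best_score

# Toy demo: simulate per-layer hidden states where some middle layers
# apply only near-identity (tiny residual) updates.
rng = np.random.default_rng(0)
h = [rng.normal(size=(4, 16))]  # (tokens, hidden_dim)
for layer in range(8):
    scale = 0.01 if 3 <= layer <= 6 else 1.0  # layers 3-6 barely change h
    h.append(h[-1] + scale * rng.normal(size=(4, 16)))

start, score = prunable_block(h, n_prune=3)
print(start, score)  # a block inside the near-identity region wins
```

In a real model one would collect `hidden_states` by running a calibration set through the network, pick the lowest-scoring block, drop those layers, and (optionally) briefly fine-tune to recover any lost quality.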