The SciAssess benchmark evaluates Large Language Models (LLMs) on scientific literature analysis, focusing on their abilities in memorization, comprehension, and analysis.