An analysis of the temporal regularities of information consumption on Wikipedia reveals distinct patterns across topics, access methods, and readers' countries.
Large Language Models (LLMs) tend to rate Information Hazards as less harmful than other categories of harm, highlighting a critical security concern and the need for improved AI safety measures.
The author explores the Mutual Reinforcement Effect (MRE) theory through information flow analysis, demonstrating its impact on text classification tasks.
The author explores the challenges of pre-training Graph Neural Networks for context-based fake news detection, finding that current pre-training strategies yield no significant improvements, in part due to resource limitations.