This study evaluates how well ChatGPT, a large language model, detects various types of media bias, including racial bias, gender bias, cognitive bias, text-level context bias, hate speech, and fake news, and compares its performance with that of fine-tuned models such as BART, ConvBERT, and GPT-2.
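A minimal sketch of how such a comparison might be set up, assuming a small labeled bias dataset; the prompt wording, example sentences, label set, and stand-in checkpoint below are illustrative, not the study's actual configuration:

```python
from openai import OpenAI
from transformers import pipeline

# Hypothetical labeled examples; the study uses established media-bias datasets.
examples = [
    ("Women are too emotional to lead companies.", "biased"),
    ("The city council approved the new budget on Tuesday.", "non-biased"),
]

# Stand-in checkpoint for a fine-tuned baseline; the study fine-tunes BART,
# ConvBERT, and GPT-2 on bias data instead.
finetuned = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Zero-shot ChatGPT classification via a simple prompt.
client = OpenAI()

def chatgpt_label(text: str) -> str:
    prompt = (
        "Classify the following sentence as 'biased' or 'non-biased'. "
        f"Answer with one word only.\n\nSentence: {text}"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower()

for text, gold in examples:
    print(gold, chatgpt_label(text), finetuned(text)[0]["label"])
```

Accuracy over a held-out labeled set, computed per bias type, would then be the point of comparison between the zero-shot and fine-tuned approaches.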
Russian media outlets rely heavily on Telegram to spread content, impacting the flow of information in the Russian media ecosystem.
IndiTag is an innovative online media bias analysis system that leverages fine-grained indicators to dissect and annotate bias in digital content, promoting transparency and accountability in the digital media landscape.
The prevalence of machine-generated articles is increasing, with misinformation websites seeing a particularly sharp rise following the release of ChatGPT.
This work automates the generation of out-of-context captions by conditioning on word tokens and visual input.
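As an illustration only, and not the paper's pipeline, conditional captioning from an image plus a text prefix can be sketched with a public vision-language model such as BLIP; the prefix string and image path below are placeholders:

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Public captioning model used purely for illustration; the paper's method may differ.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("example.jpg").convert("RGB")  # placeholder image path

# Conditioning tokens steer the caption toward a chosen (possibly misleading) framing.
prefix = "a protest in"  # hypothetical conditional word tokens
inputs = processor(image, prefix, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```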
The author examines the inconsistent implementation of EU sanctions on Russian media, attributing it to a lack of clear guidance for technical enforcement by national authorities.
The author aims to quantify the impact of media messaging on COVID-19 mask-wearing beliefs by analyzing news stories and applying opinion formation models.
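A toy opinion-dynamics sketch of the general idea, assuming a DeGroot-style peer update with an added media-exposure term; the parameters and update rule are illustrative, not the author's model:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100                         # agents
beliefs = rng.uniform(0, 1, n)  # belief in mask effectiveness, in [0, 1]

# Row-stochastic social influence matrix (who listens to whom, and how much).
W = rng.uniform(0, 1, (n, n))
W /= W.sum(axis=1, keepdims=True)

media_signal = 0.8   # stance of media messaging (pro-mask here)
exposure = 0.1       # weight given to media relative to peers

for _ in range(50):
    beliefs = (1 - exposure) * (W @ beliefs) + exposure * media_signal

print(f"mean belief after repeated exposure: {beliefs.mean():.2f}")
```

Varying `media_signal` and `exposure` across runs shows how shifts in messaging could move aggregate beliefs in such a model.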
NOVA provides a platform for users to assess their personal beliefs on media bias through interactive visualizations, promoting transparency and self-assessment.
The authors propose a framework for evaluating the trustworthiness of online news publishers from their social media interactions, aiming to streamline the assessment process and provide nuanced insights. By leveraging these user interactions, they identify verifiable publishers and estimate trustworthiness automatically.
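One minimal way to sketch interaction-based trust scoring, assuming each publisher's score is the average reliability of the users who share its articles; the data and weighting scheme are hypothetical, not the authors' framework:

```python
from collections import defaultdict

# Hypothetical per-user reliability scores (e.g., derived from past sharing behaviour).
user_reliability = {"u1": 0.9, "u2": 0.2, "u3": 0.7}

# Hypothetical (user, publisher) share interactions.
shares = [("u1", "outletA"), ("u3", "outletA"), ("u2", "outletB"), ("u2", "outletB")]

totals = defaultdict(float)
counts = defaultdict(int)
for user, publisher in shares:
    totals[publisher] += user_reliability[user]
    counts[publisher] += 1

# Publisher trust = mean reliability of the users interacting with it.
trust = {p: totals[p] / counts[p] for p in totals}
print(trust)  # e.g. {'outletA': 0.80..., 'outletB': 0.2}
```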