Bibliographic Information: Sallami, D., & Aïmeur, E. (2024). FNDEX: Fake News and Doxxing Detection with Explainable AI. arXiv preprint arXiv:2410.22390v1.
Research Objective: This paper introduces FNDEX, a novel system designed to address the growing threats of fake news and doxxing in online environments. The research aims to develop effective detection strategies for both threats while incorporating anonymization techniques to safeguard individual privacy and leveraging Explainable AI (XAI) for transparency and accountability.
Methodology: The FNDEX system utilizes three distinct transformer models (BERT, DistilBERT, and RoBERTa) to detect fake news and doxxing. A three-step anonymization process based on pattern recognition and replacement is employed to protect personally identifiable information (PII). The system incorporates LIME (Local Interpretable Model-Agnostic Explanations) to provide insights into the decision-making process of the detection models. The researchers evaluated FNDEX's performance on a Kaggle dataset for fake news detection and a dataset of tweets for doxxing detection.
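The pattern-recognition-and-replacement anonymization step described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the regex patterns, placeholder labels, and `anonymize` function below are all assumptions chosen for the example.

```python
import re

# Illustrative pattern-based PII masking: (1) recognize PII with regular
# expressions, (2) replace each match with a category placeholder,
# (3) return the masked text. Patterns and labels are hypothetical,
# not those used by FNDEX. Emails are masked first so the handle
# pattern cannot match the "@domain" part of an address.
PII_PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[ .-]?\d{3}[ .-]?\d{4}\b"),
    "[HANDLE]": re.compile(r"@\w+"),
}

def anonymize(text: str) -> str:
    """Mask PII matches while leaving the surrounding context intact."""
    for placeholder, pattern in PII_PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

# Example: the sentence keeps its structure, only the PII is replaced.
print(anonymize("Contact john.doe@example.com or @jdoe at 555-123-4567"))
```

Because replacement preserves word order and sentence structure, the masked text remains usable for downstream classification, which is the utility-preservation property the paper emphasizes.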
Key Findings: The study found that transformer-based models, particularly RoBERTa, significantly outperformed baseline models in both fake news and doxxing detection tasks. The anonymization process effectively masked PII while preserving the contextual integrity and utility of the text. The use of LIME provided clear and interpretable explanations for the model's predictions, enhancing transparency and user trust.
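The intuition behind LIME's explanations can be shown with a toy leave-one-out sketch: perturb the input by dropping tokens and treat the resulting score change as each token's contribution. Real LIME samples many random perturbations and fits a local linear surrogate model; the stand-in classifier and its word list here are invented for illustration and have nothing to do with FNDEX's trained transformers.

```python
# Hypothetical stand-in for a fake-news classifier: scores a text by the
# fraction of its tokens that appear on a small "sensational" word list.
SENSATIONAL = {"shocking", "miracle", "exposed", "secret"}

def fake_news_score(text: str) -> float:
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t in SENSATIONAL for t in tokens) / len(tokens)

def explain(text: str) -> dict[str, float]:
    """Per-token importance: score(full text) - score(text minus token).

    A positive value means the token pushed the prediction toward
    'fake news'; LIME derives analogous weights from a local surrogate.
    """
    tokens = text.split()
    base = fake_news_score(text)
    importance = {}
    for i, tok in enumerate(tokens):
        perturbed = " ".join(tokens[:i] + tokens[i + 1:])
        importance[tok] = base - fake_news_score(perturbed)
    return importance
```

For the input `"shocking secret cure"`, the sketch assigns positive importance to "shocking" and "secret" and negative importance to "cure", mirroring the kind of token-level attribution LIME surfaces to users.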
Main Conclusions: The research concludes that FNDEX offers a promising approach to combating fake news and doxxing by combining accurate detection, effective anonymization, and explainable AI. The system's ability to protect privacy while maintaining data utility makes it a valuable tool for fostering a safer and more trustworthy online environment.
Significance: This research makes a significant contribution to the field of online safety and security by addressing the interconnected challenges of fake news and doxxing. The proposed framework, with its focus on explainability and privacy preservation, offers a practical and ethical approach to mitigating these threats.
Limitations and Future Research: The study acknowledges limitations regarding the availability of publicly accessible datasets for doxxing detection. Future research could explore the development of synthetic datasets or collaborate with social media platforms to access anonymized data for training and evaluation. Additionally, exploring the integration of other XAI methods beyond LIME could further enhance the system's transparency and interpretability.