VeritasNet: Hybrid Fake News Detection and Fact Verification System

Authors

  • Mr. S. Kumarakrishnan, Associate Professor, Dept. of CSE, Sri Manakula Vinayagar Engineering College, Puducherry, India
  • M. Fazil Ahamed, UG Student, Dept. of CSE, Sri Manakula Vinayagar Engineering College, Puducherry, India
  • Reeman Infant D., UG Student, Dept. of CSE, Sri Manakula Vinayagar Engineering College, Puducherry, India
  • Karthikeyan C., UG Student, Dept. of CSE, Sri Manakula Vinayagar Engineering College, Puducherry, India
  • Sivanesan R., UG Student, Dept. of CSE, Sri Manakula Vinayagar Engineering College, Puducherry, India

DOI:

https://doi.org/10.47392/IRJAEM.2026.0107

Keywords:

Fake News Detection, VeritasNet, AutoML, Natural Language Processing, BERT, Fact Verification, Explainable AI

Abstract

The rapid growth of digital media and social networking platforms has accelerated the spread of fake news, producing misinformation that affects public opinion, social harmony, and decision-making. Traditional fake news detection approaches rely primarily on binary classification and handcrafted features, which are often insufficient to handle semantic complexity, partial truths, and evolving misinformation patterns. To address these challenges, this project proposes an intelligent fake news detection platform based on VeritasNet, integrated with Automated Machine Learning (AutoML) and fact verification mechanisms. The system employs advanced Natural Language Processing (NLP) techniques for semantic understanding of news content. Transformer-based deep learning models such as BERT extract rich contextual representations from text, which VeritasNet then processes for deep pattern learning and uncertainty-aware classification. AutoML automatically selects optimal model architectures and tunes hyperparameters, ensuring robust performance while minimizing manual intervention. In addition to content-based analysis, the system includes a fact verification module that extracts key claims from news articles and validates them against trusted fact-checking sources and knowledge bases. To enhance transparency and user trust, Explainable Artificial Intelligence (XAI) techniques such as SHAP, LIME, and attention visualization are employed to interpret model predictions. Experimental results demonstrate that the proposed approach achieves improved accuracy, balanced precision–recall performance, and reliable confidence estimation compared with conventional models.
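
To make the pipeline concrete, the sketch below illustrates the content-analysis stage described above. It is a minimal illustration, not the authors' implementation: it assumes the Hugging Face transformers library and the public bert-base-uncased checkpoint, and a plain linear head stands in for VeritasNet's deep classifier. BERT supplies one contextual [CLS] embedding per article, and softmax probabilities serve as a rough confidence estimate of the kind the abstract mentions.

    import torch
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    encoder = AutoModel.from_pretrained("bert-base-uncased")

    def embed(texts):
        # Tokenize a batch and return one contextual [CLS] vector per text.
        batch = tokenizer(texts, padding=True, truncation=True,
                          max_length=256, return_tensors="pt")
        with torch.no_grad():
            out = encoder(**batch)
        return out.last_hidden_state[:, 0, :]

    # Untrained linear head as a stand-in for VeritasNet's classifier;
    # in the proposed system its architecture and hyperparameters would
    # be selected and tuned by AutoML rather than fixed by hand.
    head = torch.nn.Linear(encoder.config.hidden_size, 2)

    def classify(texts):
        # Softmax over the two classes doubles as a confidence score.
        probs = torch.softmax(head(embed(texts)), dim=-1)
        conf, label = probs.max(dim=-1)
        return [("fake" if int(l) == 1 else "real", float(c))
                for l, c in zip(label, conf)]

    print(classify(["Scientists confirm water is wet."]))

In a trained system, XAI tools such as SHAP or LIME would attribute each prediction back to the input tokens that drove it, and the fact verification module would cross-check extracted claims against external sources before a final verdict is issued.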

Published

2026-04-06