A Study on Self-Uncertainty Detection in AI Systems Using Unsupervised Learning Techniques

Authors

  • Alisha Jabeen, Assistant Professor, Department of Computer Science and Applications, Yenepoya Deemed to be University, Bangalore, Karnataka, India
  • Ansif Azeez, BCA (AI, ML and Robotics), Yenepoya Deemed to be University, Bangalore, Karnataka, India
  • Abhinav S, BCA (AI, ML and Robotics), Yenepoya Deemed to be University, Bangalore, Karnataka, India
  • Jenas Renjan Parakkal, BCA (AI, ML and Robotics), Yenepoya Deemed to be University, Bangalore, Karnataka, India
  • Adhwaith K G, BCA (AI, ML and Robotics), Yenepoya Deemed to be University, Bangalore, Karnataka, India
  • Anandha Krishnan, BCA (AI, ML and Robotics), Yenepoya Deemed to be University, Bangalore, Karnataka, India

DOI:

https://doi.org/10.47392/IRJAEM.2026.0176

Keywords:

Artificial Intelligence Reliability, Self-Uncertainty Detection, Unsupervised Learning, Isolation Forest, Anomaly Detection, Credit Card Fraud Detection

Abstract

Artificial intelligence (AI) systems are increasingly deployed in practical applications such as autonomous systems, healthcare diagnostics, and financial fraud detection. Many AI models are highly accurate, yet they cannot identify situations in which their predictions may be unreliable. When the input data deviates from the distribution seen during training, this limitation can lead to overconfident conclusions. This work examines how uncertainty in data can be detected with an unsupervised learning method. The Isolation Forest algorithm is used to identify unusual data points by assigning each one an anomaly score, which indicates how much a transaction differs from normal behavior. For evaluation, the Credit Card Fraud Detection dataset was used, containing 284,807 transactions with 30 features. Before fitting the model, the data was scaled so that all values lie in a similar range. To interpret the output, PCA and a histogram plot were used; these show how the anomaly scores are distributed and where the unusual points are located. The results indicate that such methods can identify data points that do not follow the usual pattern. These points may represent cases where the model's predictions are not very reliable, so this approach can help improve how much we can trust AI systems, especially when dealing with real-world data.
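The pipeline the abstract describes (scale the features, fit an Isolation Forest, read off anomaly scores, project to 2D with PCA) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it substitutes a small synthetic matrix for the 284,807 x 30 credit-card dataset, and the hyperparameters (`n_estimators=100`, `contamination=0.01`) are assumed defaults, not values reported in the paper.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
# Synthetic stand-in for the credit-card data: 1,000 "normal"
# transactions plus 10 injected outliers far from the bulk.
normal = rng.normal(0.0, 1.0, size=(1000, 30))
outliers = rng.normal(6.0, 1.0, size=(10, 30))
X = np.vstack([normal, outliers])

# Scale features so all values lie in a similar range.
X_scaled = StandardScaler().fit_transform(X)

# Fit the Isolation Forest; decision_function gives an anomaly
# score per point, where lower values mean "more anomalous".
iso = IsolationForest(n_estimators=100, contamination=0.01,
                      random_state=0)
iso.fit(X_scaled)
scores = iso.decision_function(X_scaled)

# Project to 2D with PCA to visualize where the flagged
# points sit relative to normal behavior (e.g. in a scatter
# plot colored by score, or a histogram of the scores).
X_2d = PCA(n_components=2).fit_transform(X_scaled)

# The lowest-scoring points are the inputs on which a
# downstream model's predictions are least trustworthy.
flagged = np.argsort(scores)[:10]
print(len(flagged), X_2d.shape)
```

In practice the histogram of `scores` separates into a large mode for ordinary transactions and a low-score tail for the unusual ones, which is the distribution the paper inspects.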


Published

2026-05-03