From Assistants to Adversaries: The Security Risk of Advanced AI

Authors

  • Anakha P P, UG, Little Flower College (Autonomous), Guruvayoor, Kerala, India
  • Dilsha M S, UG, Little Flower College (Autonomous), Guruvayoor, Kerala, India
  • Gopika K M, UG, Little Flower College (Autonomous), Guruvayoor, Kerala, India
  • Kavya M M, UG, Little Flower College (Autonomous), Guruvayoor, Kerala, India
  • Srithisha R S, UG, Little Flower College (Autonomous), Guruvayoor, Kerala, India
  • Vaishnavy K U, UG, Little Flower College (Autonomous), Guruvayoor, Kerala, India

DOI:

https://doi.org/10.47392/IRJAEM.2025.0490

Keywords:

Artificial Intelligence, Security Risks, AI Agents, Ethical Issues, Adversarial Attacks

Abstract

Artificial Intelligence (AI) has rapidly evolved from a helpful tool into a highly autonomous decision-making system capable of impacting critical infrastructures, social systems, and economic processes. While advanced AI agents offer efficiency and innovation, they also introduce significant security risks. This research examines the dual nature of AI as both an assistant and a potential adversary. We explore key vulnerabilities, such as data poisoning, adversarial attacks, and system manipulation, that can turn AI systems into security threats, and we examine ethical issues related to autonomy, accountability, and transparency. Through real-world case studies and theoretical models, this work stresses the urgent need for strong safeguards, responsible governance, and security-aware AI design. The findings highlight that without proactive measures, AI could shift from a trusted assistant to a powerful adversary, threatening the integrity of digital ecosystems.

Published

2025-10-24