Strengthening Malware Classifiers: A Robustness Analysis Against Evasion Attacks Using Adversarial Training

Authors

  • Santosh KC, MCA Student, Department of Master of Computer Application, Dayananda Sagar College Of Engineering, Bengaluru – 560054, Karnataka, India.
  • Noor Fathima, MCA Student, Department of Master of Computer Application, Dayananda Sagar College Of Engineering, Bengaluru – 560054, Karnataka, India.
  • Saurav Gupta, MCA Student, Department of Master of Computer Application, Dayananda Sagar College Of Engineering, Bengaluru – 560054, Karnataka, India.
  • Dr. Geetha Lakshmi N, Assistant Professor, Department of Master of Computer Application, Dayananda Sagar College Of Engineering, Bengaluru – 560054, Karnataka, India.

DOI:

https://doi.org/10.47392/IRJAEM.2026.0282

Keywords:

Adversarial training, Cybersecurity, Evasion attacks, Malware classification, Robustness analysis.

Abstract

Antivirus and endpoint security tools now lean heavily on machine learning to detect malware that traditional signature systems cannot catch. Yet this shift introduces a fresh class of vulnerability: adversarial evasion attacks, which manipulate a malware sample just enough to slip past a trained classifier while keeping its malicious payload intact. This paper investigates how robust ML-based malware classifiers are against five representative evasion attacks and measures how much adversarial training (AT) can harden them. We study the gradient-based white-box attacks FGSM and PGD alongside three black-box approaches, GAMMA, the Kreuk byte-injection method, and the sigma-binary technique, all applied over Android permission-vector and Windows PE raw-binary feature spaces. Four classifier families are compared: Linear SVM, Gradient Boosted Decision Trees, a Deep Neural Network, and MalConv. Findings show that adversarially trained models cut average evasion rates by 54–63 percentage points while losing fewer than two percentage points of natural accuracy. Deeper architectures benefit substantially more from AT than linear models. Robustness also carries over partially across attack types, suggesting that AT hardens the decision boundary in a general rather than attack-specific way.
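
To make the two core techniques concrete, the sketch below pairs the FGSM perturbation x_adv = x + ε · sign(∇x L(θ, x, y)) with a mixed clean/adversarial training loop in PyTorch. The model, epsilon value, data loader, and 50/50 loss mixing are illustrative assumptions, not the paper's reported configuration; in particular, real malware evasion must also preserve executability, a constraint this toy gradient step does not enforce.

    import torch
    import torch.nn as nn

    def fgsm_perturb(model, loss_fn, x, y, epsilon):
        # One signed-gradient ascent step on the input: the FGSM attack.
        x_adv = x.clone().detach().requires_grad_(True)
        loss_fn(model(x_adv), y).backward()
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        # Clamp to a [0, 1] feature range (an assumption for this sketch).
        return x_adv.clamp(0.0, 1.0).detach()

    def adversarial_training_epoch(model, loader, optimizer, epsilon=0.1):
        # Hypothetical AT loop: each batch is re-attacked with the current
        # model, then the loss mixes clean and adversarial examples so that
        # natural accuracy is retained while the decision boundary hardens.
        loss_fn = nn.CrossEntropyLoss()
        model.train()
        for x, y in loader:
            x_adv = fgsm_perturb(model, loss_fn, x, y, epsilon)
            optimizer.zero_grad()  # discard gradients accumulated while attacking
            loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
            loss.backward()
            optimizer.step()

A PGD variant would iterate the same signed-gradient step several times with a smaller step size, projecting back into the epsilon-ball after each step.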

Published

2026-05-10