International Journal of Computing and Artificial Intelligence

Impact Factor (RJIF): 5.57, P-ISSN: 2707-6571, E-ISSN: 2707-658X
Printed Journal   |   Refereed Journal   |   Peer Reviewed Journal

2024, Vol. 5, Issue 2, Part A

Advancing cybersecurity: Robust defenses and transparent decision-making through adversarial training and interpretability techniques in artificial intelligence


Author(s): Mansoor Farooq, Mubashir Hassan Khan and Dr. Rafi A Khan

Abstract:
The escalating sophistication of cyber threats necessitates the integration of advanced technologies to fortify cybersecurity measures. This research paper explores the transformative impact of artificial intelligence (AI) and machine learning (ML) in cybersecurity, with particular emphasis on adversarial training and interpretability techniques. The primary objective of this study is to investigate the efficacy of the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and Stochastic Gradient Descent (SGD) in enhancing the robustness of machine learning models against adversarial attacks. Additionally, the research examines interpretability through Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), assessing their roles in providing transparent insights into model decisions.
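As a concrete illustration of the attack formulations named above, the sketch below shows how FGSM and PGD perturbations are typically generated. This is a minimal PyTorch example written for this summary, not code from the paper; the model, inputs, and epsilon/step values are placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.1):
        # FGSM: x_adv = x + epsilon * sign(grad_x loss)
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

    def pgd_perturb(model, x, y, epsilon=0.1, alpha=0.02, steps=10):
        # PGD: repeat small signed-gradient steps, projecting back into the epsilon-ball around x
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            loss.backward()
            with torch.no_grad():
                x_adv = x_adv + alpha * x_adv.grad.sign()
                x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
            x_adv = x_adv.detach()
        return x_adv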
The ML models are built around adversarial training, integrating FGSM, PGD, and SGD into the training pipeline. LIME and SHAP are then applied to improve model interpretability and provide a deeper understanding of model predictions. Results indicate significant improvements in model resilience against adversarial attacks alongside enhanced interpretability, contributing to the ongoing discourse on strengthening cybersecurity defenses.
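To give a sense of how such a pipeline fits together, here is a minimal sketch of one adversarial training epoch followed by a model-agnostic SHAP explanation. It reuses the fgsm_perturb helper from the previous sketch and assumes hypothetical objects (model, optimizer, train_loader, predict_fn, background_X, test_X); it is illustrative only, not the authors' implementation.

    import shap
    import torch.nn.functional as F

    def adversarial_training_epoch(model, optimizer, train_loader, epsilon=0.1):
        # One epoch of adversarial training: the loss combines clean and FGSM-perturbed batches.
        model.train()
        for x, y in train_loader:
            x_adv = fgsm_perturb(model, x, y, epsilon)   # helper from the previous sketch
            optimizer.zero_grad()                        # clear gradients left over from crafting x_adv
            loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
            loss.backward()
            optimizer.step()

    def explain_predictions(predict_fn, background_X, test_X):
        # Model-agnostic SHAP explanation: attribute each prediction to input features.
        explainer = shap.KernelExplainer(predict_fn, background_X)
        return explainer.shap_values(test_X)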
This study's findings hold implications for the development of robust AI-driven cybersecurity systems, where adversarial training and interpretability techniques play pivotal roles in ensuring the reliability and transparency of machine learning models in the face of evolving cyber threats. The research lays a foundation for future investigations into innovative strategies for securing digital landscapes against adversarial exploits.


DOI: 10.33545/27076571.2024.v5.i2a.92

Pages: 17-27


How to cite this article:
Mansoor Farooq, Mubashir Hassan Khan, Dr. Rafi A Khan. Advancing cybersecurity: Robust defenses and transparent decision-making through adversarial training and interpretability techniques in artificial intelligence. Int J Comput Artif Intell 2024;5(2):17-27. DOI: 10.33545/27076571.2024.v5.i2a.92