2025, Vol. 6, Issue 2, Part B
Comprehensive, context-aware, multi-layered security framework for mitigating prompt injection attacks in large language models
Author(s): Hasan Jameel Azooz, Ahmed Hameed shakir and Barakat Saad Ibrahim
Abstract: Large language models (LLMs) like GPT-3.5 and LLaMA have transformed natural language processing; but, their sensitivity to quick injection attacks where adversarially created inputs overcome limitations or extract sensitive data remains a serious danger. This study proposes a complete, multi-layered security system with real-time output monitoring, strong data protection, dynamic input validation, safe prompt design, and an adaptive feedback loop. The foundation of the approach is the new Context-Aware Prompt Security Scoring System (CA-PSSS), which uses context-specific characteristics to estimate prompt risk. Using GPT-3.5 and LLaMA-7B, evaluated on a varied dataset of 300 prompts (150 benign, 150 adversarial) our framework obtained a detection rate of 98.3%, a false positive rate of 2.7%, and an AUC-ROC of 0.92, with an average latency of 0.10 seconds per.
DOI: 10.33545/27076571.2025.v6.i2b.192Pages: 148-154 | Views: 44 | Downloads: 24Download Full Article: Click Here
How to cite this article:
Hasan Jameel Azooz, Ahmed Hameed shakir, Barakat Saad Ibrahim.
Comprehensive, context-aware, multi-layered security framework for mitigating prompt injection attacks in large language models. Int J Comput Artif Intell 2025;6(2):148-154. DOI:
10.33545/27076571.2025.v6.i2b.192