2026, Vol. 8, Issue 2, Part A
Explainable AI (XAI): Bridging the gap between black-box models and human trust
Author(s): Punit Kumar Chaubey, Sunita, Rupendra Kumar, Gurpreet Kaur and Vibhanshu Tripathi
Abstract: Recent years have seen unprecedented growth in Artificial Intelligence (AI), driven by the popularization of large machine learning and deep learning models. Despite their excellent predictive power, these models are opaque, which has raised concerns about transparency, accountability, and trust. This has led to the emergence of Explainable Artificial Intelligence (XAI), a paradigm that seeks to make AI systems' decisions meaningful and comprehensible to humans. This article reviews the theoretical foundations of XAI, its techniques and applications, and its significance in bridging the gap between black-box models and human trust. The paper presents an in-depth taxonomy of XAI tools, covering both intrinsic interpretability and post-hoc explanations, and surveys prominent methods such as LIME, SHAP, Grad-CAM, and counterfactual explanations. It further discusses issues of model fidelity, robustness, and ethics, and examines the contribution of XAI to critical domains such as healthcare, finance, and security. Finally, the paper outlines future research directions, including hybrid models, regulatory frameworks, and human-centered AI design. The findings underscore the importance of XAI in ensuring the transparency, fairness, and reliability of modern AI systems.
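Among the post-hoc methods the abstract names, LIME rests on a simple idea: approximate the black-box model near a single prediction with an interpretable local surrogate. The following is a minimal illustrative sketch of that idea using only scikit-learn and NumPy, not the official LIME library; the dataset, kernel width, and variable names are assumptions chosen for the example:

```python
# LIME-style local surrogate: fit a weighted linear model around one instance
# of an opaque classifier, so its coefficients approximate local feature influence.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# A black-box model trained on synthetic data (stand-in for any opaque model).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                                          # the instance to explain
rng = np.random.default_rng(0)
Z = x0 + rng.normal(scale=0.5, size=(1000, 5))     # perturbations around x0
p = black_box.predict_proba(Z)[:, 1]               # black-box outputs on them
w = np.exp(-np.linalg.norm(Z - x0, axis=1) ** 2)   # proximity kernel weights

# Interpretable surrogate: a ridge regression fit to mimic the model locally.
surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
print(surrogate.coef_)  # per-feature local influence on the prediction
```

The surrogate's coefficients are only faithful near `x0`; this locality trade-off is exactly the model-fidelity issue the abstract raises.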
DOI: https://www.doi.org/10.33545/26633582.2026.v8.i2a.270 | Pages: 01-09
How to cite this article:
Punit Kumar Chaubey, Sunita, Rupendra Kumar, Gurpreet Kaur, Vibhanshu Tripathi.
Explainable AI (XAI): Bridging the gap between black-box models and human trust. Int J Eng Comput Sci 2026;8(2):01-09. DOI: 10.33545/26633582.2026.v8.i2a.270