Exploring Neural Network Decision Making: Extended Relevance Propagation and Beyond

Rabinder Kr. Prasad, Moirangthem Tiken Singh, Chandan Kalita, Sikdar Md S. Askari, Bikramjit Choudhury

Abstract

This paper introduces Extended Relevance Propagation (ERP), a novel approach designed to enhance the explainability of neural networks. The effectiveness of ERP is evaluated using fidelity-based metrics, benchmarked against established interpretability methods such as Layer-wise Relevance Propagation (LRP) and SHapley Additive exPlanations (SHAP). The ERP model demonstrates superior stability and robustness, producing consistent and trustworthy explanations even under input perturbations. While SHAP delivers highly detailed explanations, it is more sensitive to input changes, whereas LRP offers a balance between interpretative depth and stability. All three methods generate heatmaps that visually emphasize key features influencing model decisions, thereby enhancing transparency and fostering trust. Robustness analyses further validate ERP’s high fidelity, underscoring its suitability for applications that demand reliable and interpretable models.
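The fidelity-based evaluation mentioned above is commonly operationalized as a deletion test: features are removed in order of their attributed relevance, and a faithful explanation should make the model output change quickly as the top-ranked features are deleted. The following is a minimal, hypothetical sketch of that idea; the toy linear model, inputs, and relevance scores are illustrative assumptions, not the paper's actual ERP method or metrics.

```python
# Hedged sketch of a deletion-based fidelity check for feature-relevance
# explanations. The model, inputs, and relevance scores are illustrative
# only; they do not reproduce ERP, LRP, or SHAP.

def model(x):
    # Toy linear "network": a weighted sum of four input features.
    weights = [0.5, -0.2, 0.9, 0.1]
    return sum(w * xi for w, xi in zip(weights, x))

def deletion_curve(x, relevance):
    """Zero out features from most to least relevant, recording outputs.

    A faithful attribution produces the steepest output change when the
    highest-relevance features are deleted first.
    """
    order = sorted(range(len(x)), key=lambda i: abs(relevance[i]), reverse=True)
    masked = list(x)
    outputs = [model(masked)]
    for i in order:
        masked[i] = 0.0  # "delete" the feature by masking it to zero
        outputs.append(model(masked))
    return outputs

x = [1.0, 1.0, 1.0, 1.0]
# For a linear model, weight * input is an exact relevance attribution.
relevance = [0.5, -0.2, 0.9, 0.1]
print(deletion_curve(x, relevance))
```

The same curve computed for explanations under perturbed inputs gives a simple robustness comparison: the more stable the attribution ranking, the less the deletion curve changes across perturbations.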
