Exploring Neural Network Decision Making: Extended Relevance Propagation and Beyond
Abstract
This paper introduces Extended Relevance Propagation (ERP), a novel approach designed to enhance the explainability of neural networks. The effectiveness of ERP is evaluated using fidelity-based metrics and benchmarked against established interpretability methods, namely Layer-wise Relevance Propagation (LRP) and SHapley Additive exPlanations (SHAP). ERP demonstrates greater stability and robustness than the baseline methods, producing consistent, trustworthy explanations even under input perturbations. SHAP delivers highly detailed explanations but is more sensitive to input changes, whereas LRP offers a balance between interpretive depth and stability. All three methods generate heatmaps that visually emphasize the features driving model decisions, enhancing transparency and fostering trust. Robustness analyses further confirm ERP's high fidelity, underscoring its suitability for applications that demand reliable and interpretable models.
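The abstract does not specify how explanation stability under perturbation is measured. A common formulation, sketched below under assumptions not drawn from the paper, scores stability as the mean cosine similarity between an input's attribution map and the attributions of noise-perturbed copies; the toy gradient-times-input attribution on a linear model is a stand-in for ERP/LRP/SHAP, whose details the abstract does not give.

```python
import numpy as np

# Toy linear "model": score = w . x, so gradient-times-input attribution is w * x.
# This attribution method is illustrative only; it is not the ERP method itself.
rng = np.random.default_rng(0)
w = rng.normal(size=16)

def attribute(x):
    # Gradient-times-input attribution for the linear model above.
    return w * x

def stability(x, sigma=0.05, trials=100):
    """Mean cosine similarity between attributions of x and of noisy copies.

    Values near 1.0 indicate explanations that are robust to small
    input perturbations, in the sense the abstract describes.
    """
    a = attribute(x)
    sims = []
    for _ in range(trials):
        a_p = attribute(x + rng.normal(scale=sigma, size=x.shape))
        sims.append(a @ a_p / (np.linalg.norm(a) * np.linalg.norm(a_p)))
    return float(np.mean(sims))

x = rng.normal(size=16)
print(round(stability(x), 3))
```

Comparing this score across attribution methods on the same inputs gives one concrete way to rank them by robustness, as the abstract does qualitatively.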
Article Details

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.