Hybrid Deep Learning Framework for Interpretable Healthcare Diagnostics Integrating Multi-Modal Data for Enhanced Trust and Accuracy
DOI:
https://doi.org/10.5281/zenodo.15069851

Keywords:
Interpretable AI, Hybrid Deep Learning, Healthcare Diagnostics, Explainability in AI, Grad-CAM Heatmaps, SHAP Feature Importance, Multi-Modal Data Integration, Disease Prediction, Ethical AI, Trustworthy Machine Learning

Abstract
The growing use of artificial intelligence (AI) in healthcare demands models that are both high-performing and interpretable. This study presents a hybrid deep learning framework that combines multi-modal data to deliver precise disease predictions alongside actionable, interpretable insights, substantially improving diagnostic quality. By integrating CNN- and Transformer-based models with advanced feature fusion techniques, the framework achieves strong predictive performance across a wide range of datasets. In addition, explainability modules such as Grad-CAM and SHAP let users see why the model made a given prediction through interpretable visualizations such as heatmaps and feature importance scores, increasing trust in the model. Experiments on public datasets (e.g., MIMIC-IV, ChestX-ray8, and COVID-19 CT) show higher accuracy and better explainability than traditional black-box models. This study bridges a vital gap in healthcare AI, stressing that performance and transparency are both needed for the ethical and effective deployment of AI systems in clinical environments.
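The Grad-CAM heatmaps mentioned in the abstract weight each convolutional feature map by the spatially averaged gradient of the target class score, sum the weighted maps, and clip negatives. A minimal NumPy sketch of that computation is below; the function name and toy data are illustrative and not taken from the paper:

```python
import numpy as np

def grad_cam_heatmap(activations, gradients):
    """Grad-CAM: weight each feature map by the spatial mean of its
    class-score gradient, sum the weighted maps, and apply ReLU."""
    # activations, gradients: arrays of shape (channels, H, W)
    # taken from the target convolutional layer of a trained CNN.
    weights = gradients.mean(axis=(1, 2))             # alpha_k: GAP of gradients
    cam = np.tensordot(weights, activations, axes=1)  # sum_k alpha_k * A_k
    cam = np.maximum(cam, 0)                          # ReLU: keep positive evidence
    if cam.max() > 0:
        cam /= cam.max()                              # normalize to [0, 1] for display
    return cam

# Toy example: 4 feature maps of size 8x8 standing in for real CNN outputs.
rng = np.random.default_rng(0)
acts = rng.random((4, 8, 8))
grads = rng.standard_normal((4, 8, 8))
heatmap = grad_cam_heatmap(acts, grads)
print(heatmap.shape)  # (8, 8)
```

In practice the resulting low-resolution map is upsampled to the input image size and overlaid on the chest X-ray or CT slice, so clinicians can see which regions drove the prediction.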
License
Copyright (c) 2025 International Journal of Technology

This article is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International license.
All articles published in International Journal of Technology are licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). This license allows others to share, copy, distribute, and adapt the work for any purpose, even commercially, as long as appropriate credit is given to the original authors. Authors retain the copyright and agree to have their work published under this license, ensuring the broadest possible dissemination and reuse of their research.
For more information or licensing inquiries, contact mossdigital77@gmail.com.