Huang, Guangming and Li, Yingya and Jameel, Shoaib and Long, Yunfei and Papanastasiou, Giorgos (2024) From Explainable to Interpretable Deep Learning for Natural Language Processing in Healthcare: How Far from Reality? Computational and Structural Biotechnology Journal, 24. pp. 362-373. DOI https://doi.org/10.1016/j.csbj.2024.05.004
Abstract
Deep learning (DL) has substantially enhanced natural language processing (NLP) in healthcare research. However, the increasing complexity of DL-based NLP necessitates transparent model interpretability, or at least explainability, for reliable decision-making. This work presents a thorough scoping review of explainable and interpretable DL in healthcare NLP. The term “eXplainable and Interpretable Artificial Intelligence” (XIAI) is introduced to distinguish XAI from IAI. Different models are further categorized based on their functionality (model-, input-, output-based) and scope (local, global). Our analysis shows that attention mechanisms are the most prevalent emerging IAI technique. The use of IAI is growing, distinguishing it from XAI. The major challenges identified are that most XIAI methods do not explore “global” modelling processes, and that best practices, systematic evaluation, and benchmarks are lacking. One important opportunity is to use attention mechanisms to enhance multi-modal XIAI for personalized medicine. Additionally, combining DL with causal logic holds promise. Our discussion encourages the integration of XIAI in Large Language Models (LLMs) and domain-specific smaller models. In conclusion, XIAI adoption in healthcare requires dedicated in-house expertise. Collaboration with domain experts, end-users, and policymakers can lead to ready-to-use XIAI methods across NLP and medical tasks. While challenges exist, XIAI techniques offer a valuable foundation for interpretable NLP algorithms in healthcare.
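The abstract identifies attention mechanisms as the most prevalent IAI technique: attention weights form a probability distribution over input tokens, which can be read as token-importance scores. The sketch below illustrates this idea with a minimal scaled dot-product attention in pure Python; the clinical tokens and vectors are hypothetical toy values, not taken from the reviewed work.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_weights(query, keys):
    """Scaled dot-product attention of one query vector over a list of keys.

    Returns a distribution over the keys (weights sum to 1), which is
    what attention-based interpretability methods inspect.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    return softmax(scores)

# Hypothetical toy example: a query attending over three clinical tokens.
tokens = ["patient", "denies", "fever"]
query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.2, 0.9], [0.9, 0.1]]
weights = attention_weights(query, keys)
for tok, w in zip(tokens, weights):
    print(f"{tok}: {w:.3f}")
```

Because the weights sum to one, they admit a direct "where did the model look" reading for a local explanation, though the review's caution applies: such local scores do not by themselves expose the global modelling process.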
Item Type: | Article
---|---
Uncontrolled Keywords: | deep learning; explainable; healthcare; interpretable; NLP
Subjects: | Z Bibliography. Library Science. Information Resources > ZZ OA Fund (articles)
Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of
SWORD Depositor: | Unnamed user with email elements@essex.ac.uk
Depositing User: | Unnamed user with email elements@essex.ac.uk
Date Deposited: | 16 May 2024 10:05
Last Modified: | 30 Oct 2024 16:42
URI: | http://repository.essex.ac.uk/id/eprint/38330
Available files
Filename: 1-s2.0-S2001037024001508-main.pdf
Licence: Creative Commons: Attribution 4.0