Huang, Guangming (2025) A study on deep learning and explainable & interpretable AI: From general domain to healthcare. Doctoral thesis, University of Essex. DOI https://doi.org/10.5526/ERR-00041534
Abstract
Deep learning has witnessed an unprecedented evolution over the past decade, transforming from theoretical concepts into practical applications that permeate numerous domains of human activity. The exponential growth in computational power, the availability of large-scale data, and advancements in neural network architectures have collectively facilitated the development of increasingly sophisticated deep learning models (e.g., large language models, LLMs) whose performance often surpasses human capabilities on specific tasks. Despite these advancements, the deployment of deep learning models in healthcare presents substantial methodological and paradigmatic challenges. The transition from general to healthcare-specific contexts requires addressing fundamental differences in (i) representation learning, (ii) domain knowledge, (iii) data characteristics, and (iv) explainability and interpretability. To address these issues, this study aims to systematically investigate the methodological and paradigmatic transition of deep learning and explainable & interpretable AI from the general domain to healthcare. Our contributions are in the following key areas: (i) we introduce multi-label relations into multi-label supervised contrastive learning (MSCL) and propose a novel contrastive loss function, termed Similarity-Dissimilarity Loss, which dynamically re-weights positive pairs based on computed similarity and dissimilarity factors between positive samples and anchors, with applications ranging from multi-label classification to automated medical coding; (ii) we propose a Prompting Explicit and Implicit knowledge (PEI) framework for multi-hop question answering (QA) in biomedical domains, which employs chain-of-thought (CoT) prompt-based learning to bridge explicit and implicit knowledge, aligning with human reading comprehension; and (iii) we introduce lexical-based imbalanced data augmentation (LIDA) for mental health moderation, an easy-to-implement and interpretable augmentation method that strategically leverages sensitive lexicons by incorporating them into negative samples to transform these instances into positive examples. Through rigorous theoretical analyses and extensive experimental validation across multiple domains, this thesis contributes novel methodologies that enhance the performance, interpretability, and clinical applicability of deep learning methods from the general domain to healthcare.
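As a rough illustration of the re-weighting idea in contribution (i), the sketch below implements a multi-label supervised contrastive loss whose positive-pair terms are weighted by label-set overlap. The Jaccard-style weighting, function name, and tensor shapes are assumptions made for illustration only; they are not the exact Similarity-Dissimilarity Loss derived in the thesis.

```python
# Minimal sketch: multi-label supervised contrastive loss with label-overlap
# re-weighting of positive pairs. Weighting scheme and names are illustrative.
import torch
import torch.nn.functional as F


def multilabel_weighted_contrastive_loss(embeddings, labels, temperature=0.07):
    """embeddings: (N, D) projected features; labels: (N, C) multi-hot matrix."""
    z = F.normalize(embeddings, dim=1)                 # unit-norm features
    logits = z @ z.T / temperature                     # pairwise similarity logits
    labels = labels.float()

    # Label intersection acts as the "similarity" factor; dividing by the union
    # also penalises label mismatch (the "dissimilarity" side) in one ratio.
    inter = labels @ labels.T                          # |y_i ∩ y_j|
    union = labels.sum(1, keepdim=True) + labels.sum(1) - inter
    weight = inter / union.clamp(min=1)                # re-weighting factor in [0, 1]

    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_mask = (inter > 0) & ~eye                      # positives share >= 1 label

    logits = logits.masked_fill(eye, -1e9)             # drop self-pairs from softmax
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)

    weighted = weight * pos_mask * log_prob            # re-weighted positive terms
    denom = (weight * pos_mask).sum(1).clamp(min=1e-12)
    per_anchor = -(weighted.sum(1) / denom)
    return per_anchor[pos_mask.any(1)].mean()          # anchors with >= 1 positive
```

For example, a batch of embeddings of shape (8, 128) together with an (8, 20) multi-hot label matrix yields a scalar loss; the overlap ratio simply serves here as a stand-in for the similarity and dissimilarity factors described in the abstract.

For contribution (ii), the sketch below composes a chain-of-thought prompt that couples retrieved explicit evidence with an elicited implicit-knowledge step for multi-hop biomedical QA. The template wording and field names are assumptions for illustration, not the actual PEI prompts.

```python
# Minimal sketch: CoT prompt bridging explicit (retrieved) and implicit
# (elicited) knowledge for multi-hop biomedical QA. Template is illustrative.
def build_pei_prompt(question: str, passages: list) -> str:
    """Compose a prompt: cite explicit evidence, then reason step by step."""
    explicit = "\n".join(f"[Passage {i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "You are answering a multi-hop biomedical question.\n"
        f"Explicit knowledge (retrieved evidence):\n{explicit}\n\n"
        "First, state any implicit background knowledge needed to connect the "
        "passages. Then reason step by step and give the final answer.\n\n"
        f"Question: {question}\n"
        "Implicit knowledge:"
    )


if __name__ == "__main__":
    print(build_pei_prompt(
        "Which receptor family does the drug in Passage 1 act on?",
        ["Drug X is a beta-blocker.",
         "Beta-blockers act on beta-adrenergic receptors."],
    ))
```

For contribution (iii), the sketch below shows one way lexicon-driven augmentation of the kind described in the abstract might look: sensitive lexicon terms are inserted into negative (majority-class) texts, and the augmented copies are relabelled as positive examples. The lexicon, insertion rule, and function names are hypothetical, not the exact LIDA procedure.

```python
# Minimal sketch: insert sensitive lexicon terms into negative samples and
# relabel the copies as positives to rebalance the data. Illustrative only.
import random

SENSITIVE_LEXICON = ["hopeless", "self-harm", "worthless"]  # hypothetical terms


def augment_negatives(samples, lexicon=SENSITIVE_LEXICON, per_text=1, seed=0):
    """samples: list of (text, label) pairs with label 0 = negative, 1 = positive."""
    rng = random.Random(seed)
    augmented = []
    for text, label in samples:
        if label != 0:
            continue                                   # only transform negatives
        tokens = text.split()
        for _ in range(per_text):
            term = rng.choice(lexicon)                 # pick a sensitive term
            pos = rng.randint(0, len(tokens))          # random insertion point
            new_tokens = tokens[:pos] + [term] + tokens[pos:]
            augmented.append((" ".join(new_tokens), 1))  # relabel as positive
    return augmented


if __name__ == "__main__":
    data = [("I had a quiet day at work", 0), ("I feel worthless lately", 1)]
    print(augment_negatives(data))
```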
| Item Type: | Thesis (Doctoral) |
| --- | --- |
| Subjects: | Q Science > QA Mathematics > QA75 Electronic computers. Computer science |
| Divisions: | Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| Depositing User: | Guangming Huang |
| Date Deposited: | 03 Sep 2025 10:57 |
| Last Modified: | 03 Sep 2025 10:57 |
| URI: | http://repository.essex.ac.uk/id/eprint/41534 |
Available files
Filename: PhD Thesis Final.pdf