Singh, Sumit and Anisi, Mohammad Hossein and Jindal, Anish and Jarchi, Delaram (2024) Smart Multimodal In-Bed Pose Estimation Framework Incorporating Generative Adversarial Neural Network. IEEE Journal of Biomedical and Health Informatics, 28 (6). pp. 3379-3388. DOI https://doi.org/10.1109/jbhi.2024.3384453
Abstract
Monitoring in-bed pose estimation based on the Internet of Medical Things (IoMT) and ambient technology has a significant impact on many applications, such as sleep-related disorders including obstructive sleep apnea syndrome, assessment of sleep quality, and the health risk of pressure ulcers. In this research, a new multimodal in-bed pose estimation framework has been proposed using deep learning. The Simultaneously-collected multimodal Lying Pose (SLP) dataset has been used for performance evaluation of the proposed framework, where two modalities, long-wave infrared (LWIR) and depth images, are used to train the proposed model. The main contributions of this research are the feature fusion network and the use of a generative model to generate RGB images with poses similar to those in the other modalities (LWIR/depth). The inclusion of a generative model helps to improve the overall accuracy of the pose estimation algorithm. Moreover, the method generalizes to recovering human pose in both home and hospital settings under various cover thickness levels. The proposed model is compared with other fusion-based models and shows an improved performance of 97.8% at PCKh@0.5. In addition, performance has been evaluated under different cover conditions and in home and hospital environments, which shows improvements using the proposed model.
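The reported metric, PCKh@0.5, counts a predicted joint as correct when it lies within half the head-segment length of the ground-truth joint, averaged over all joints and frames. A minimal sketch of that computation (the function name, array shapes, and head-length convention here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def pckh(pred, gt, head_len, alpha=0.5):
    """Fraction of predicted joints within alpha * head length of ground truth.

    pred, gt : (N, J, 2) arrays of (x, y) joint coordinates
    head_len : (N,) array of per-sample head-segment lengths in pixels
    """
    # Euclidean distance per joint -> shape (N, J)
    dist = np.linalg.norm(pred - gt, axis=-1)
    # Per-sample threshold broadcast across joints
    thresh = alpha * head_len[:, None]
    return float((dist <= thresh).mean())

# Toy example: one sample, two joints, head length 10 px (threshold = 5 px)
gt = np.array([[[0.0, 0.0], [10.0, 10.0]]])
pred = np.array([[[3.0, 4.0], [10.0, 30.0]]])  # distances 5 and 20
score = pckh(pred, gt, np.array([10.0]))  # -> 0.5 (one joint within 5 px)
```

A score of 0.978 at PCKh@0.5, as reported in the abstract, would mean 97.8% of joints fall within that per-sample threshold.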
Item Type: | Article |
---|---|
Uncontrolled Keywords: | AI; depth; Generative adversarial neural network; Internet of Medical Things; LWIR; SLP |
Divisions: | Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
Date Deposited: | 16 Apr 2024 16:01 |
Last Modified: | 19 Jun 2024 09:03 |
URI: | http://repository.essex.ac.uk/id/eprint/38174 |
Available files
Filename: JBHI_Main.pdf