Mohamed, Elhassan and Sirlantzis, Konstantinos and Howells, Gareth (2021) Indoor/Outdoor Semantic Segmentation Using Deep Learning for Visually Impaired Wheelchair Users. IEEE Access, 9. pp. 147914-147932. DOI https://doi.org/10.1109/access.2021.3123952
Abstract
Electrical Powered Wheelchair (EPW) users may find navigation through indoor and outdoor environments a significant challenge due to their disabilities. Moreover, they may suffer from near-sightedness or cognitive problems that limit their driving ability. Developing a system that helps EPW users navigate safely, by providing visual feedback and further assistance when needed, can have a significant impact on the user’s wellbeing. This paper presents computer vision systems based on deep learning, with an architecture built on residual blocks, that semantically segment high-resolution images. The systems are modified versions of DeepLab version 3 plus that can process high-resolution input images. In addition, they can simultaneously process images from indoor and outdoor environments, which is challenging due to the difference in data distribution and context. The proposed systems replace the base network with a smaller one and modify the encoder-decoder architecture. Nevertheless, they produce high-quality outputs with fast inference speed compared to systems with deeper base networks. Two datasets are used to train the semantic segmentation systems: an indoor application-based dataset that has been collected and annotated manually, and an outdoor dataset to cover both environments. The user can toggle between the two individual systems depending on the situation. Moreover, we propose shared systems that automatically select a specific semantic segmentation system depending on the pixels’ confidence scores. The annotated output scene is presented to the EPW user, which can aid the user’s independent navigation. State-of-the-art semantic segmentation techniques are discussed and compared. Results show the ability of the proposed systems to detect objects with sharp edges and high accuracy in both indoor and outdoor environments. The developed systems are deployed on a GPU-based board and then integrated into an EPW for practical usage and evaluation.
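The abstract mentions a shared system that automatically chooses between the indoor and outdoor segmentation networks based on per-pixel confidence scores. A minimal sketch of one plausible selection rule is shown below: compare the mean max-softmax confidence of each network's output and keep the prediction from the more confident one. The function name, the per-image averaging, and the use of max-softmax as the confidence measure are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def select_segmentation(probs_indoor, probs_outdoor):
    """Choose between two per-pixel class-probability maps of shape
    (H, W, C) by comparing their mean per-pixel confidence, where a
    pixel's confidence is its maximum softmax score.

    Returns the argmax label map of the more confident network and a
    tag naming which network was selected. This averaging rule is an
    assumption; the paper only states that selection uses the pixels'
    confidence scores.
    """
    conf_indoor = probs_indoor.max(axis=-1).mean()
    conf_outdoor = probs_outdoor.max(axis=-1).mean()
    if conf_indoor >= conf_outdoor:
        return probs_indoor.argmax(axis=-1), "indoor"
    return probs_outdoor.argmax(axis=-1), "outdoor"
```

A per-pixel variant (taking each pixel's label from whichever network is more confident at that pixel) would follow the same idea with an element-wise comparison instead of the mean.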
| Item Type: | Article |
|---|---|
| Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| SWORD Depositor: | Unnamed user with email elements@essex.ac.uk |
| Depositing User: | Unnamed user with email elements@essex.ac.uk |
| Date Deposited: | 05 Jan 2024 17:11 |
| Last Modified: | 05 Jan 2024 17:11 |
| URI: | http://repository.essex.ac.uk/id/eprint/37304 |
Available files
Filename: Indoor_Outdoor_Semantic_Segmentation_Using_Deep_Learning_for_Visually_Impaired_Wheelchair_Users.pdf
Licence: Creative Commons: Attribution 4.0