Karthikeyan, Srinidhi and Garcia Seco De Herrera, Alba and Doctor, Faiyaz and Mirza, Asim (2022) An OCR Post-correction Approach using Deep Learning for Processing Medical Reports. IEEE Transactions on Circuits and Systems for Video Technology, 32 (5). pp. 2574-2581. DOI https://doi.org/10.1109/TCSVT.2021.3087641
Abstract
According to a recent Deloitte study, the COVID-19 pandemic continues to place a huge strain on the global health care sector. COVID-19 has also catalysed digital transformation across the sector to improve operational efficiencies. As a result, the amount of digitally stored patient data, such as discharge letters, scan images, test results and free-text entries by doctors, has grown significantly. In 2020, 2314 exabytes of medical data were generated globally. This medical data does not conform to a generic structure and is mostly in the form of unstructured, digitally generated or scanned paper documents stored as part of a patient’s medical reports. This unstructured data is digitised using an Optical Character Recognition (OCR) process. A key challenge here is that the accuracy of the OCR process varies due to the inability of current OCR engines to correctly transcribe scanned or handwritten documents in which text may be skewed, obscured or illegible. This is compounded by the fact that the processed text comprises specific medical terminologies that do not necessarily form part of general language lexicons. The proposed work uses a deep neural network based self-supervised pre-training technique, Robustly Optimized Bidirectional Encoder Representations from Transformers (RoBERTa), which learns to predict hidden (masked) sections of text in order to fill in the gaps of non-transcribable parts of the documents being processed. Evaluating the proposed method on domain-specific datasets that include real medical documents shows a significantly reduced word error rate, demonstrating the effectiveness of the approach.
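The core idea described in the abstract, using a masked language model to propose replacements for unreadable OCR spans, can be illustrated with a minimal sketch. This is not the authors' released code; it assumes the Hugging Face `transformers` library and the off-the-shelf `roberta-base` checkpoint (the paper fine-tunes on domain-specific medical text), and the example sentence is invented for illustration.

```python
# Minimal illustrative sketch: ranking candidate fill-ins for a
# non-transcribable OCR token with a pretrained RoBERTa masked LM.
# Assumptions: off-the-shelf "roberta-base" weights; in the paper the
# model is adapted to medical documents before correction.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base")

# OCR output in which an illegible word has been replaced by the mask token.
ocr_line = (
    "The patient was discharged with a prescription for "
    f"{fill_mask.tokenizer.mask_token} to be taken twice daily."
)

# The model returns candidate tokens for the masked position, scored by
# probability; the top-ranked candidate can be substituted into the text.
for candidate in fill_mask(ocr_line, top_k=5):
    print(candidate["token_str"], round(candidate["score"], 3))
```

The corrected output would then be compared against a ground-truth transcription using word error rate, the metric reported in the abstract.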
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | Optical Character Recognition (OCR); Natural Language Processing (NLP); Robustly Optimized Bidirectional Encoder Representations from Transformers (RoBERTa); Medical documents |
| Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| SWORD Depositor: | Unnamed user with email elements@essex.ac.uk |
| Depositing User: | Unnamed user with email elements@essex.ac.uk |
| Date Deposited: | 27 Jan 2022 10:35 |
| Last Modified: | 30 Oct 2024 19:32 |
| URI: | http://repository.essex.ac.uk/id/eprint/32069 |
Available files
Filename: TCSVT3087641.pdf