Grainge, Oliver and Milford, Michael and Bodala, Indu and Ramchurn, Sarvapali D and Ehsan, Shoaib (2025) Structured Pruning for Efficient Visual Place Recognition. IEEE Robotics and Automation Letters, 10 (2). pp. 2024-2031. DOI https://doi.org/10.1109/lra.2024.3523222
Abstract
Visual Place Recognition (VPR) is fundamental for the global re-localization of robots and devices, enabling them to recognize previously visited locations from visual inputs. This capability is crucial for maintaining accurate mapping and localization over large areas. Because VPR methods need to operate in real time on embedded systems, it is critical to optimize them for minimal resource consumption. While the most efficient VPR approaches employ standard convolutional backbones with fixed descriptor dimensions, these often introduce redundancy in both the embedding space and the network architecture. Our work introduces a novel structured pruning method that not only streamlines common VPR architectures but also strategically removes redundancies within the feature embedding space. This dual focus significantly improves system efficiency, reducing map and model memory requirements and lowering feature extraction and retrieval latencies. Our approach reduces memory usage and latency by 21% and 16%, respectively, across models, while reducing recall@1 accuracy by less than 1%. This improvement enables real-time applications on edge devices with negligible accuracy loss.
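The abstract's core idea, removing whole channels from a convolutional backbone so that both the model and its descriptors shrink, can be sketched in a few lines. The snippet below is a minimal illustration of structured (channel-level) pruning by filter magnitude, not the paper's actual algorithm: it ranks a layer's output channels by the L1 norm of their filters, keeps the strongest fraction, and shrinks the next layer's input channels to match. The function name and the L1 criterion are assumptions for illustration.

```python
import numpy as np

def prune_channels(weight, next_weight, keep_ratio=0.75):
    """Hypothetical structured-pruning sketch (L1 filter-norm criterion).

    weight:      (C_out, C_in, k, k) conv kernel of the layer being pruned
    next_weight: (C_next, C_out, k, k) kernel of the following layer,
                 whose input channels must shrink to match
    """
    c_out = weight.shape[0]
    keep = max(1, int(round(c_out * keep_ratio)))
    # Score each output channel by the L1 norm of its filter weights,
    # then keep the highest-scoring channels (indices sorted for stability).
    scores = np.abs(weight).reshape(c_out, -1).sum(axis=1)
    kept = np.sort(np.argsort(scores)[::-1][:keep])
    # Removing an output channel here also removes the matching
    # input channel of the next layer, so both tensors shrink.
    return weight[kept], next_weight[:, kept]

rng = np.random.default_rng(0)
w1 = rng.normal(size=(8, 3, 3, 3))    # toy conv layer: 8 output channels
w2 = rng.normal(size=(16, 8, 3, 3))   # following layer consumes those 8
p1, p2 = prune_channels(w1, w2, keep_ratio=0.5)
print(p1.shape, p2.shape)  # (4, 3, 3, 3) (16, 4, 3, 3)
```

Because entire channels are removed rather than individual weights zeroed, the pruned tensors are genuinely smaller, which is what yields the memory and latency savings the abstract reports; when the pruned layer feeds the descriptor head, the embedding dimension shrinks too, reducing map storage and retrieval cost.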
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | Accuracy; Computational modeling; Convolutional neural networks; Feature extraction; Localization; Memory management; Real-time systems; representation learning; Robustness; Termination of employment; vision-based navigation; Visual place recognition; Visualization |
| Subjects: | Z Bibliography. Library Science. Information Resources > ZR Rights Retention |
| Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| SWORD Depositor: | Unnamed user with email elements@essex.ac.uk |
| Depositing User: | Unnamed user with email elements@essex.ac.uk |
| Date Deposited: | 31 Mar 2026 10:26 |
| Last Modified: | 31 Mar 2026 10:41 |
| URI: | http://repository.essex.ac.uk/id/eprint/42049 |
Available files
Filename: 2409.07834v1.pdf
Licence: Creative Commons: Attribution 4.0