Research Repository

Direct Visual and Inertial Odometry for Monocular Mobile Platforms

Gui, Jianjun (2018) Direct Visual and Inertial Odometry for Monocular Mobile Platforms. PhD thesis, University of Essex.

JianjunGui's Thesis.pdf
Restricted to Repository staff only until 28 September 2022.

Abstract

Nowadays, visual and inertial information is readily available from small mobile platforms such as quadcopters. However, due to limited onboard resources and computational capability, developing localisation and mapping algorithms for such small platforms remains a challenge. Vision-based techniques for tracking and motion estimation have been developed extensively, especially those using interest points as features. However, such sparse feature-based methods are prone to divergence under noise, partial occlusion or lighting variation. Only in recent years have direct visual approaches, which use pixel information densely, semi-densely or statistically, shown significant improvements in robustness and stability. Inertial sensors, on the other hand, measure angular velocity and linear acceleration, which can be integrated to predict the relative velocity, position and orientation of a mobile platform. In practice, the accumulated error of inertial sensors is often compensated by cameras, while the loss of visual tracking during agile egomotion can be compensated by inertial motion estimation. Building on this complementary nature of visual and inertial information, this research focuses on using direct visual approaches to provide location information from a monocular camera, fused with inertial information to enhance robustness and accuracy. The proposed algorithms can be applied to practical datasets collected from mobile platforms. In particular, direct and mutual-information-based methods are explored in detail. Two visual-inertial odometry algorithms are proposed within the framework of the multi-state constraint Kalman filter, and are tested on real data from a flying robot in complex indoor and outdoor environments.
The results show that the direct methods are robust in image processing and accurate when moving along straight lines with slight rotation. Furthermore, the visual and inertial fusion strategies are investigated to establish their intrinsic links, and an improvement via iterative steps in the filter propagation is proposed. In addition, a self-built flying robot was developed for collecting the experimental data.
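The integration of angular velocity and linear acceleration into relative orientation, velocity and position mentioned in the abstract can be sketched as a simple strapdown dead-reckoning loop. The sketch below is a minimal illustration under strong assumptions (bias-free samples, gravity already removed, fixed sampling interval); the function name and interfaces are illustrative, not the implementation used in the thesis:

```python
import numpy as np

def _exp_so3(w, dt):
    """Rotation matrix for angular velocity w applied over dt (Rodrigues' formula)."""
    theta = np.linalg.norm(w) * dt
    if theta < 1e-12:
        return np.eye(3)
    k = w / np.linalg.norm(w)  # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def integrate_imu(omegas, accels, dt):
    """Dead-reckon orientation R, velocity v and position p from gyro and
    accelerometer samples, starting from identity attitude at the origin."""
    R = np.eye(3)       # body-to-world rotation
    v = np.zeros(3)     # world-frame velocity
    p = np.zeros(3)     # world-frame position
    for w, a in zip(omegas, accels):
        a_world = R @ a                      # rotate acceleration into world frame
        p = p + v * dt + 0.5 * a_world * dt**2
        v = v + a_world * dt
        R = R @ _exp_so3(w, dt)              # propagate attitude on SO(3)
    return R, v, p
```

Because each step integrates the raw measurements, sensor noise and bias accumulate over time — exactly the drift that, as the abstract notes, the visual measurements are used to compensate.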

Item Type: Thesis (PhD)
Uncontrolled Keywords: Visual Inertial Odometry, SLAM, Localisation, Pose Estimation, Computer Vision, Sensor Fusion, Direct Methods
Subjects: Q Science > Q Science (General)
T Technology > T Technology (General)
Divisions: Faculty of Science and Health > Computer Science and Electronic Engineering, School of
Depositing User: Jianjun Gui
Date Deposited: 20 Mar 2018 15:30
Last Modified: 20 Mar 2018 15:30
URI: http://repository.essex.ac.uk/id/eprint/21726
