Simultaneous Localisation and Mapping for 3D Pedestrian Navigation based on FootSLAM using Inertial Sensors

(Mobile Multimedia Technologies)
External Thesis
Supervisor: Anas Al-Nuaimi
The vision of ubiquitous personal navigation is that everybody has access to high-precision positioning services in all sorts of environments. Travellers, people with special needs such as the blind, rescue personnel, museum visitors, airport staff, and doctors and nurses in hospitals could all benefit greatly from the ability to localise themselves in the areas they work in or travel through. Whilst this is quite easy in outdoor environments with the aid of satellite navigation systems such as GPS, it is much more challenging indoors, where the satellite signal is rarely received well enough to allow meter-level positioning accuracy.

To address this, an approach has been proposed that combines more than one system or sensor to derive the location; this is known as sensor fusion. A particularly useful sensor is the inertial sensor, consisting of a number of miniature electronic accelerometers and integrated gyroscopes (such as those found in modern smartphones). Having information about walls and other obstacles allows modern probabilistic estimation algorithms such as “particle filters” to constrain estimation hypotheses to the correct areas. Within a typical office building we can achieve positioning accuracy within a few meters using no external signal or system.

A problem remains, though: how do we obtain an accurate building plan? The DLR has recently developed a technique called “FootSLAM” that processes raw sensor data and can estimate the building map from this data alone. This has been successfully demonstrated for a single-user data set, and current work addresses how to merge data sets and maps from many such walks. Our approach segments the 2D space into hexagons of around 1 meter diameter, a scale that works well for many office environments.
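To illustrate the hexagonal segmentation, the following sketch maps a metric walker position to a discrete hexagon cell on an axial-coordinate grid with roughly 1 meter cell diameter. The class name, the pointy-top axial scheme, and the coordinate convention are illustrative assumptions, not the DLR implementation:

```java
// Sketch: segmenting the 2D plane into ~1 m hexagons and mapping a
// position in meters to its hexagon cell (axial coordinates q, r).
public class HexGrid {
    // "Size" = centre-to-corner distance; a 1 m corner-to-corner
    // diameter gives size = 0.5 m.
    static final double SIZE = 0.5;

    // Convert a metric position (x, y) to the axial coordinates of
    // the containing hexagon on a pointy-top grid.
    static int[] toHex(double x, double y) {
        double q = (Math.sqrt(3.0) / 3.0 * x - y / 3.0) / SIZE;
        double r = (2.0 / 3.0 * y) / SIZE;
        return roundHex(q, r);
    }

    // Round fractional axial coordinates to the nearest hexagon via
    // cube-coordinate rounding (q + r + s = 0), fixing the component
    // with the largest rounding error.
    static int[] roundHex(double q, double r) {
        double s = -q - r;
        long rq = Math.round(q), rr = Math.round(r), rs = Math.round(s);
        double dq = Math.abs(rq - q), dr = Math.abs(rr - r), ds = Math.abs(rs - s);
        if (dq > dr && dq > ds)      rq = -rr - rs;
        else if (dr > ds)            rr = -rq - rs;
        return new int[] { (int) rq, (int) rr };
    }

    public static void main(String[] args) {
        int[] origin = toHex(0.0, 0.0);  // walker at the origin
        int[] step   = toHex(1.2, 0.3);  // roughly one stride away
        System.out.println(origin[0] + "," + origin[1]);  // prints "0,0"
        System.out.println(step[0] + "," + step[1]);      // prints "1,0"
    }
}
```

A position about one stride away thus lands in a neighbouring cell, which is the property that lets step-length-scale pedestrian motion be counted as transitions between hexagon edges.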
However, a system is envisaged that automatically adapts the size of the FootSLAM features to the complexity of the environment, using triangular shapes of different sizes that can merge or divide to produce a fractal map. This thesis also plans the move from 2D to 3D building structures. The most important questions to be answered in this thesis are: How do we extend FootSLAM from 2D to a 3D representation of the world? How do we automate the process of self-scaling the fractal FootSLAM map? Which information-theoretic and probabilistic measures are helpful to control this? To answer these and other questions, the student should design and implement a prototype in Java, building on our current platform, collect sensor data in real environments such as multi-storey public buildings or offices, and evaluate the data-fusion performance and the accuracy of the resulting maps. This concrete Master's thesis topic is related to a number of Master's theses conducted in 2010/11 in cooperation between DLR and UMA.
Keywords: Sensor Fusion