Activity-Based Indoor Localization
This is the final research project for a wireless networking and mobile computing class. We proposed a method for indoor localization based on activity recognition from motion data. We implemented a smartphone app that collects the user's motion data from the onboard accelerometer, gyroscope, and compass and sends it to a server for processing and classification. The classifier infers which activity the person is engaged in (e.g., walking, sitting, climbing stairs), and this label serves as the observation for a particle-filter-based localization system. Combined with simple odometry and turn detection, we were able to correctly localize users inside a university building in a simulated environment.
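The core idea can be sketched as a standard particle filter whose measurement model compares the classified activity against the activity expected at each particle's map location. This is a minimal illustration, not our actual implementation: the grid map, activity labels, noise parameters, and weight values here are all assumptions chosen for the example.

```python
import math
import random

# Hypothetical map: each grid cell stores the activity expected there
# (hallway cells expect "walk"; one corner cell expects "sit").
ACTIVITY_MAP = {(x, y): "walk" for x in range(10) for y in range(3)}
ACTIVITY_MAP[(9, 2)] = "sit"

def motion_update(particles, speed, turn, dt=1.0,
                  speed_noise=0.1, turn_noise=0.05):
    """Propagate each (x, y, heading) particle with noisy odometry."""
    moved = []
    for x, y, h in particles:
        h = h + turn * dt + random.gauss(0, turn_noise)
        v = speed + random.gauss(0, speed_noise)
        moved.append((x + v * dt * math.cos(h),
                      y + v * dt * math.sin(h), h))
    return moved

def measurement_update(particles, activity, match_w=0.9, miss_w=0.1):
    """Weight each particle by whether the map cell under it expects
    the activity the classifier reported; return normalized weights."""
    weights = []
    for x, y, _ in particles:
        cell = (int(round(x)), int(round(y)))
        weights.append(match_w if ACTIVITY_MAP.get(cell) == activity
                       else miss_w)
    total = sum(weights)
    return [w / total for w in weights]

def resample(particles, weights):
    """Multinomial resampling proportional to the weights."""
    return random.choices(particles, weights=weights, k=len(particles))

def estimate(particles):
    """Mean particle position as the best-guess location."""
    n = len(particles)
    return (sum(p[0] for p in particles) / n,
            sum(p[1] for p in particles) / n)

if __name__ == "__main__":
    random.seed(0)
    particles = [(random.uniform(0, 9), random.uniform(0, 2),
                  random.uniform(-math.pi, math.pi)) for _ in range(500)]
    for _ in range(5):
        particles = motion_update(particles, speed=0.5, turn=0.0)
        w = measurement_update(particles, activity="walk")
        particles = resample(particles, w)
    print(estimate(particles))
```

Because an activity label is a much coarser observation than, say, a range scan, the filter typically needs many motion steps (and map regions with distinctive activities, like stairs or doors) before the particle cloud collapses to a single hypothesis.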
This video shows an example of our particle filter simulation:
In the first part of the video, we manually move a simulated user (the green dot) across a floor map. The blue areas are hallways, the red areas are stairs, the cyan spots are doors, the yellow regions are "standing" locations, and the magenta regions are "sitting" locations. The region the person is currently over is logged to a file, along with their movement speed and turn rate. In the second part, we run a particle filter over the same map to localize the simulated person, adding noise to the activity-classification outputs and to the speed and turn readings. Here the green circle is the ground-truth location, the blue circle is the filter's best estimate, the purple circles are alternative hypotheses maintained by the filter, and the small yellow dots are the individual particles.
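The noise we inject in the second stage can be modeled simply: flip the activity label with some probability (simulated classifier error) and perturb the odometry readings with Gaussian noise. This is an illustrative sketch; the label set, error rate, and noise magnitudes are assumptions, not the values used in our experiments.

```python
import random

# Assumed activity vocabulary matching the map regions described above.
ACTIVITIES = ["walk", "sit", "stand", "stairs", "door"]

def corrupt_activity(true_label, error_rate=0.2):
    """With probability error_rate, replace the true activity with a
    different label chosen uniformly, simulating a misclassification."""
    if random.random() < error_rate:
        return random.choice([a for a in ACTIVITIES if a != true_label])
    return true_label

def corrupt_odometry(speed, turn, speed_sigma=0.1, turn_sigma=0.05):
    """Add zero-mean Gaussian noise to the speed and turn-rate readings."""
    return (speed + random.gauss(0, speed_sigma),
            turn + random.gauss(0, turn_sigma))
```

Feeding these corrupted readings to the filter, instead of the ground-truth log, is what makes the second half of the simulation a meaningful test of robustness.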
Source code for this project is available on GitHub!
For more information about this project, please see the full report available here.