Autonomous Navigation And Mapping Using Monocular Low-Resolution Grayscale Vision

Vidya Murali and Stan Birchfield

Psychological studies have shown that human intelligence does not require high-resolution images to ascertain information about the environment for basic navigation. The "selective degradation hypothesis," developed by Owens and Leibowitz, says that some visual abilities, such as vehicle steering and speed control, remain relatively easy despite loss in visual acuity and color vision. Motivated by this idea, we have developed a system that uses only low-resolution (32 x 24) grayscale images to meet the challenges of autonomous corridor exploration and mapping by a mobile robot equipped with a single forward-facing camera. Using a combination of corridor ceiling lights, visual homing, and entropy, the robot is able to perform straight-line navigation down the center of an unknown corridor. Turning at the end of a corridor is accomplished using Jeffrey divergence and time-to-collision, while deflection from dead ends and blank walls uses a scalar entropy measure of the entire image. While exploring, the algorithm constructs a Voronoi-based topo-geometric map with nodes representing distinctive places such as doors, water fountains, and other corridors.
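The two image statistics named above, scalar entropy (for detecting blank walls and dead ends) and Jeffrey divergence (for detecting the large scene change at the end of a corridor), can both be computed from gray-level histograms of the 32 x 24 images. The sketch below is illustrative only: the bin count, the epsilon smoothing, and the particular symmetric form of the Jeffrey divergence are our assumptions, not taken from the paper.

```python
import numpy as np

def gray_histogram(img, bins=32):
    # Normalized gray-level histogram of a low-resolution image.
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    hist = hist.astype(float)
    return hist / hist.sum()

def scalar_entropy(img, bins=32):
    # Shannon entropy of the gray-level distribution; a nearly
    # uniform image (e.g., a blank wall filling the view) yields
    # low entropy, which can trigger deflection.
    p = gray_histogram(img, bins)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def jeffrey_divergence(img_a, img_b, bins=32, eps=1e-12):
    # Jeffrey divergence between two images' histograms (symmetric
    # form using the mean distribution m); a large value between
    # consecutive frames suggests a major scene change.
    p = gray_histogram(img_a, bins) + eps
    q = gray_histogram(img_b, bins) + eps
    m = 0.5 * (p + q)
    return float(np.sum(p * np.log(p / m) + q * np.log(q / m)))
```

For example, a frame of pure noise has much higher entropy than a frame of constant intensity, and the divergence between a frame and itself is zero.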

Algorithm Overview

The basic navigation algorithm is represented in the diagram below. The robot has three modes of operation: centering down the corridor, homing, and turning at the end of the corridor.
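The three modes can be pictured as a small state machine. The transition conditions in this sketch (ceiling lights visible, end of corridor detected, turn complete) are plausible triggers inferred from the description above, not the exact conditions used in the system.

```python
from enum import Enum, auto

class Mode(Enum):
    CENTERING = auto()  # steer using corridor ceiling lights
    HOMING = auto()     # visual homing when lights are unreliable
    TURNING = auto()    # turn at the end of the corridor

def next_mode(mode, at_corridor_end, lights_visible, turn_complete=False):
    # Hypothetical transition logic; the real system's conditions
    # (entropy, divergence, time-to-collision thresholds) may differ.
    if mode is Mode.TURNING:
        if not turn_complete:
            return Mode.TURNING
        return Mode.CENTERING if lights_visible else Mode.HOMING
    if at_corridor_end:
        return Mode.TURNING
    return Mode.CENTERING if lights_visible else Mode.HOMING
```

In this sketch the robot stays in TURNING until the turn finishes, then resumes centering if ceiling lights are visible and falls back to homing otherwise.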

Experimental Results

Navigation in the basement of Riggs
(1 min. 10 sec., 2.27 MB)

Sample map of the basement

Navigation in first floor of Riggs
(2 min. 3 sec., 2.85 MB)

Sample map of Riggs floor 1

Navigation in third floor of Riggs
(1 min. 45 sec., 1.86 MB)

Sample map of Riggs floor 3

Navigation with people in the corridor (50 sec., 0.6 MB)


This work has been integrated into a system to recover minimalistic corridor geometry.