Psychological studies have shown that human intelligence does not require high-resolution images to ascertain information about the environment for basic navigation. The "selective degradation hypothesis," developed by Owens and Leibowitz, holds that some visual abilities, such as vehicle steering and speed control, remain relatively robust despite losses in visual acuity and color vision. Motivated by this idea, we have developed a system that uses only low-resolution (32 x 24) grayscale images to address the challenges of autonomous corridor exploration and mapping by a mobile robot equipped with a single forward-facing camera. Using a combination of corridor ceiling lights, visual homing, and entropy, the robot is able to navigate in a straight line down the center of an unknown corridor. Turning at the end of a corridor is accomplished using Jeffrey divergence and time-to-collision, while deflection from dead ends and blank walls relies on a scalar entropy measure of the entire image. While exploring, the algorithm constructs a Voronoi-based topo-geometric map with nodes representing distinctive places such as doors, water fountains, and other corridors.
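To make the two image measures mentioned above concrete, here is a minimal sketch of a scalar image entropy (used for dead-end/blank-wall deflection) and the Jeffrey divergence between two intensity histograms (used in the end-of-corridor decision). The bin count, histogram range, and function names are illustrative assumptions, not the exact parameters of our system.

```python
import numpy as np

def scalar_entropy(img, bins=64):
    """Shannon entropy of a grayscale image's intensity histogram.
    A low value suggests a low-texture view (e.g. a blank wall),
    which serves as a cue to deflect away.  (bins=64 is an
    illustrative choice, not the system's actual setting.)"""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return float(-np.sum(p * np.log2(p)))

def jeffrey_divergence(p, q, eps=1e-12):
    """Symmetric Kullback-Leibler (Jeffrey) divergence between two
    normalized histograms, e.g. from successive camera frames."""
    p = np.asarray(p, dtype=float) + eps   # eps avoids log(0)
    q = np.asarray(q, dtype=float) + eps
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log(p / q) + q * np.log(q / p)))

# Example on a synthetic 32 x 24 grayscale frame:
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(24, 32))
print(scalar_entropy(frame))
```

A uniform (blank) image yields zero entropy, while a textured corridor view yields a higher value, so a simple threshold on this scalar can trigger the deflection behavior.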
The basic navigation behavior is represented in the diagram below. The robot has three modes of operation: centering, homing, and turning at the end of the corridor.
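The three modes can be sketched as a small state machine. The transition rule below is a hypothetical simplification for illustration: the cue names (`end_of_corridor`, `lights_visible`) are assumptions, and the real system's switching logic involves the image measures described above.

```python
from enum import Enum, auto

class Mode(Enum):
    CENTERING = auto()   # steer down the corridor center using ceiling lights
    HOMING    = auto()   # visual homing when ceiling lights are unavailable
    TURNING   = auto()   # turn at the detected end of the corridor

def next_mode(end_of_corridor, lights_visible):
    """Toy transition rule (an assumption, not the actual controller):
    turn when the corridor end is detected; otherwise center on the
    ceiling lights when they are visible, and fall back to homing."""
    if end_of_corridor:
        return Mode.TURNING
    return Mode.CENTERING if lights_visible else Mode.HOMING
```

In this sketch, the end-of-corridor cue takes priority over the other two modes, matching the idea that turning is triggered by a distinct detection event rather than by the steering cues.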
Sample map of the basement
Sample map of Riggs floor 1
Sample map of Riggs floor 3