About the Event
An indoor navigating robot must perceive its local environment in order to act. The robot must construct a concise and useful model of the environment from the stream of visual data it acquires while traveling within it. Visual processing must be done online and efficiently to keep up with the robot's needs.
This thesis contributes both representations and algorithms toward solving the problem of scene understanding for an indoor navigating robot. Two representations, the Planar Semantic Model (PSM) and the Action Opportunity Star (AOS), are proposed to capture navigation-relevant information about the local indoor environment. PSM is a concise representation of the geometric structure of indoor environments, and AOS is an abstraction of the navigation opportunities at a given location that can be extracted efficiently from PSM. Both representations can express incomplete knowledge, so models of unknown regions can be built incrementally as observations become available. An online generate-and-test framework is presented to construct PSM incrementally and efficiently from a stream of visual data. Our experimental results demonstrate that the framework can model a variety of indoor structures, including cluttered environments.
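To make the two representations concrete, the following is a purely illustrative sketch, not the thesis's actual formulation: it models a PSM as a set of planar wall segments (with a flag for regions not yet observed) and extracts an AOS-like summary by ray-casting from a location to find unobstructed headings. All class and function names here are hypothetical.

```python
import math
from dataclasses import dataclass, field
from typing import List

@dataclass
class Wall:
    # One planar wall segment in the ground plane, given by its endpoints.
    x1: float; y1: float; x2: float; y2: float
    observed: bool = True  # False would mark a hypothesized, unseen region

@dataclass
class PlanarSemanticModel:
    # Hypothetical stand-in for a PSM: just a collection of wall segments.
    walls: List[Wall] = field(default_factory=list)

def _ray_hits_segment(ox, oy, dx, dy, w, max_range):
    # Intersect the ray (ox,oy) + t*(dx,dy), 0 < t <= max_range,
    # with the segment from (x1,y1) to (x2,y2), 0 <= u <= 1.
    ex, ey = w.x2 - w.x1, w.y2 - w.y1
    denom = dx * ey - dy * ex          # cross(D, E)
    if abs(denom) < 1e-12:
        return False                   # ray parallel to the wall
    vx, vy = w.x1 - ox, w.y1 - oy
    t = (vx * ey - vy * ex) / denom    # distance along the ray
    u = (vx * dy - vy * dx) / denom    # position along the segment
    return 0.0 < t <= max_range and 0.0 <= u <= 1.0

def action_opportunity_star(psm, x, y, num_dirs=8, max_range=5.0):
    # AOS-like summary: for each of num_dirs evenly spaced headings,
    # report whether travel from (x, y) appears unobstructed in the PSM.
    star = []
    for k in range(num_dirs):
        theta = 2.0 * math.pi * k / num_dirs
        dx, dy = math.cos(theta), math.sin(theta)
        blocked = any(_ray_hits_segment(x, y, dx, dy, w, max_range)
                      for w in psm.walls)
        star.append(not blocked)
    return star
```

For example, in a corridor modeled by two walls at y = 1 and y = -1, a query at the origin with four headings reports the east and west directions open and the north and south directions blocked:

```python
psm = PlanarSemanticModel([Wall(-10, 1, 10, 1), Wall(-10, -1, 10, -1)])
print(action_opportunity_star(psm, 0.0, 0.0, num_dirs=4))
# [True, False, True, False]  (east, north, west, south)
```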