An Information Theoretic Framework for Camera and Lidar Sensor Data Fusion and its Applications in Autonomous Navigation of Vehicles
Friday, December 20, 2013
2:00pm - 4:00pm
About the Event
Abstract: This thesis develops an information theoretic framework for multi-modal sensor data fusion for robust autonomous navigation of vehicles. In particular, we focus on the registration of 3D lidar and camera data, two perception sensors commonly used in mobile robotics. This thesis presents a framework that allows the fusion of the two modalities and uses this fused information to enhance state-of-the-art registration algorithms used in robotics applications. It is important to note that the time-aligned discrete signals (3D points and their reflectivity from the lidar, and pixel location and color from the camera) are generated by sampling the same physical scene, but in different manners. Thus, although these signals look quite different at a high level (a 2D image from a camera looks entirely different from a 3D point cloud of the same scene from a lidar), since they are generated from the same physical scene, they are statistically dependent upon each other at the signal level. This thesis exploits this statistical dependence in an information theoretic framework to solve some of the common problems encountered in autonomous navigation tasks, such as sensor calibration, scan registration, and place recognition. In a general sense, we consider these perception sensors as sources of information (i.e., sensor data), and the statistical dependence of this information (obtained from different modalities) is used to solve problems related to multi-modal sensor data registration.
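The statistical dependence described in the abstract is commonly quantified with mutual information, estimated from a joint histogram of the paired measurements (e.g., lidar reflectivity and the camera intensity of the corresponding projected pixel under a candidate calibration). The sketch below is not the thesis's implementation, only a minimal plug-in estimator illustrating the idea; the histogram bin count and the pairing of samples are assumptions:

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Plug-in estimate of I(X; Y) from paired 1-D samples.

    Builds a joint histogram, normalizes it into an empirical joint
    distribution, and sums p(x,y) * log(p(x,y) / (p(x) p(y))).
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()          # empirical joint distribution
    px = pxy.sum(axis=1)               # marginal over y
    py = pxy.sum(axis=0)               # marginal over x
    nz = pxy > 0                       # skip zero cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))
```

In a registration setting, one would evaluate this score over candidate transformations (e.g., extrinsic calibration parameters) and keep the one that maximizes it: correctly aligned modalities, being samples of the same scene, yield higher mutual information than misaligned ones.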
Sponsor(s): Profs. Ryan Eustice & Silvio Savarese
Open to: Public