About the Event
When solving large optimization or continuous-valued inference problems, it is often assumed that noise can be well-modeled by a Gaussian distribution. This assumption leads to very fast algorithms, but it often does not reflect reality. From the inability of a mapping system to cope with perceptual ambiguity (such as an incorrectly "recognized" place) to the inability of a camera calibration system to cope with "bad" calibration images, non-Gaussian errors pose a fundamental challenge in making real-world systems work.
In this talk, I will describe our recent work on "max" mixtures, a probabilistic mixture model formulation that allows more realistic error models to be incorporated into an inference problem. Unlike the more conventional "sum" mixtures, we show that this mixture formulation permits very fast inference. We will present results from the mapping domain, showing how max mixtures can be used to overcome perceptual aliasing in a principled Bayesian manner. We will also describe our robust optimization approach as applied to camera calibration, as well as several other useful improvements over standard calibration methods, including automatic model selection and active learning.
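To give a flavor of the idea, here is a minimal illustrative sketch (not the authors' code) of the difference between a "sum" and a "max" mixture for a one-dimensional, two-component Gaussian error model. The component weights and parameters below are hypothetical, chosen to mimic a common robust setup: a tight inlier component plus a broad outlier component.

```python
import math

def gaussian_log_pdf(x, mu, sigma):
    """Log density of N(mu, sigma^2) at x."""
    return (-0.5 * ((x - mu) / sigma) ** 2
            - math.log(sigma * math.sqrt(2.0 * math.pi)))

def sum_mixture_log_likelihood(x, components):
    # log sum_i w_i N(x; mu_i, sigma_i): the log of a sum of Gaussians
    # is not quadratic in x, which breaks the structure that standard
    # least-squares solvers exploit.
    return math.log(sum(w * math.exp(gaussian_log_pdf(x, mu, s))
                        for w, mu, s in components))

def max_mixture_log_likelihood(x, components):
    # log max_i w_i N(x; mu_i, sigma_i) = max_i [log w_i + log N_i(x)].
    # Only one component is "active" at any point, so the cost remains
    # locally quadratic and fits directly into Gaussian solvers.
    return max(math.log(w) + gaussian_log_pdf(x, mu, s)
               for w, mu, s in components)

# Hypothetical robust model: 90% tight inlier, 10% wide outlier.
components = [(0.9, 0.0, 0.1), (0.1, 0.0, 10.0)]
near = max_mixture_log_likelihood(0.05, components)  # inlier dominates
far = max_mixture_log_likelihood(5.0, components)    # outlier dominates
```

Because the max of nonnegative terms is never larger than their sum, the max-mixture log-likelihood lower-bounds the sum-mixture one, while preserving the multimodal shape that lets the outlier component absorb gross errors.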