About the Event
Current systems that learn to process natural language require laboriously constructed human-annotated training data. Ideally, a computer would be able to acquire language like a child by being exposed to linguistic input in the context of a relevant but ambiguous perceptual environment. As a step in this direction, we will present systems that learn to sportscast simulated robot soccer games and to follow navigation instructions in virtual environments by simply observing sample human linguistic behavior in context. This work builds on our earlier work on supervised learning of semantic parsers that map natural language to a formal meaning representation. In order to apply such methods to learning from observation, we have developed methods that estimate the meaning of sentences from ambiguous perceptual context.
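To give a flavor of what "estimating meaning from ambiguous perceptual context" involves (this is only an illustrative toy, not the presenters' actual system): in the sportscasting setting, each commentary sentence is paired with several candidate game events that occurred around the same time, and the learner must infer which event the sentence actually describes. An EM-style loop that learns word/event co-occurrence scores can resolve this ambiguity; all data and names below are invented for illustration.

```python
from collections import defaultdict

# Toy ambiguous-supervision data: each commentary sentence is paired with
# the types of events occurring near the time it was uttered. The correct
# alignment (e.g. "passes" -> pass event) is never given explicitly.
data = [
    (["purple7", "passes", "to", "purple4"], ["pass", "kick"]),
    (["purple4", "shoots"],                  ["kick", "turnover"]),
    (["purple7", "passes", "again"],         ["pass", "block"]),
    (["pink3", "kicks", "toward", "goal"],   ["kick"]),
]

# Start with uniform word-event association counts.
assoc = defaultdict(lambda: 1.0)

for _ in range(20):
    new = defaultdict(lambda: 0.1)  # small smoothing count
    for words, events in data:
        # E-step: score each candidate event under the current associations.
        scores = {e: sum(assoc[(w, e)] for w in words) for e in events}
        total = sum(scores.values())
        # M-step: fractionally credit each word to each event by that score.
        for e, s in scores.items():
            for w in words:
                new[(w, e)] += s / total
    assoc = new

# Words that recur with one event type across sentences pull toward it:
# "passes" ends up most strongly associated with the pass event.
best = max(["pass", "kick", "block"], key=lambda e: assoc[("passes", e)])
print(best)
```

The key point the sketch shows is that no single sentence is unambiguous, but regularities across many sentence/context pairs let the learner bootstrap the correct alignments, which can then supervise an ordinary semantic parser.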
Raymond J. Mooney is a Professor in the Department of Computer Science at the University of Texas at Austin. He received his Ph.D. in 1988 from the University of Illinois at Urbana-Champaign. He is an author of over 150 published research papers, primarily in the areas of machine learning and natural language processing. He was President of the International Machine Learning Society from 2008 to 2011, program co-chair for the 2006 AAAI Conference on Artificial Intelligence, general chair of the 2005 Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, and co-chair of the 1990 International Conference on Machine Learning. He is a Fellow of both the American Association for Artificial Intelligence and the Association for Computing Machinery, and the recipient of best paper awards from the National Conference on Artificial Intelligence, the SIGKDD International Conference on Knowledge Discovery and Data Mining, the International Conference on Machine Learning, and the Annual Meeting of the Association for Computational Linguistics. His recent research has focused on learning for natural-language processing, connecting language and perception, statistical relational learning, and transfer learning.