Computer Vision Seminar
Understanding Behavior through First Person Vision
Georgia Institute of Technology
Wednesday, February 24, 2016
4:00pm - 5:30pm
About the Event
Recent progress in miniaturizing digital cameras and improving battery life has created a growing market for wearable cameras, exemplified by products such as GoPro and Google Glass. The imagery captured by these cameras is a unique video modality that implicitly encodes the attention, movement, and intentions of the user. The analysis of such video through First-Person Vision (FPV) provides new opportunities to model and analyze human behavior, create personalized records of visual experiences, and improve our understanding and treatment of a broad range of mental and physical health conditions. This talk will describe current research progress in First-Person Vision and its applications to healthcare.

We will present new results for automatically segmenting video objects, a core problem in computer vision that constitutes a building block for FPV. Our method is based on a novel unsupervised method for generating object proposals, which identifies and solves a long-standing problem in energy-based image segmentation. We will then characterize the features that are intrinsic to FPV and assess their utility for predicting attention, recognizing activities, and analyzing social interactions.

Next, we will describe applications of this technology in the treatment of autism and chronic health conditions. In the autism domain, we will demonstrate that FPV can provide objective and automatic measurements of social behavior, and give an example of assessing response to treatment in children with autism who are receiving behavioral interventions. I will end by showing some unpublished results in a few related areas: unsupervised deep learning of contour models from video, and autonomous high-speed driving.

This is joint work with Drs. Agata Rozga, Maithilee Kunda, Fuxin Li, and Alireza Fathi, and Ph.D. students Yin Li, Ahmad Humayan, Zhefan Ye, and Alicia Bargar.
James M. Rehg (pronounced "ray") is a Professor in the School of Interactive Computing at the Georgia Institute of Technology, where he is Director of the Center for Behavioral Imaging and co-Director of the Computational Perception Lab (CPL). He received his Ph.D. from CMU in 1995 and worked at the Cambridge Research Lab of DEC (and then Compaq) from 1995 to 2001, where he managed the computer vision research group. He received an NSF CAREER award in 2001 and a Raytheon Faculty Fellowship from Georgia Tech in 2005. He and his students have received best student paper awards at ICML 2005, BMVC 2010, Mobihealth 2014, and Face and Gesture 2015, and a 2013 Method of the Year Award from the journal Nature Methods. Dr. Rehg serves on the Editorial Board of the International Journal of Computer Vision; he served as Program co-Chair for ACCV 2012 and General co-Chair for CVPR 2009, and will serve as Program co-Chair for CVPR 2017. He has authored more than 100 peer-reviewed scientific papers and holds 25 issued US patents.

His research interests include computer vision, machine learning, robot perception, and mobile health. Dr. Rehg was the lead PI on an NSF Expedition to develop the science and technology of Behavioral Imaging: the measurement and analysis of social and communicative behavior using multi-modal sensing, with applications to developmental disorders such as autism. He is currently the Deputy Director of the NIH Center of Excellence on Mobile Sensor Data-to-Knowledge (MD2K), which is developing novel on-body sensing and predictive analytics for improving health outcomes. See www.cbs.gatech.edu and md2k.org for details.
Contact: Judi Jones
Faculty Sponsor: Jason Corso
Open to: Public
Note: The seminar has NOT been canceled due to inclement weather.