Defense Event

Extracting compact knowledge from massive data

Dejiao Zhang


 
Monday, May 06, 2019
1:00pm - 3:00pm
EECS 1005


About the Event

Abstract: Over the past couple of decades, we have witnessed an explosion in data generation from almost every aspect of our lives. Along with these huge volumes of data come more complex models, e.g., deep neural networks (DNNs). This increase in complexity demands new approaches to both the modeling and the analysis of data, among which low dimensionality and sparsity lie at the core. In this thesis, we follow this avenue to address three different problems in machine learning. Our motivations are:

1) Don’t solve a harder problem than you have to. The low-dimensional subspace model is popular because of its efficiency, ease of analysis, and better interpretability. We focus on subspace estimation from streaming data, with emphasis on the undersampled setting.

2) Less is more. DNNs are often over-parameterized, with many redundant connections. Successfully removing these connections can improve both efficiency and generalization. We present a new method for compressing DNNs that encourages sparsity while simultaneously tying together weights that correspond to strongly correlated neurons.

3) Build an explanatory model of data. A truly intelligent agent should be able to identify and disentangle the underlying explanatory factors of data without a teacher. We present an information-theoretic approach for jointly uncovering the underlying categories of data and separating its continuous representation into statistically independent parts, each encoding a specific variation in the data.
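For context on the first topic: a well-known algorithm for subspace estimation from streaming, undersampled vectors is GROUSE (Grassmannian Rank-One Update Subspace Estimation), which performs incremental gradient steps on the Grassmannian using only the observed entries of each incoming vector. Below is a minimal NumPy sketch of a GROUSE-style update, not the thesis's own implementation; the function name, the constant step size, and the toy dimensions are illustrative assumptions.

```python
import numpy as np

def grouse_step(U, v_obs, omega, eta=0.1):
    """One GROUSE-style update of an orthonormal basis U (n x d)
    from a single partially observed data vector.

    v_obs : observed entries of the vector (length |omega|)
    omega : indices of the observed entries
    eta   : step size (held constant here for simplicity)
    """
    U_omega = U[omega, :]                                 # basis rows at observed indices
    w, *_ = np.linalg.lstsq(U_omega, v_obs, rcond=None)   # best-fit coefficients
    p = U @ w                                             # predicted full vector
    r = np.zeros(U.shape[0])
    r[omega] = v_obs - U_omega @ w                        # residual on observed entries
    r_norm, p_norm = np.linalg.norm(r), np.linalg.norm(p)
    w_norm = np.linalg.norm(w)
    if r_norm < 1e-12 or w_norm < 1e-12:
        return U                                          # nothing to correct
    theta = eta * r_norm * p_norm                         # rotation angle on the Grassmannian
    direction = (np.cos(theta) - 1.0) * p / p_norm + np.sin(theta) * r / r_norm
    return U + direction[:, None] @ (w / w_norm)[None, :] # rank-one rotation of the basis

# Toy usage: track a 5-dimensional subspace of R^100 from 50%-sampled vectors.
rng = np.random.default_rng(0)
U_true = np.linalg.qr(rng.standard_normal((100, 5)))[0]
U = np.linalg.qr(rng.standard_normal((100, 5)))[0]
for _ in range(2000):
    v = U_true @ rng.standard_normal(5)
    omega = rng.choice(100, size=50, replace=False)
    U = grouse_step(U, v[omega], omega)
```

Because each update is a rotation of the basis, U stays orthonormal (up to numerical error) without an explicit re-orthogonalization step, which is what makes this style of method attractive for streaming data.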

Additional Information

Sponsor(s): Professor Laura Balzano

Open to: Public