Defense Event

Addressing Memory Bottlenecks for Emerging Applications

Animesh Jain


 
Tuesday, April 10, 2018
1:30pm - 3:30pm
3725 Beyster Building

About the Event

There has been a recent emergence of applications, from the domains of machine learning, data mining, numerical analysis, and image processing, that are becoming pervasive in both datacenter and mobile workloads. Because these applications are so pervasive, computer engineers need to design systems that run them efficiently. However, prior research shows that current hardware systems cannot match their massive computational and storage demands. This dissertation studies the bottlenecks that arise when we try to improve the performance of these applications on current hardware. We observe that most of these applications are data-intensive, i.e., they operate on large datasets, and consequently put significant stress on the memory hierarchy. Interestingly, the bottleneck is not limited to a single memory structure; different applications and use cases stress different levels of the memory hierarchy. For example, training Deep Neural Networks (DNNs), an emerging machine learning approach, is currently limited by the size of GPU main memory, while improving DNN inference on CPUs requires efficiently managing the bandwidth of the smallest memory structure, the Physical Register File (PRF). In this dissertation, we examine four such memory bottlenecks spread across the memory hierarchy: off-chip memory, on-chip memory, and the physical register file. We present a systematic analysis of these bottlenecks and propose hardware and software solutions to bridge the scalability gap of current hardware systems for these emerging applications.

Additional Information

Sponsor(s): Jason Mars and Lingjia Tang

Open to: Public