Faculty Candidate Seminar
Overcoming the Deep Learning Power Wall with Principled Unsafe Optimization
Monday, February 18, 2019
10:30am - 11:30am
About the Event
The computing industry has a power problem: the days of ideal power-process scaling are over, and chips now contain more devices than can be powered simultaneously, limiting performance. Continuing to scale performance under these power constraints requires creative solutions, and specialized hardware accelerators are one viable approach. While accelerators promise orders of magnitude more performance per watt, several challenges have limited their wide-scale adoption and fueled skepticism.
Brandon Reagen is a computer architect focused on specialized hardware (i.e., accelerators) and low-power design, with applications in deep learning. He received his PhD from Harvard in May 2018. Over the course of his PhD, Brandon made several research contributions that lower the barrier to using accelerators as general architectural constructs, including benchmarking, simulation infrastructure, and SoC design. Drawing on this expertise, he led the development of highly efficient, accurate deep learning accelerators through his work on principled unsafe optimizations. In his thesis, he found that for DNN inference, intricate full-stack co-design between the robust nature of the algorithm and the circuits it executes on can yield nearly an order of magnitude more power efficiency than standard ASIC design practices. His work has been published in conferences spanning architecture, ML, CAD, and circuits. Brandon is now a Research Scientist at Facebook on the AI Infrastructure team.
Open to: Public