Hiwot Kassa, a second-year PhD student in CSE, has been named an inaugural Microsoft Research Ada Lovelace Fellow. The award recognizes her accomplishments to date and Microsoft’s confidence in her as a future research leader. In addition to tuition and cash awards, the fellowship offers Kassa the opportunity for a research internship and an invitation to the Microsoft Research PhD Summit, a two-day workshop at Microsoft’s Redmond lab where fellows meet with Microsoft researchers and other top students to share their research.
Kassa’s current research project, advised by Arthur F. Thurnau Professor Valeria Bertacco, is called “Efficient application mapping to heterogeneous systems.” Her goal is to improve the speed and efficiency of high-performance applications, such as those that rely on large data centers, by coordinating a variety of different computing components to tackle each problem’s specific computational needs.
To accomplish this, Kassa is developing a framework that examines an application’s computations, communication patterns, and data access patterns and decides in real time which components are best suited to handle each part. This is a sharp turn from her previous work, which focused on the popular area of application-specific architectures designed to handle one type of problem as efficiently as possible.
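The core idea of such a runtime dispatcher can be illustrated with a minimal sketch. Everything here, the pattern categories, the affinity scores, and the `dispatch` function, is a hypothetical illustration of the concept, not Kassa’s actual framework:

```python
# Hypothetical sketch of computation-aware dispatch: classify each kernel by
# its dominant pattern, then route it to the component best suited for that
# pattern. The categories and affinity scores below are invented for
# illustration; a real framework would measure these at run time.
from dataclasses import dataclass

# Each device's relative affinity for a given computation pattern (made-up scores).
DEVICE_AFFINITY = {
    "dense_parallel":   {"gpu": 0.9, "fpga": 0.6, "cpu": 0.3},
    "streaming":        {"gpu": 0.4, "fpga": 0.9, "cpu": 0.5},
    "irregular_access": {"gpu": 0.2, "fpga": 0.4, "cpu": 0.8},
}

@dataclass
class Kernel:
    name: str
    pattern: str  # dominant computation/communication pattern

def dispatch(kernel: Kernel) -> str:
    """Pick the device with the highest affinity for this kernel's pattern."""
    scores = DEVICE_AFFINITY[kernel.pattern]
    return max(scores, key=scores.get)

# A toy "application" decomposed into computation-specific parts:
app = [
    Kernel("matrix_multiply", "dense_parallel"),
    Kernel("packet_filter", "streaming"),
    Kernel("graph_traversal", "irregular_access"),
]

mapping = {k.name: dispatch(k) for k in app}
print(mapping)
```

The point of the sketch is the shift in granularity: the unit being scheduled is a computation pattern, not a whole application.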
“Right now, because of the slowing down of Moore’s Law, we’re building lots of application-specific architectures,” Kassa says. Designers in this area tackle one application after another, designing an architecture specifically to suit each one’s needs. But this approach can quickly see diminishing returns.
“While working on that I realized it was too specific,” she says. “It depends not only on the application but the size of the data, the type of data you have, and several other very specific aspects – it’s not going to scale. As a researcher, I’m more interested in finding a unified solution.”
The finished product will operate as a layer on top of the computing components, whose strengths and weaknesses are already well known. These components, including graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and various kinds of hardware accelerators, are already in widespread use in large data centers. This framework would serve as the missing piece, coordinating those components so that each tackles the parts of an application it handles best.
“Software developers only want to worry about the applications they’re building,” Kassa says. Ideally, this framework will give developers greater efficiency and power without requiring them to alter their work. What matters to the framework is the type of computation that needs to be done at a given point in the program. Instead of being application-specific, the components are treated as computation-specific.
In addition to automating this process, the project will look at which types of hardware accelerators should be installed in data centers to maximize efficiency. Right now, research focuses on the needs of whole applications. But breaking applications down into their individual computations and coordinating different components to handle them would change what a data center needs. Adding certain accelerators can take the pressure off the other components.
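The provisioning question can be framed as a simple optimization: given the mix of computations a data center actually runs, which accelerators cover the most work per unit cost? The sketch below is a hypothetical illustration; the workload shares, candidate devices, costs, and the greedy heuristic are all invented for the example, not drawn from Kassa’s project:

```python
# Hypothetical sketch of accelerator provisioning: rank candidate accelerators
# by how much of the data center's computation mix they cover per unit cost,
# then pick greedily within a budget. All numbers are illustrative assumptions.

# Fraction of total data-center work dominated by each computation pattern.
workload_mix = {"dense_parallel": 0.5, "streaming": 0.3, "irregular_access": 0.2}

# Candidate accelerators: the pattern each speeds up, and a relative cost.
candidates = {
    "gpu":  {"pattern": "dense_parallel", "cost": 3.0},
    "fpga": {"pattern": "streaming", "cost": 2.0},
    "asic": {"pattern": "irregular_access", "cost": 4.0},
}

def pick_accelerators(budget: float) -> list:
    """Greedily pick accelerators by covered-work-per-cost until the budget runs out."""
    ranked = sorted(
        candidates.items(),
        key=lambda kv: workload_mix[kv[1]["pattern"]] / kv[1]["cost"],
        reverse=True,
    )
    chosen, spent = [], 0.0
    for name, spec in ranked:
        if spent + spec["cost"] <= budget:
            chosen.append(name)
            spent += spec["cost"]
    return chosen

print(pick_accelerators(budget=5.0))
```

Even this toy version shows the coupling the article describes: change the computation mix, and the right set of hardware to install changes with it.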
“What if we just need very small, efficient hardware components instead of building for all these applications different data systems?” Kassa asked. “I’m looking at different domains to show that we don’t need those application-specific solutions – we just need computation-specific units working together.”
Kassa completed her undergraduate degree in Electrical and Computer Engineering at the Addis Ababa Institute of Technology (AAiT). At Michigan, she got the chance to return to her alma mater as a participant in U-M’s collaborative research program at AAiT. Kassa traveled to AAiT’s campus in 2018 to help develop research projects with its graduate and undergraduate computer engineering students. While there, she mentored participating students, and she continues to do so remotely.
Posted January 17, 2019