For decades, processor speeds grew exponentially, but that
growth has abruptly stopped. Instead, the number of processor
cores on a chip is now growing exponentially. A
graphics processing unit (GPU) in a laptop may have 100 cores,
and supercomputers may have 1,000,000. At Michigan, we are
developing algorithms and data structures that use parallelism
to help solve large problems such as climate modeling and the
design of ethical clinical trials.
Abstract models of parallelism are also investigated, such as having a large number of small entities (cores on a chip, smart dust, ants, robots) working together on the same problem. Algorithms may need to take physical location into account, where communicating with entities far away takes more time and energy. We also study abstract models of distributed computation, where a large number of independent, unsynchronized computers are arranged in a (possibly unknown) network and must solve a problem only through local communication.
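One of the simplest algorithms in such local-communication models is flooding, in which a message spreads to every node of an unknown network purely through neighbor-to-neighbor sends. The sketch below is illustrative only (the graph, function, and variable names are invented for this example, not taken from any specific research result); it simulates synchronous flooding and records the round in which each node first hears the message.

```python
from collections import deque

def flood_broadcast(adjacency, source):
    """Simulate synchronous flooding in a network known only locally:
    each node may send messages only to its direct neighbors.
    Returns a dict mapping node -> round in which it first received
    the message (the source counts as round 0)."""
    received = {source: 0}
    frontier = deque([source])
    while frontier:
        node = frontier.popleft()
        for neighbor in adjacency[node]:
            if neighbor not in received:
                # A node forwards the message one round after hearing it.
                received[neighbor] = received[node] + 1
                frontier.append(neighbor)
    return received

# Hypothetical example: a five-node ring. No node knows the global
# topology, yet the message reaches everyone via local sends.
network = {
    0: [1, 4],
    1: [0, 2],
    2: [1, 3],
    3: [2, 4],
    4: [3, 0],
}
rounds = flood_broadcast(network, 0)
```

The number of rounds until the last node is reached equals the source's eccentricity in the network, which is one way the physical cost of distance mentioned above shows up even in this abstract model.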
Stout, Quentin F.
Related Labs, Centers, and Groups
Software Systems Laboratory