Multiagent Systems

Overview
Computational agents typically need to interact with other entities that share their environment - whether humans, other artificial agents, or both. To successfully cooperate, compete, or simply coexist with those entities, agents must be able to reason about how their decisions affect, and are affected by, the decisions of other agents. At Michigan, we study several aspects of agent reasoning in multiagent systems, including 1) how agents can coordinate intelligently and improve individual and joint performance through proactive selection of physical, communicative, and/or computational actions, 2) how agents can learn from experience to act and interact near-optimally, and 3) how agents should behave strategically to further their own interests in multiagent settings.
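
As a purely illustrative sketch of the strategic-reasoning theme above (a hypothetical Python example with an assumed coordination-game payoff matrix, not code from any group listed below), the snippet enumerates the pure-strategy Nash equilibria of a two-player normal-form game - the joint actions at which neither agent can gain by unilaterally changing its own action.

# Hypothetical example: brute-force search for pure-strategy Nash equilibria
# in a two-player normal-form game. Illustrates how one agent's best choice
# depends on the other agent's choice.

# Payoff matrices: PAYOFFS[row_action][col_action] = (row payoff, column payoff).
# This particular game is an assumed two-action coordination game.
PAYOFFS = [
    [(2, 2), (0, 0)],
    [(0, 0), (1, 1)],
]

def pure_nash_equilibria(payoffs):
    """Return the (row, col) action pairs at which neither player can gain by deviating."""
    n_rows = len(payoffs)
    n_cols = len(payoffs[0])
    equilibria = []
    for r in range(n_rows):
        for c in range(n_cols):
            row_payoff, col_payoff = payoffs[r][c]
            # Row player cannot improve by switching to another row action.
            row_ok = all(payoffs[r2][c][0] <= row_payoff for r2 in range(n_rows))
            # Column player cannot improve by switching to another column action.
            col_ok = all(payoffs[r][c2][1] <= col_payoff for c2 in range(n_cols))
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

if __name__ == "__main__":
    print(pure_nash_equilibria(PAYOFFS))  # -> [(0, 0), (1, 1)] for this coordination game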
 
Faculty
Baveja, Satinder Singh
Durfee, Edmund H
Wellman, Michael


Related Labs, Centers, and Groups
Distributed Intelligent Agents Group