About the Event
A central challenge in scaling up multiagent systems is to design computational techniques that an agent can use to make coordinated decisions despite its limited local awareness of the situations being faced by, and decisions being made by, all the other agents. One way to address this challenge for cooperative agents is for an agent to abstractly model others, and itself, in terms of behaving predictably to fulfill complementary long-term commitments and responsibilities. Assuming that all agents will act predictably can often improve joint performance and simplify agent decision making. In uncertain environments, however, actually meeting predictions can be problematic: an agent might be prevented from achieving intended outcomes, or might discover hidden costs that deter it from pursuing an intended plan.
This raises two fundamental questions that will be the focus of this talk. First, what latitude does an agent have to make local adjustments to its action choices while still faithfully pursuing its commitments and responsibilities to others? Second, how should the answer to the first question inform how commitments and responsibilities are defined, so that they achieve the benefits of predictability without harming collective performance when individuals cannot respond flexibly enough to evolving circumstances? I will describe some of our current answers to these questions, in which specific commitments and broader organizational responsibilities are expressed and reasoned about following decision-theoretic principles.