

 

  Tutorials
 


No. | Tutorial Title | Presenters | Duration | Date
T1 | Res. Meth. | | 1/2 Day | May 2 (AM)
T2 | El. Negotiation: Agent-Mediated Electronic Negotiation | La Poutre, Robu, Fatima, Ito | 1/2 Day | May 2 (PM)
T3 | ProMAS tut: Programming MAS (CANCELED) | Bordini, Dastani, Hindriks | 1 Day | May 2
T4 | CoopMAS tut: Cooperative Games in MAS | Chalkiadakis, Elkind, Wooldridge | 1 Day | May 2
T5 | Dec. Making: Decision Making in MAS | Doshi, Rabinovich, Amato, Spaan | 1 Day | May 2
T6 | Sec. Games: Security Games | Kiekintveld, Gatti, Jain | 1 Day | May 3
T7 | MARL I: Multi-Agent Reinforcement Learning I: Algorithms and Analysis Methods | De Jong, Kaisers, Melo, Nowe, Tuyls | 1/2 Day | May 3 (AM)
T8 | MARL II: Multi-Agent Reinforcement Learning II: Learning with and from Other Agents | De Jong, Kaisers, Melo, Nowe, Tuyls | 1/2 Day | May 3 (PM)
T9 | Social Laws: Social Laws for MAS | Ågotnes, Van der Hoek, Wooldridge | 1 Day | May 3

Tutorial Descriptions
Res. Meth.
This half-day tutorial provides an introduction to the field of agents and multiagent systems and an in-depth discussion of its methodological foundations. Starting from an overview of the history and state of the art of the field, we will review the main research methods and approaches to evaluating agents research, and provide guidance for planning, structuring, and conducting high-quality research projects so as to avoid methodological pitfalls and maximise impact. Moreover, the tutorial will provide space and time for reflection and debate on different approaches to agents research, and for “taking stock” of the state of the field.

El. Negotiation
This tutorial aims to give a broad overview of the state of the art in agent-mediated negotiation, focusing on the game-theoretic foundations of electronic negotiations. We review the main concepts from both cooperative and competitive bargaining theory, such as Pareto optimality and the Pareto-efficient frontier, as well as the utilitarian, Nash, and Kalai-Smorodinsky (egalitarian) solution concepts. We discuss and compare games with complete and with incomplete information. Next, we exemplify these concepts through some well-known sequential bargaining games, such as the ultimatum game.
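As a toy illustration of one of the solution concepts mentioned above (our own sketch, not part of the tutorial materials), the following Python snippet computes the Nash bargaining solution on a discretised linear frontier; all numbers are made up:

```python
# Toy sketch: the Nash bargaining solution on a discretised Pareto frontier.
# The frontier and disagreement point below are illustrative, invented values.

def nash_bargaining(frontier, disagreement=(0.0, 0.0)):
    """Pick the outcome maximising the Nash product (u1 - d1) * (u2 - d2)."""
    d1, d2 = disagreement
    feasible = [(u1, u2) for u1, u2 in frontier if u1 >= d1 and u2 >= d2]
    return max(feasible, key=lambda p: (p[0] - d1) * (p[1] - d2))

# A simple linear frontier u1 + u2 = 10, sampled at integer points.
frontier = [(x, 10 - x) for x in range(11)]
print(nash_bargaining(frontier))  # the symmetric split (5, 5)
```

With a symmetric frontier and symmetric disagreement point, the Nash product is maximised at the equal split, which is why the solution concept is often described as axiomatising a form of fairness.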

A particular emphasis will be placed on multi-issue (or multi-attribute) negotiation, a research area that has received significant attention from the multi-agent community in recent years. We discuss some of the challenges that arise in modeling negotiations over multiple issues, especially when no information (or only incomplete information) is available about the preferences of the negotiation partner(s), as well as some of the heuristics employed in AI and machine learning research to address this problem. The second part of the tutorial focuses on multi-issue negotiations with realistic limitations such as time constraints, computational tractability, private information, and online negotiation.

ProMAS tut
Multi-agent systems provide a design approach for developing systems that are able to operate in complex and dynamic environments. Recently, many exciting developments have emerged that facilitate the development of multi-agent systems. New technologies for interacting with environments and for managing the organization of agents are now available to ease the design of complex systems. At the same time, agent programming language technology has matured, and more sophisticated development environments are available for coding and debugging multi-agent systems. These technologies are now also being applied to build more challenging applications such as real-time games. The aim of this tutorial is to provide participants with a thorough understanding of these new technologies and developments, and to provide them with the basic skills to develop multi-agent systems themselves.

CoopMAS tut
Cooperative (or coalitional) games provide an expressive and flexible framework for modeling collaboration in multi-agent systems. However, from a computational perspective, cooperative games present a number of challenges, chief among them how to represent such games succinctly and how to reason efficiently with such representations. In this tutorial, we survey work on several aspects of cooperative games and their applications to multi-agent systems. We assume a basic knowledge of AI principles (e.g., rule-based knowledge representation, very basic logic), but no knowledge of game theory or cooperative games. We introduce the basic models used in cooperative game theory, and the relevant solution concepts. We then describe the key computational issues surrounding such models, and survey the main approaches developed over the past decade for representing and reasoning about cooperative games in AI and computer science generally. We then discuss the aspects of cooperative games that are particularly important in multi-agent settings, such as uncertainty and decentralized coalition formation algorithms. We conclude by presenting recent applications of these ideas in multi-agent scenarios.
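To make the flavour of these solution concepts concrete, here is a small self-contained sketch (our own illustration, not tutorial material) that computes the Shapley value of a toy three-player majority game by brute-force enumeration of player orderings:

```python
# Toy sketch: Shapley value by averaging marginal contributions over all
# orderings of the players. The game below is an invented example.
from itertools import permutations

def shapley(players, v):
    """Shapley value: each player's average marginal contribution."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in players}

# Three-player majority game: a coalition wins (value 1) iff it has >= 2 members.
v = lambda S: 1.0 if len(S) >= 2 else 0.0
print(shapley(['a', 'b', 'c'], v))  # symmetric game: each player gets 1/3
```

Enumerating all n! orderings is only feasible for very small n; the representation and tractability issues surveyed in the tutorial are precisely about avoiding such brute force.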

Dec. Making
Choosing optimally among different lines of actions is a key aspect of autonomy in agents. The process by which an agent arrives at this choice is complex, particularly in environments shared with other agents. Drawing motivation, in part, from search and rescue applications in disaster management, the tutorial will span the range of multiagent interactions of increasing generality, and study a set of optimal and approximate solution techniques to time-extended decision making in both noncooperative and cooperative multiagent contexts. This self-contained tutorial will begin with the relevant portions of game theory and culminate with several advanced decision-theoretic models of agent interactions.
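As a baseline for the single-agent case such tutorials build on, here is a minimal value-iteration sketch for a two-state MDP (our own toy model; all states, actions, and numbers are hypothetical):

```python
# Toy sketch: value iteration on a tiny MDP with invented dynamics.
# P[s][a][s2] is a transition probability, R[s][a] an immediate reward.

def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-8):
    V = {s: 0.0 for s in states}
    while True:
        newV = {}
        for s in states:
            # Bellman optimality backup: best action's expected return.
            newV[s] = max(
                sum(P[s][a][s2] * (R[s][a] + gamma * V[s2]) for s2 in states)
                for a in actions
            )
        if max(abs(newV[s] - V[s]) for s in states) < eps:
            return newV
        V = newV

states = ['safe', 'risky']
actions = ['stay', 'move']
P = {'safe':  {'stay': {'safe': 1.0, 'risky': 0.0},
               'move': {'safe': 0.2, 'risky': 0.8}},
     'risky': {'stay': {'safe': 0.0, 'risky': 1.0},
               'move': {'safe': 0.7, 'risky': 0.3}}}
R = {'safe':  {'stay': 1.0, 'move': 0.0},
     'risky': {'stay': 2.0, 'move': 0.0}}
print(value_iteration(states, actions, P, R))
```

The multi-agent models covered in the tutorial generalise exactly this picture: once other agents' choices affect the transition and reward structure, a single Bellman backup no longer suffices.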

The tutorial is aimed at graduate students and researchers who want to enter this emerging field or to better understand recent results in this area and their implications on the design of multi-agent systems. Participants should have a basic knowledge of probability theory, and preferably, utility theory.

Sec. Games
Game theory is an increasingly important paradigm for modeling and decision-making in security domains, including homeland security resource allocation decisions, robot patrolling strategies, and computer network security. Several deployed real-world systems use game theory to randomize critical security decisions to prevent terrorist adversaries from exploiting a predictable security schedule. The ARMOR system deployed at Los Angeles International Airport (LAX) and the IRIS system deployed by the Federal Air Marshals Service were first presented at the AAMAS conference.

This tutorial will introduce a wide variety of game-theoretic modeling techniques and algorithms that have been developed in recent years for security problems. Introductory material on game theory and mathematical programming (optimization) will be included in the tutorial, so no prerequisite knowledge is required of participants. After introducing the basic security game framework, we will describe algorithms for scaling to very large games, methods for modeling uncertainty and attacker observation capabilities in security games, and applications of these techniques for randomized resource allocation and patrolling problems. At the end we will highlight the many opportunities for future work in this area, including exciting new domains and fundamental theoretical and algorithmic challenges.
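A minimal sketch of the basic idea (our own toy model, unrelated to the ARMOR or IRIS systems): with one defender resource and two targets, the attacker observes the defender's mixed strategy and attacks the target with the highest expected payoff, so the defender's optimal randomised coverage equalises the attacker's expected payoffs across targets:

```python
# Toy two-target security game with illustrative, invented values.
# The attacker gets v_i if target i is left unprotected and 0 if it is covered;
# the defender has one resource and splits coverage probability between targets.

def optimal_coverage(v1, v2):
    """Coverage probabilities equalising the attacker's expected payoffs:
    v1 * (1 - c1) = v2 * (1 - c2) with c1 + c2 = 1 gives c1 = v1 / (v1 + v2)."""
    c1 = v1 / (v1 + v2)
    return c1, 1.0 - c1

c1, c2 = optimal_coverage(v1=6.0, v2=2.0)
print(c1, c2)  # 0.75 0.25: the more valuable target is covered more often
```

Real deployed systems face the scaling, uncertainty, and observation issues described above, which is where the specialised algorithms covered in the tutorial come in.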

MARL I+II
Participants will be taught the basics of single-agent reinforcement learning (RL) and the associated theoretical convergence guarantees for Markov Decision Processes (MDPs). We will then outline how these guarantees are lost in a setting where multiple agents learn, and introduce a framework, based on game theory and evolutionary game theory (EGT), that allows thorough analysis and prediction of the dynamics of multi-agent learning. We also discuss a fundamental question that designers of multi-agent learning algorithms are confronted with: what is it that we want the agents to learn?
Fairness is shown to be an important consideration here, especially when systems are designed to collaborate with human agents. Finally, the last part of the tutorial will focus on reward-free multi-agent scenarios, in which the agents learn a task by observing other agents perform it. We introduce several social learning mechanisms that have been gathering increasing attention and that may lead to different outcomes than individual RL.
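The kind of EGT analysis mentioned above can be illustrated with a small replicator-dynamics sketch (our own toy example, not tutorial material) for the Prisoner's Dilemma:

```python
# Toy sketch: discrete-time replicator dynamics for the Prisoner's Dilemma,
# the kind of EGT model used to analyse multi-agent learning dynamics.
# Standard illustrative payoffs with T > R > P > S.
R_, S_, T_, P_ = 3.0, 0.0, 5.0, 1.0   # reward, sucker, temptation, punishment

def replicator_step(x, dt=0.01):
    """One Euler step; x is the fraction of cooperators in the population."""
    f_c = R_ * x + S_ * (1 - x)       # expected fitness of cooperators
    f_d = T_ * x + P_ * (1 - x)       # expected fitness of defectors
    f_bar = x * f_c + (1 - x) * f_d   # population-average fitness
    return x + dt * x * (f_c - f_bar)

x = 0.9                               # start with 90% cooperators
for _ in range(10000):
    x = replicator_step(x)
print(round(x, 3))                    # ~0.0: cooperation dies out
```

Since defection strictly dominates cooperation here, cooperators always have below-average fitness and their share decays to zero, mirroring the loss of single-agent convergence guarantees once multiple adapting agents interact.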

The tutorial is offered in two half-day parts. Participants can register for each separate part (at the cost of a half-day tutorial), or for both parts (at the cost of a full-day tutorial).

Social Laws
The tutorial gives an overview of the state of the art in the use of social laws for coordinating multi-agent systems. It discusses questions such as: how can a social law that ensures some particular global behaviour be automatically constructed? If two social laws achieve the same objective, which one should we use? How can we construct a social law that works even if some agents do not comply? Which agents are most important for a social law to achieve its objective? It turns out that to answer questions like these, we can apply a suite of tools available from the interdisciplinary tool chest of multi-agent systems. The tutorial also gives instruction in research practices and methodology in multi-agent systems: what are the key research questions of interest, and what are some of the most important methods employed in this interdisciplinary field?
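One of the questions above, whether a social law still lets the system achieve its objective, can be sketched as a simple reachability check on a toy transition system (our own illustration; all states and transitions are invented):

```python
# Toy sketch: a social law modelled as a set of forbidden transitions.
# A law is "effective" here if the objective state remains reachable from
# the initial state once the forbidden transitions are removed.
from collections import deque

def reachable(transitions, start, goal, law=frozenset()):
    """BFS over the transition system, skipping edges the social law forbids."""
    frontier, seen = deque([start]), {start}
    while frontier:
        s = frontier.popleft()
        if s == goal:
            return True
        for (src, dst) in transitions:
            if src == s and (src, dst) not in law and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return False

T = {('s0', 's1'), ('s0', 's2'), ('s1', 's3'), ('s2', 's3')}
law = frozenset({('s0', 's1')})        # forbid one transition
print(reachable(T, 's0', 's3'))        # True: unconstrained system
print(reachable(T, 's0', 's3', law))   # True: this law preserves the objective
```

A law that removed both transitions into s3 would make the objective unreachable, which is the kind of property that automated social-law synthesis must rule out.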

 


 
AAMAS 2011 Secretariat

Elite Professional Conference Organizer
Mr. JUN Tsai / 4F., No.20, Ln.128, Jingye 1st Rd., Taipei City 104, Taiwan / Tel: +886-2-8502-7087 Ext. 28 / Fax: +886-2-8502-7025
E-mail: aamas2011@elitepco.com.tw