Recent multi-agent reinforcement learning (MARL) approaches have achieved impressive successes in artificial intelligence (e.g., surpassing humans at Chess, Go, or StarCraft). However, most approaches to date do not consider explicit communication between agents, favoring fully decentralized collaboration over true cooperation (joint action taking by several agents). For many robotics problems, such as collaborative construction or path planning, these communication-free approaches have achieved very good results and can scale to large teams (up to thousands of agents), but at the cost of suboptimal performance. On the other hand, approaches that explicitly consider communication between agents have shown limited scalability (i.e., they remain limited to a few agents), drastically restricting their applicability to real-life robotics applications.
This project aims to investigate and contribute to the state of the art in communication learning, where multiple agents learn both a policy that dictates their actions and a language with which they can communicate and influence each other's actions to achieve true cooperation. The project will investigate the development and application of these approaches to a variety of multi-agent cooperation problems, in particular cases where agents cannot complete the task at hand without explicitly sharing information among themselves.
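As a minimal sketch of what "jointly learning a policy and a language" can mean in practice, the toy example below (not from the project itself; all names and hyperparameters are illustrative assumptions) trains a tabular speaker-listener referential game with REINFORCE: a speaker observes a target and emits a discrete symbol, a listener sees only the symbol and must guess the target, and both agents share the reward, so the task is unsolvable without communication.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3                  # number of targets = messages = guesses (illustrative)
S = np.zeros((N, N))   # speaker logits: target -> message
L = np.zeros((N, N))   # listener logits: message -> guess
lr, baseline = 0.1, 0.0

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for episode in range(5000):
    t = rng.integers(N)                          # environment picks a target
    ps = softmax(S[t]); m = rng.choice(N, p=ps)  # speaker emits a symbol
    pl = softmax(L[m]); g = rng.choice(N, p=pl)  # listener guesses from the symbol alone
    r = 1.0 if g == t else 0.0                   # shared reward: true cooperation
    adv = r - baseline
    baseline += 0.01 * (r - baseline)            # running-mean baseline reduces variance
    # REINFORCE: push up log-prob of the chosen symbol/guess, scaled by advantage
    gs = -ps; gs[m] += 1.0; S[t] += lr * adv * gs
    gl = -pl; gl[g] += 1.0; L[m] += lr * adv * gl

# greedy evaluation: has a shared "language" emerged?
acc = np.mean([L[S[t].argmax()].argmax() == t for t in range(N)])
print(f"greedy accuracy: {acc:.2f}")
```

With enough episodes the two agents typically converge on an arbitrary but consistent mapping from targets to symbols, i.e., an emergent protocol; deep MARL communication-learning methods replace the tabular policies with neural networks and richer message spaces.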
Related recent publications: