Adaptation and Learning in Wireless Autonomous Systems

Effective communication is required for teams of robots to solve sophisticated collaborative tasks. In practice, both the encoding and the semantics of communication are typically defined manually by an expert, whether the behaviors themselves are bespoke, optimization-based, or learned. In a paper to be presented at the IEEE International Conference on Robotics and Automation on May 22 in Montreal, DCIST researchers present an agent architecture and training methodology that uses neural networks to learn task-oriented communication semantics from a centralized expert policy that is not informed by communication constraints. A perimeter defense game illustrates the system's ability to handle a dynamically changing number of agents, and its graceful degradation in performance as communication constraints are tightened or the expert's observability assumptions are broken.
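
To make the idea concrete, below is a minimal sketch (in PyTorch, not the authors' code) of this style of training: each agent encodes its local observation into a low-dimensional message, messages are pooled with a permutation-invariant sum so the team size can change, and the decentralized agents are trained by regressing their actions onto those of a centralized expert that sees the full joint state. All dimensions, network shapes, and the stand-in linear expert here are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

OBS_DIM, MSG_DIM, ACT_DIM = 4, 2, 2   # illustrative toy sizes
N_AGENTS = 3

class Agent(nn.Module):
    """One decentralized agent: a message encoder plus a local policy."""
    def __init__(self):
        super().__init__()
        # Encoder maps the agent's local observation to a small,
        # bandwidth-limited message; MSG_DIM is the communication budget.
        self.encoder = nn.Sequential(nn.Linear(OBS_DIM, 32), nn.ReLU(),
                                     nn.Linear(32, MSG_DIM))
        # Policy acts on the local observation plus pooled teammate messages.
        self.policy = nn.Sequential(nn.Linear(OBS_DIM + MSG_DIM, 32), nn.ReLU(),
                                    nn.Linear(32, ACT_DIM))

def decentralized_actions(agents, observations):
    # Each agent computes its message from its own observation only.
    messages = [a.encoder(o) for a, o in zip(agents, observations)]
    pooled = torch.stack(messages).sum(dim=0)  # permutation-invariant sum,
                                               # so the team size can vary
    return torch.stack([
        a.policy(torch.cat([o, pooled - m]))   # pool excludes own message
        for a, o, m in zip(agents, observations, messages)])

# Stand-in for the centralized expert: a fixed random linear map from the
# full joint state to joint actions. In the paper this role is played by a
# hand-designed expert policy with unrestricted observability.
W_EXPERT = torch.randn(N_AGENTS * ACT_DIM, N_AGENTS * OBS_DIM)

def centralized_expert(observations):
    joint = torch.cat(observations)            # the expert sees everything
    return (W_EXPERT @ joint).view(N_AGENTS, ACT_DIM)

agents = [Agent() for _ in range(N_AGENTS)]
opt = torch.optim.Adam([p for a in agents for p in a.parameters()], lr=1e-3)

for step in range(200):
    obs = [torch.randn(OBS_DIM) for _ in range(N_AGENTS)]
    # Imitation loss: match the expert's joint action from local views only.
    loss = nn.functional.mse_loss(decentralized_actions(agents, obs),
                                  centralized_expert(obs).detach())
    opt.zero_grad(); loss.backward(); opt.step()
```

In this sketch, shrinking MSG_DIM is the analogue of tightening the communication constraint studied in the paper, and the sum pooling is one simple way to accommodate a varying number of agents.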

Highlight Video: https://youtu.be/bRppMwGSoWk


Source: https://arxiv.org/abs/1901.08490

Reference: James Paulos, Steven W. Chen, Daigo Shishika, and Vijay Kumar, “Decentralization of Multiagent Policies by Learning What to Communicate,” to appear at the 2019 IEEE International Conference on Robotics and Automation (ICRA 2019).

Points of Contact: Vijay Kumar (PI), James Paulos, and Daigo Shishika