Entries by Lily Hoot

Asynchronous and Parallel Distributed Pose Graph Optimization

A recent paper by members of the DCIST alliance has received a 2020 honorable mention from IEEE Robotics and Automation Letters. The paper presents Asynchronous Stochastic Parallel Pose Graph Optimization (ASAPP), the first asynchronous algorithm for distributed pose graph optimization (PGO) in multi-robot simultaneous localization and mapping. By enabling robots to optimize their local trajectory estimates […]
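
The general flavor of asynchronous, parallel optimization over a pose graph can be conveyed with a toy sketch (illustrative only, not the ASAPP algorithm itself): each robot repeatedly takes gradient steps on its own variables using whatever estimates it last received from its neighbors, with no synchronized rounds.

    # Toy illustration of asynchronous block updates on a 1-D "pose graph"
    # (illustrative only -- not the ASAPP algorithm from the paper).
    import random

    # Relative measurements z[(i, j)] ~ x[j] - x[i] between neighboring robots.
    z = {(0, 1): 1.0, (1, 2): 1.1, (2, 3): 0.9}
    x = [0.0, 0.0, 0.0, 0.0]          # each robot's current estimate
    step = 0.2

    for _ in range(2000):
        i = random.randrange(4)        # robot i wakes up asynchronously
        grad = 0.0
        for (a, b), zab in z.items():  # accumulate the gradient of edges touching robot i,
            r = x[b] - x[a] - zab      # evaluated with possibly stale neighbor estimates
            if a == i:
                grad += -2.0 * r
            elif b == i:
                grad += 2.0 * r
        x[i] -= step * grad            # local gradient step, no global synchronization

    print([round(v, 2) for v in x])    # estimates become consistent with the measurements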

Non-Monotone Energy-Aware Information Gathering for Heterogeneous Robot Teams

A recent paper by members of the DCIST alliance considers the problem of planning trajectories for a team of sensor-equipped robots to reduce uncertainty about a dynamical process. Optimizing the trade-off between information gain and energy cost (e.g., control effort, energy expenditure, distance travelled) is desirable but leads to a non-monotone objective function in the […]
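
A rough sense of why this trade-off is delicate (the toy sketch below is not the algorithm or the guarantees from the paper) comes from scoring candidate trajectories by a coverage-style information gain minus an energy cost: adding a costly trajectory can lower the total score, which is exactly what makes the combined objective non-monotone.

    # Hypothetical candidate trajectories: each observes a set of cells and has
    # an energy cost. Illustrative sketch only.
    candidates = {
        "A": ({1, 2, 3}, 1.0),
        "B": ({3, 4},    0.5),
        "C": ({5},       2.0),   # expensive for little new information
    }

    def objective(selected):
        """Coverage-style information gain minus total energy cost."""
        covered = set().union(*(candidates[k][0] for k in selected)) if selected else set()
        energy = sum(candidates[k][1] for k in selected)
        return len(covered) - energy   # can DECREASE when a costly candidate is added

    # Simple greedy: keep adding the candidate with the best positive marginal gain.
    selected = set()
    while True:
        best, best_gain = None, 0.0
        for k in candidates.keys() - selected:
            gain = objective(selected | {k}) - objective(selected)
            if gain > best_gain:
                best, best_gain = k, gain
        if best is None:
            break
        selected.add(best)

    print(selected, objective(selected))   # e.g. {'A', 'B'} -- "C" is never worth its energy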

Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping

A recent paper by members of the DCIST alliance develops an open-source C++ library for real-time metric-semantic visual-inertial Simultaneous Localization and Mapping (SLAM). The library goes beyond existing visual and visual-inertial SLAM libraries (e.g., ORB-SLAM, VINS-Mono, OKVIS, ROVIO) by enabling mesh reconstruction and semantic labeling in 3D. Kimera is designed with modularity in mind […]

Asymptotically Optimal Planning for Non-myopic Multi-Robot Information Gathering

A recent paper by members of the DCIST alliance develops a novel, highly scalable sampling-based planning algorithm for multi-robot active information acquisition tasks in complex environments. Active information gathering scenarios include target localization and tracking, active Simultaneous Localization and Mapping (SLAM), surveillance, environmental monitoring, and others. The goal is to compute control policies for mobile robot […]

Active Exploration in Signed Distance Fields

When performing tasks in unknown environments, it is useful for a team of robots to have a good map of the area to assist in efficient, collision-free planning and navigation. A recent paper by members of the DCIST alliance tackles the problem of autonomous mapping of unknown environments using information-theoretic metrics and signed distance […]
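
For context, the signed distance value itself is easy to obtain from an occupancy grid; the sketch below uses SciPy's Euclidean distance transform and illustrates only the representation, not the paper's information-theoretic exploration strategy.

    # Signed distance field from a binary occupancy grid (True = occupied).
    # Positive values are distances to the nearest obstacle; negative values are
    # depths inside obstacles. Illustrative sketch, not the paper's mapping pipeline.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    grid = np.zeros((8, 8), dtype=bool)
    grid[3:5, 3:6] = True                              # a small rectangular obstacle

    dist_to_obstacle = distance_transform_edt(~grid)   # free cells: distance to obstacles
    dist_to_free     = distance_transform_edt(grid)    # occupied cells: distance to free space
    sdf = dist_to_obstacle - dist_to_free

    print(np.round(sdf, 1))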

Learning Multi-Agent Policies from Observations

A recent paper from the DCIST team introduces a framework for learning to perform multi-robot missions by observing an expert system executing the same mission. The expert system is a team of robots equipped with a library of controllers, each designed to solve a specific task. The expert system’s policy selects the controller necessary to […]

Sim-to-(Multi)-Real: Transfer of Low-Level Robust Control Policies to Multiple Quadrotors

A recent paper by members of the DCIST alliance applies reinforcement learning techniques to train policies in simulation that transfer remarkably well to multiple different physical quadrotors. Quadrotor stabilizing controllers often require careful, model-specific tuning for safe operation. The policies developed are low-level, i.e., they map the rotorcraft’s state directly to the […]
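
Low-level policies of this kind are typically small feed-forward networks from the vehicle state to individual rotor commands; the sketch below shows only that generic structure, and the layer sizes and four-output convention are assumptions rather than the trained policy from the paper.

    # Generic sketch of a low-level quadrotor policy: a small feed-forward
    # network mapping the vehicle state to four rotor commands. Sizes and the
    # output convention are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    state_dim, hidden, n_rotors = 18, 64, 4    # e.g. position error, velocity, attitude, rates

    W1 = rng.normal(scale=0.1, size=(hidden, state_dim))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.1, size=(n_rotors, hidden))
    b2 = np.zeros(n_rotors)

    def policy(state):
        h = np.tanh(W1 @ state + b1)
        return np.tanh(W2 @ h + b2)            # normalized rotor commands in [-1, 1]

    print(policy(np.zeros(state_dim)))          # four commands for a single state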

Planning with Uncertain Specifications (PUnS)

Consider the task of setting a dinner table. It involves placing the appropriate serving utensils and silverware according to the dishes being served. Some of the objects need to be placed in a particular order, either because they are stacked on top of each other or because of cultural traditions. Many real-world tasks demonstrate such […]

Synthesis of a Time-Varying Communication Network by Robot Teams with Information Propagation Guarantees

A recent paper by Xi Yu and M. Ani Hsieh from the University of Pennsylvania presents a distributed control and coordination strategy that allows a swarm of mobile robots to form an intermittently connected communication network as they monitor a given environment. The approach assumes robots are tasked to patrol a set of perimeters in […]

Learning Decentralized Controllers with Graph Neural Networks

A recent paper by members of the DCIST alliance develops a method for distributed control of large networks of mobile robots with interacting dynamics and sparsely available communications. Their approach is to learn local controllers that require only local information and communications at test time by imitating the policy of centralized controllers using global information […]
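
The core idea, that each robot computes its control from its own observations plus information aggregated from its communication neighbors, can be sketched as a single graph-convolution-style layer. This is a generic illustration, not the architecture or training procedure from the paper; in the paper's setting the weights would be learned by imitating a centralized expert controller.

    # One graph-convolution-style layer: each robot combines its own features
    # with those of its communication neighbors. Generic sketch only.
    import numpy as np

    rng = np.random.default_rng(0)
    n_robots, feat_dim, ctrl_dim = 5, 8, 2

    A = np.zeros((n_robots, n_robots))          # communication graph (here a ring)
    for i in range(n_robots):
        A[i, (i - 1) % n_robots] = A[i, (i + 1) % n_robots] = 1.0

    X = rng.normal(size=(n_robots, feat_dim))   # each robot's local observation
    W_self  = rng.normal(scale=0.1, size=(feat_dim, ctrl_dim))
    W_neigh = rng.normal(scale=0.1, size=(feat_dim, ctrl_dim))

    # Robot i only needs X[i] and X[j] for its neighbors j -- no global information.
    U = np.tanh(X @ W_self + (A @ X) @ W_neigh)
    print(U.shape)                               # (n_robots, ctrl_dim) control outputs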