Environmental Interactions by Autonomous Legged Robots

1. Scalable Whole-Body Control for Legged Mobile Manipulation

The majority of recent work in legged robotics has focused on traversing challenging/complex environments, often motivated by passive exploration goals such as mapping, inspection, or localization. As a result, legged robots still exhibit limited abilities to actively interact with their environment (e.g., turning a valve, picking up/placing down objects, or modifying their environment to improve traversability). However, these robots have unique potential to manipulate objects in dynamic, human-centric, or otherwise inaccessible environments.

This project aims to cover the three main categories of prehensile (grasping-based) manipulation tasks by legged robots: 1) typical manipulation of smaller objects using one or two locomotive legs, 2) handling larger objects with a dedicated robotic arm mounted on the robot’s body, and 3) cooperative manipulation of items by multiple robots.

To achieve these objectives, we will initially explore conventional techniques, drawing from and expanding upon existing work in model-based control (e.g., MPC). Subsequently, leveraging our recent experience and expertise in distributed deep RL for robotic control, we will develop learning-based approaches to create scalable whole-body controllers suitable for a wide variety of mobile manipulation tasks.

2. Learning-based Informative Footfall Planning

Autonomous legged robots are often tasked with navigating challenging terrains, such as forest areas, rocky slopes, or dry/underwater sandy surfaces, where one of the key challenges lies in enabling the robot to select and leverage efficient/safe ground contacts (footfalls). While existing footfall planning methods often search for contacts that can sustain locomotion, we note that interactions between the robot and its environment may also provide crucial information about the robot’s surroundings.

In particular, this project focuses on applications where a legged robot is tasked with locomoting over malleable ground with variable stiffness, where sustained locomotion is not the main concern. Instead, areas of higher stiffness may indicate the presence of objects of interest buried underground (e.g., unexploded ordnance, valuable ore, etc.). In this context, we propose using force/torque feedback from foot/leg sensors to construct a stiffness map of the terrain, and to drive an informative footfall planning strategy based on Multi-Agent Reinforcement Learning (MARL).
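As a rough illustration of the mapping side of this pipeline (the MARL planner itself is beyond the scope of a short sketch), the snippet below estimates per-footfall stiffness with a simple linear-spring contact model and fuses the samples into a grid map with a scalar Kalman-style update, then greedily picks the most uncertain candidate cell as the next informative footfall. All names, the grid representation, and the fusion/planning choices are illustrative assumptions, not the project's implementation.

```python
import numpy as np

def estimate_stiffness(force_n, sinkage_m):
    """Local terrain stiffness k = F / delta from foot force sensing and
    measured foot sinkage (assumed linear-spring ground model)."""
    return force_n / max(sinkage_m, 1e-6)

class StiffnessMap:
    """Grid map of per-cell stiffness estimates with uncertainties.
    Cells start with a high prior variance; each footfall sample shrinks it."""

    def __init__(self, shape, prior_var=1.0, meas_var=0.05):
        self.mean = np.zeros(shape)          # stiffness estimate per cell
        self.var = np.full(shape, prior_var) # estimate uncertainty per cell
        self.meas_var = meas_var             # assumed sensor noise variance

    def update(self, cell, k_measured):
        # Scalar Kalman-style fusion of the new stiffness sample.
        gain = self.var[cell] / (self.var[cell] + self.meas_var)
        self.mean[cell] += gain * (k_measured - self.mean[cell])
        self.var[cell] *= (1.0 - gain)

    def most_informative(self, candidate_cells):
        # Greedy informative planning: step where uncertainty is largest.
        return max(candidate_cells, key=lambda c: self.var[c])
```

In a multi-robot setting, each agent would maintain (or share) such a map while a learned policy trades off uncertainty reduction against locomotion cost; the greedy rule above is only a single-agent stand-in for that behavior.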


Junkai LU, Assistant Professor
Praveen ELANGO
Mukund BALA
Yaswanth GONNA
Assistant Professor, CWRU, USA