Embodied AI uses Vision-Language-Action models to connect perception, language, and action, aiming to build general robots that can adapt, reason, and perform complex tasks in real-world environments.
Distributed reinforcement learning for scalable, decentralized multi-agent path finding in highly structured environments (e.g., Amazon fulfillment centers).
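A minimal sketch of this setting, assuming a grid-world abstraction (the grid layout, reward values, and conflict rule are all hypothetical stand-ins, not the group's actual environment): each agent independently proposes a move, and a simple vertex-conflict rule approximates real MAPF collision handling.

```python
import numpy as np

MOVES = {0: (0, 0), 1: (-1, 0), 2: (1, 0), 3: (0, -1), 4: (0, 1)}  # stay/N/S/W/E

def step(grid, positions, actions):
    """One synchronous step: returns new positions and per-agent rewards."""
    proposed = []
    for (r, c), a in zip(positions, actions):
        dr, dc = MOVES[a]
        nr, nc = r + dr, c + dc
        # Reject moves into walls or off the map; blocked agents stay put.
        if 0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1] and grid[nr, nc] == 0:
            proposed.append((nr, nc))
        else:
            proposed.append((r, c))
    # Vertex-conflict resolution: if two agents propose the same cell, both
    # stay where they were (swap conflicts are ignored in this sketch).
    final = []
    for i, p in enumerate(proposed):
        clash = any(p == q for j, q in enumerate(proposed) if j != i)
        final.append(positions[i] if clash else p)
    # Small per-step cost; a larger penalty when an agent could not move.
    rewards = [-0.1 if f == s else -0.01 for f, s in zip(final, positions)]
    return final, rewards

grid = np.zeros((8, 8), dtype=int)   # 0 = free, 1 = wall
grid[3, 2:6] = 1                     # a shelf-like obstacle row
positions, rewards = step(grid, [(0, 0), (7, 7)], actions=[2, 1])
```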
Communication learning, in which agents learn their communication and action policies simultaneously.
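A sketch of what such a joint policy might look like (PyTorch, with hypothetical layer sizes and names; not the group's actual architecture): one network produces both action logits and an outgoing message, so the two policies can be trained end-to-end from the same task reward.

```python
import torch
import torch.nn as nn

class CommActionPolicy(nn.Module):
    def __init__(self, obs_dim=32, msg_dim=8, n_actions=5, hidden=64):
        super().__init__()
        # Encode own observation together with messages received last step.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim + msg_dim, hidden), nn.ReLU(),
        )
        self.action_head = nn.Linear(hidden, n_actions)  # action logits
        self.message_head = nn.Linear(hidden, msg_dim)   # outgoing message

    def forward(self, obs, incoming_msg):
        h = self.encoder(torch.cat([obs, incoming_msg], dim=-1))
        return self.action_head(h), torch.tanh(self.message_head(h))

# Usage: agents exchange messages between timesteps.
policy = CommActionPolicy()
obs = torch.randn(2, 32)   # batch of two agents
msgs = torch.zeros(2, 8)   # no messages at t=0
logits, out_msgs = policy(obs, msgs)
actions = torch.distributions.Categorical(logits=logits).sample()
```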
Deployment of robots to conduct efficient exploration and search in complex environments.
Multi-robot search and monitoring of an area to locate potentially evasive targets, learning strategies in a mixed cooperative-competitive environment.
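A minimal sketch of the mixed cooperative-competitive reward structure, assuming a pursuit-evasion abstraction: searchers share a team reward for detection, while the evasive target is rewarded for staying hidden. The detection radius and reward magnitudes below are invented for illustration.

```python
import math

def rewards(searcher_positions, target_position, detect_radius=1.5):
    detected = any(
        math.dist(p, target_position) <= detect_radius
        for p in searcher_positions
    )
    team_reward = 10.0 if detected else -0.1    # shared by all searchers
    target_reward = -10.0 if detected else 0.1  # the adversary's incentive
    return team_reward, target_reward

team_r, target_r = rewards([(0.0, 0.0), (3.0, 4.0)], (3.5, 4.2))
```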
Decentralized multi-robot exploration in communication-constrained environments, ensuring adequate connectivity and adaptability under real-world conditions.
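One way the communication constraint can be made concrete, sketched here under the assumption of range-limited links and per-robot occupancy maps (both hypothetical): robots merge their map knowledge only when they are within communication range of each other.

```python
import math
import numpy as np

def maybe_merge(maps, positions, comm_range=5.0):
    """Pairwise merge of known cells between robots that can communicate."""
    n = len(maps)
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(positions[i], positions[j]) <= comm_range:
                known = ~np.isnan(maps[j])
                maps[i][known] = maps[j][known]  # j's knowledge flows to i
                known = ~np.isnan(maps[i])
                maps[j][known] = maps[i][known]  # and back to j
    return maps

maps = [np.full((4, 4), np.nan) for _ in range(2)]  # NaN = unexplored
maps[0][0, 0], maps[1][3, 3] = 0.0, 1.0             # each robot knows one cell
maps = maybe_merge(maps, positions=[(0.0, 0.0), (2.0, 3.0)])
```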
Distributed RL for junction-level traffic light phase control, as well as for decentralized control of connected and autonomous vehicles (CAVs) via communication learning.
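To make the phase-control formulation concrete, here is a minimal tabular Q-learning sketch, assuming a two-phase junction with queue lengths binned into a discrete state; the state encoding, constants, and reward signal are all hypothetical stand-ins.

```python
import random
from collections import defaultdict

Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.95, 0.1

def choose_phase(state, n_phases=2):
    """Epsilon-greedy choice of which phase gets green next."""
    if random.random() < eps:
        return random.randrange(n_phases)
    return max(range(n_phases), key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, n_phases=2):
    best_next = max(Q[(next_state, a)] for a in range(n_phases))
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One interaction: state = (binned queue on N-S, binned queue on E-W).
state = (3, 1)
action = choose_phase(state)
reward = -2.0            # e.g., negative total queue length after the phase
next_state = (2, 1)
update(state, action, reward, next_state)
```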
Extension of model-based skill learning to the multi-agent reinforcement learning setting.
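A sketch of the model-based ingredient in a multi-agent setting, under the assumption of a learned joint dynamics model (the architecture and dimensions below are invented): the model predicts the next joint state from the joint action, so candidate skills can be scored by imagined rollouts rather than real environment steps.

```python
import torch
import torch.nn as nn

class JointDynamicsModel(nn.Module):
    def __init__(self, n_agents=2, state_dim=6, action_dim=2, hidden=64):
        super().__init__()
        in_dim = state_dim + n_agents * action_dim
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, joint_action):
        # Predict a residual update to the joint state.
        return state + self.net(torch.cat([state, joint_action], dim=-1))

# Imagined rollout: evaluate a candidate skill (a short action sequence)
# without touching the real environment.
model = JointDynamicsModel()
state = torch.zeros(1, 6)
for joint_action in torch.randn(5, 1, 4):  # 5 steps, 2 agents x 2-dim actions
    state = model(state, joint_action)
```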
The project aims to exploit robotic manipulation capabilities to boost the performance of legged robots in both industrial and other real-world settings.