Team members: Haewon Jung, zora07@kaist.ac.kr

If you're interested in the project or have any questions before applying, feel free to reach out via email anytime!! 🤗


Why Is It Difficult for Robots to Move and Manipulate Simultaneously?

https://www.youtube.com/shorts/NpDddhPN8cM

Imagine you’re serving a tray of food at a restaurant. To place it on a low table, you walk over and start to crouch, but as you move, the tray tilts and the food almost slides off. You quickly adjust your arms to stabilize it.

Humans do this effortlessly, balancing, compensating, and coordinating the entire body in one smooth motion to achieve different objectives at the same time (legs moving, hands stabilizing).

<aside> 💡

But for humanoid robots, this is an enormous challenge.

Why? Because the robot’s upper and lower bodies have different goals — the arms must maintain precise manipulation, while the legs must move and balance dynamically (standing, kneeling, or walking).

A single control policy that tries to handle both often gets confused: movement for balance can disturb manipulation, and vice versa.

</aside>

In real-world tasks that require both, like carrying objects while walking or wiping a tall window, this conflict between moving and manipulating prevents robots from performing human-like activities that demand both at once.

Our goal is to teach humanoid robots to reconcile these conflicting objectives across different body parts, so that leg motion naturally supports arm precision, enabling smooth, whole-body behaviors that feel more human-like.


Demonstration Tasks

Dynamic Surface Cleaning

창문청소.mp4 (window cleaning)

Dynamic Food Serving

https://youtu.be/NJmsiyyq8tg


Challenge of the Task

Humanoid work does not consist of just a single task; it is multiple objectives that interact. Each body part has a different goal: the arms focus on precise manipulation, while the legs focus on locomotion and balance. And these goals constantly influence each other during motion.

Our research aims to address this challenge, enabling multiple agents (different body parts) to learn how and when to support each other naturally, without extensive reward engineering.


Research Objective

We seek to develop a multi-agent reinforcement learning framework that trains a separate control policy for each body part, where each policy coordinates with the other body parts' objectives under dynamic postural and contact constraints.
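To make the per-body-part structure concrete, here is a minimal sketch, not the project's actual method: two hypothetical agents (arms and legs) each hold their own small policy, and each one observes the other's most recent action as a simple coordination signal. All dimensions, the linear-policy form, and the `BodyPartAgent` name are illustrative assumptions.

```python
import numpy as np

class BodyPartAgent:
    """Hypothetical per-body-part policy (illustrative only): a linear map
    from [own observation, peer's last action] to this agent's action."""
    def __init__(self, obs_dim, act_dim, peer_act_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Random untrained weights; in practice these would be learned.
        self.W = rng.normal(scale=0.1, size=(act_dim, obs_dim + peer_act_dim))
        self.last_action = np.zeros(act_dim)

    def act(self, obs, peer_last_action):
        # Condition on the peer agent's last action so each body part
        # can account for what the other is doing.
        x = np.concatenate([obs, peer_last_action])
        self.last_action = np.tanh(self.W @ x)  # bounded joint commands
        return self.last_action

# Two agents: arms (precise manipulation) and legs (locomotion/balance).
arm = BodyPartAgent(obs_dim=6, act_dim=4, peer_act_dim=3, seed=1)
leg = BodyPartAgent(obs_dim=8, act_dim=3, peer_act_dim=4, seed=2)

obs_arm = np.ones(6)  # placeholder observation, e.g. tray pose error
obs_leg = np.ones(8)  # placeholder observation, e.g. base velocity, contacts

for _ in range(3):  # a few coordination steps per control cycle
    a_arm = arm.act(obs_arm, leg.last_action)
    a_leg = leg.act(obs_leg, arm.last_action)

print(a_arm.shape, a_leg.shape)
```

The design point is only the information flow: each policy's input includes the other body part's behavior, which is one simple way the "how and when to support each other" coordination could be wired before any learning is added.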