I'm a PhD student in Electrical Engineering at the Aalto Robot Learning
Lab, Finland.
My research focuses on reinforcement learning, imitation learning, and their applications in robotics
and general decision-making.
I am also interested in robot perception, including 3D scene understanding and physics-based dynamics
modeling.
I have worked on several projects spanning imitation learning, multi-agent reinforcement
learning, curriculum learning, and model-based reinforcement learning.
Representative papers are highlighted. Please check my Google
Scholar for more details.
To overcome the relative overgeneralization problem in multi-agent learning, we propose
enabling optimism in multi-agent policy gradient methods by reshaping advantages.
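For intuition, here is a minimal sketch of one way optimism via advantage reshaping can be realized: negative advantages are down-weighted before the policy-gradient update, so an agent does not prematurely abandon actions that only look bad because its teammates are still exploring. The function name and the `optimism_coef` coefficient are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def reshape_advantages(advantages: torch.Tensor, optimism_coef: float = 0.1) -> torch.Tensor:
    """Down-weight negative advantages so each agent updates optimistically.

    Keeping positive advantages intact while shrinking negative ones counteracts
    relative overgeneralization. `optimism_coef` is an illustrative hyperparameter.
    """
    return torch.where(advantages >= 0, advantages, optimism_coef * advantages)

# Usage inside a standard policy-gradient loss:
# loss = -(log_probs * reshape_advantages(advantages).detach()).mean()
```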
We propose multi-agent correlated policy factorization under CTDE to overcome the
asymmetric learning failure that arises when individual policies are naively distilled from a joint policy.
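As a rough illustration of what a correlated factorization can look like, individual policies can be conditioned on a shared latent variable so that their product still captures correlated joint behaviour. The architecture below is a hypothetical toy I use for illustration, not the paper's model.

```python
import torch
import torch.nn as nn

class CorrelatedFactorizedPolicy(nn.Module):
    """Toy sketch: per-agent policies correlated through a shared latent variable z."""

    def __init__(self, n_agents, obs_dim, act_dim, z_dim=8):
        super().__init__()
        self.z_dim = z_dim
        self.agents = nn.ModuleList(
            nn.Sequential(nn.Linear(obs_dim + z_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))
            for _ in range(n_agents)
        )

    def forward(self, observations):
        # observations: (n_agents, batch, obs_dim)
        z = torch.randn(observations.shape[1], self.z_dim, device=observations.device)
        logits = [net(torch.cat([obs, z], dim=-1))  # each agent sees its own obs and the same z
                  for net, obs in zip(self.agents, observations)]
        return torch.stack(logits)                  # (n_agents, batch, act_dim) action logits
```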
We identify two flaws in existing reward-based curriculum learning algorithms when the number of
agents is generated as the curriculum in MARL.
Instead, we propose a learning progress metric as a new optimization objective, which generates
curricula that maximize the agents' learning progress.
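A minimal sketch of a learning-progress-driven curriculum is shown below, under the assumption that progress is estimated from the change in evaluation return for each candidate number of agents; the class, its names, and the smoothing constants are illustrative, not the paper's method.

```python
import numpy as np

class LearningProgressCurriculum:
    """Toy sketch: favour the number of agents whose recent learning progress is largest."""

    def __init__(self, n_agent_options, fast=0.3, slow=0.05):
        self.options = list(n_agent_options)
        self.fast_avg = np.zeros(len(self.options))  # quickly-adapting return estimate
        self.slow_avg = np.zeros(len(self.options))  # slowly-adapting return estimate
        self.fast, self.slow = fast, slow

    def update(self, task_idx, episode_return):
        # Running averages of the evaluation return for this task (number of agents).
        self.fast_avg[task_idx] += self.fast * (episode_return - self.fast_avg[task_idx])
        self.slow_avg[task_idx] += self.slow * (episode_return - self.slow_avg[task_idx])

    def sample_task(self):
        # Learning progress = gap between fast and slow averages; sample tasks still improving.
        progress = np.abs(self.fast_avg - self.slow_avg)
        probs = (progress + 1e-6) / (progress + 1e-6).sum()
        return np.random.choice(len(self.options), p=probs)
```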
We propose a bi-level optimization framework to address the issue of physically infeasible motion
data in humanoid imitation learning.
The method alternates between optimizing the robot's policy and modifying the reference motions,
using latent-space regularization to preserve the original motion patterns.
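A schematic of the alternating structure might look like the following; `decode_fn` and `tracking_loss_fn` are assumed callables I introduce for illustration, and the toy is a sketch of the bi-level idea rather than the actual implementation.

```python
def bilevel_step(policy, policy_opt, latents, latent_opt, latents_init,
                 decode_fn, tracking_loss_fn, reg_weight=1.0):
    """One alternating step of a bi-level scheme (illustrative toy, PyTorch tensors assumed).

    Assumed callables: `decode_fn(latents)` maps motion latents to reference motions,
    and `tracking_loss_fn(policy, reference)` returns a differentiable tracking loss.
    """
    # Inner level: update the tracking policy against the current (fixed) references.
    reference = decode_fn(latents)
    policy_loss = tracking_loss_fn(policy, reference.detach())
    policy_opt.zero_grad()
    policy_loss.backward()
    policy_opt.step()

    # Outer level: adjust the reference motions so they become physically trackable,
    # while a latent-space penalty keeps them close to the original (infeasible) motions.
    reference = decode_fn(latents)
    feasibility_loss = tracking_loss_fn(policy, reference)
    latent_reg = reg_weight * (latents - latents_init).pow(2).mean()
    latent_loss = feasibility_loss + latent_reg
    latent_opt.zero_grad()
    latent_loss.backward()
    latent_opt.step()

    return policy_loss.item(), latent_loss.item()
```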
We show that in many multi-agent systems where agents are weakly coupled, partial observation can
still enable near-optimal decision making. Moreover, on a mobile robot manipulator, we show that
partial observation of agents can improve robustness to agent failure.
We propose a simple yet effective model-based reinforcement learning algorithm that relies only on a
latent dynamics model trained with latent temporal consistency.
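The core training signal can be sketched as follows, assuming an online encoder, a slowly updated target encoder, and a latent dynamics model; the tensor shapes and the MSE objective are my assumptions for illustration, not necessarily the paper's exact choices.

```python
import torch
import torch.nn.functional as F

def temporal_consistency_loss(encoder, target_encoder, dynamics, obs, actions):
    """Multi-step latent temporal-consistency loss (illustrative sketch, shapes assumed).

    obs:     (T+1, batch, obs_dim) observation sequence
    actions: (T,   batch, act_dim) actions between consecutive observations
    The online encoder and latent dynamics are trained to predict the target
    encoder's embedding of future observations; no observation reconstruction is used.
    """
    z = encoder(obs[0])                                   # initial latent state
    loss = 0.0
    for t in range(actions.shape[0]):
        z = dynamics(torch.cat([z, actions[t]], dim=-1))  # latent rollout, one step
        with torch.no_grad():
            target = target_encoder(obs[t + 1])           # slow-moving target embedding
        loss = loss + F.mse_loss(z, target)
    return loss / actions.shape[0]
```

In this kind of setup the target encoder is typically maintained as an exponential moving average of the online encoder, updated after every gradient step.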