

The number of agents can be an effective curriculum variable for controlling the difficulty of multi-agent reinforcement learning (MARL) tasks. Existing work typically relies on manually defined curricula, such as linear schedules. We identify two potential flaws when applying existing reward-based automatic curriculum learning methods to MARL: (1) the expected episode return used to measure task difficulty has high variance; and (2) credit assignment difficulty can be exacerbated in tasks where increasing the number of agents yields higher returns, which is common in many MARL tasks. To address these issues, we propose to control the curriculum with a TD-error-based learning progress measure and to let the curriculum proceed from an initial context distribution to the final task-specific one. Because our approach maintains a distribution over the number of agents and measures learning progress rather than absolute performance, which often increases with the number of agents, it alleviates problem (2). Moreover, the learning progress measure naturally alleviates problem (1) by aggregating returns. On three challenging sparse-reward MARL benchmarks, our approach outperforms state-of-the-art baselines.
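To make the idea concrete, below is a minimal sketch of how a learning-progress-driven curriculum over the number of agents could be implemented. It assumes learning progress for each candidate agent count is estimated as a smoothed mean absolute TD error, and that contexts are sampled proportionally to that estimate while gradually annealing toward the target task. The class name, update rule, and annealing scheme are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np


class LearningProgressCurriculum:
    """Illustrative sketch: sample the number of agents according to a
    TD-error-based learning-progress signal (not the paper's exact method)."""

    def __init__(self, agent_counts, target_count, smoothing=0.9):
        self.agent_counts = list(agent_counts)  # candidate numbers of agents
        self.target_count = target_count        # final task-specific context
        self.smoothing = smoothing
        # Running estimate of mean |TD error| per candidate context.
        self.progress = {n: 1e-3 for n in self.agent_counts}

    def update(self, n_agents, td_errors):
        # Aggregate TD errors from a batch of rollouts with `n_agents` agents;
        # aggregation over many transitions reduces the variance issue (1).
        lp = float(np.mean(np.abs(td_errors)))
        old = self.progress[n_agents]
        self.progress[n_agents] = self.smoothing * old + (1 - self.smoothing) * lp

    def sample(self, anneal=0.0):
        # Sampling probabilities proportional to learning progress, mixed with
        # a point mass on the target context as training proceeds (anneal in [0, 1]).
        lp = np.array([self.progress[n] for n in self.agent_counts])
        probs = lp / lp.sum()
        target = np.array([float(n == self.target_count) for n in self.agent_counts])
        probs = (1 - anneal) * probs + anneal * target
        return int(np.random.choice(self.agent_counts, p=probs))
```

Because the sampling signal is learning progress rather than absolute return, a context that merely yields high returns (e.g., many agents covering few landmarks) is not favored once the policy stops improving on it.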
In the MPE Simple-Spread task, agents (blue circles) need to cover as many landmarks (red circles) as possible. With the number of landmarks fixed, the 20 agents shown on the right can easily complete the task and achieve higher returns than the 8 agents on the left. However, a larger number of agents exacerbates the credit assignment problem in policy learning.
Comparison on the Simple-Spread task, where the target task has 8 agents and 8 landmarks. The plots are averaged over 5 random seeds and the shaded areas denote 95% confidence intervals. The left figure shows evaluation returns on the target task with 8 agents. Note that the x-axis represents the number of samples collected from the environment, which is proportional to the number of agents. The middle figure shows the curricula generated by the different methods: SPMARL and SPRLM first generate more agents and then converge to the target of 8 agents, while ALPGMM and VACL keep generating more agents. The right figure shows the episode returns on the training tasks. ALPGMM achieves the highest training returns because it samples tasks with more than 14 agents.
Comparison on the 20-player XOR game, where each agent needs to output a different action to succeed. While the linear curriculum from few to many agents (Linear) and ALPGMM eventually reach the optimum, SPRLM and SPMARL converge faster.
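For readers unfamiliar with the task, the following is a minimal sketch of what such an n-player XOR-style game might look like. It assumes a single-step episode in which each of the n agents picks one of n discrete actions and the team receives a shared sparse reward of 1 only if all chosen actions are pairwise distinct; the exact action-space size and reward convention used in the experiments are assumptions here.

```python
import numpy as np


class XORGame:
    """Sketch of an n-player XOR-style matrix game (not the exact benchmark env).

    Assumption: a single-step episode where each agent picks one of n actions
    and the team gets a shared reward of 1 only if all actions are distinct.
    """

    def __init__(self, n_agents: int = 20):
        self.n_agents = n_agents
        self.n_actions = n_agents  # assumption: as many actions as agents

    def reset(self):
        # The game is stateless; return a dummy joint observation.
        return np.zeros(self.n_agents)

    def step(self, actions):
        # Shared sparse reward: success only when every agent acts differently.
        reward = float(len(set(actions)) == self.n_agents)
        done = True  # single-step (bandit-like) episode
        return np.zeros(self.n_agents), reward, done, {}


# A random joint policy almost never succeeds with 20 agents, which is why
# starting the curriculum from fewer agents makes the task learnable.
env = XORGame(n_agents=20)
env.reset()
obs, reward, done, info = env.step(np.random.randint(env.n_actions, size=env.n_agents))
```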
Comparison on SMACv2 tasks. From top to bottom, the tasks are Terran 5 vs. 5, Terran 6 vs. 6, Zerg 5 vs. 5, and Zerg 6 vs. 6. Across all four tasks, SPMARL achieves performance that is comparable to or better than all baseline methods.
Ablation of the VLB hyperparameter on SMACv2 Protoss tasks. From top to bottom, the tasks are Protoss 5 vs. 5 and Protoss 6 vs. 6. The results indicate that SPMARL performs robustly across a broad range of VLB values.
@inproceedings{zhao2025learning,
  title     = {Learning Progress Driven Multi-Agent Curriculum},
  author    = {Zhao, Wenshuai and Li, Zhiyuan and Pajarinen, Joni},
  booktitle = {Proceedings of the International Conference on Machine Learning},
  year      = {2025}
}