Principal Investigator: Vaneet Aggarwal
Covering option discovery was developed to improve exploration in single-agent reinforcement learning with sparse reward signals, by connecting the most distant states in the embedding space given by the Fiedler vector of the state-transition graph. However, these option discovery methods cannot be directly extended to multi-agent scenarios, since the joint state space grows exponentially with the number of agents in the system. To alleviate this problem, we design efficient approaches that make multi-agent deep covering options scalable.
The proposed multi-agent exploration approaches can be used to learn coordinated behaviors, such as multiple robots picking up an object together or coordinating to move through doorways, without an explosion in complexity. Scalable algorithms are provided.
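The key to scalability, as the publication titles below indicate, is modeling the joint state-transition graph as a Kronecker product of per-agent factor graphs. A useful algebraic fact is that symmetric normalization commutes with the Kronecker product, so the spectrum of the joint graph can be assembled from the (small) factor spectra instead of decomposing the exponentially large joint matrix. The sketch below only verifies this fact on two tiny factor graphs; the actual option-construction details in the papers are more involved.

```python
import numpy as np

def norm_adj(A):
    # Symmetrically normalized adjacency: D^{-1/2} A D^{-1/2}.
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

# Two small factor graphs (3-state chains), one per agent.
A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
A2 = A1.copy()

# Joint transition graph of both agents acting simultaneously,
# modeled as the Kronecker product of the factor graphs.
A_joint = np.kron(A1, A2)

# Factor eigen-decompositions (each only 3x3).
mu1 = np.linalg.eigvalsh(norm_adj(A1))
mu2 = np.linalg.eigvalsh(norm_adj(A2))

# Joint eigenvalues are all pairwise products of factor eigenvalues
# (and joint eigenvectors are Kronecker products of factor eigenvectors),
# so the 9x9 joint matrix never needs to be decomposed directly.
joint_mu = np.sort(np.outer(mu1, mu2).ravel())
direct_mu = np.sort(np.linalg.eigvalsh(norm_adj(A_joint)))
print(np.allclose(joint_mu, direct_mu))  # True
```

With N agents of m states each, the joint graph has m^N nodes, but its spectrum decomposes into products of N factor spectra of size m, which is what makes multi-agent covering option discovery tractable.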
Jiayu Chen, Vaneet Aggarwal, and Tian Lan, "ODPP: A Unified Algorithm Framework for Unsupervised Option Discovery based on Determinantal Point Process," Dec 2022.
Jiayu Chen, Jingdi Chen, Tian Lan, and Vaneet Aggarwal, "Scalable Multi-agent Covering Option Discovery based on Kronecker Graphs," in Proc. NeurIPS, Dec 2022.
Jiayu Chen, Jingdi Chen, Tian Lan, and Vaneet Aggarwal, "Multi-agent Covering Option Discovery through Kronecker Product of Factor Graphs," in Proc. AAMAS, May 2022.
Jiayu Chen, Tian Lan, and Vaneet Aggarwal, "Hierarchical Adversarial Inverse Reinforcement Learning for Robotic Manipulation," in Proc. IEEE International Conference on Robotics and Automation (ICRA), May 2023.
Jiayu Chen, Jingdi Chen, Tian Lan, and Vaneet Aggarwal, "Multi-agent Covering Option Discovery based on Kronecker Product of Factor Graphs," accepted to IEEE Transactions on Artificial Intelligence, 2022.
Jiayu Chen, Dipesh Tamboli, Tian Lan, and Vaneet Aggarwal, "Multi-task Hierarchical Adversarial Inverse Reinforcement Learning," in Proc. ICML, Jul 2023.