Mechanism Design for Control-theoretic Objectives
Principal Investigator:
Vijay Gupta
Vijay Gupta, Mostafa M. Shibl
Abstract
The design of local control laws for the individual agents in a multiagent system is crucial to ensuring that the emergent global behavior is desirable with respect to a given system-level objective. Specifically, we derive a methodology for designing local agent objective functions that guarantees, first, that the resulting game has an inherent structure that can be exploited in distributed learning, such as that of a Markov potential game, and, second, that the equilibria of the resulting game coincide with the optimizers of the system-level objective. Consequently, any distributed learning algorithm that guarantees convergence to an equilibrium for the obtained game structure can be used to complete the control design. The work thus leverages multi-agent reinforcement learning algorithms, such as policy gradient methods, to control dynamical systems through game-theoretic approaches.
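As a point of reference only (the abstract does not specify the construction), the LaTeX sketch below records the standard marginal-contribution, or wonderful-life, utility design for a static game with system objective $G$. The symbols $G$, $U_i$, $a_i$, and the baseline action $a_i^{0}$ are introduced here purely for illustration and are not taken from the abstract; the Markov potential game setting referenced above is the dynamic analogue of this static construction.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% A minimal sketch, not the methodology of the proposal itself: the
% marginal-contribution ("wonderful life") utility is one standard way to make
% the system objective G an exact potential of the induced static game.
Let $G(a_1,\dots,a_n)$ be the system-level objective over joint actions and let
$a_i^{0}$ be a fixed baseline action for agent $i$. Assign each agent the local
objective
\begin{equation}
  U_i(a_i, a_{-i}) = G(a_i, a_{-i}) - G(a_i^{0}, a_{-i}).
\end{equation}
For any unilateral deviation $a_i \to a_i'$,
\begin{equation}
  U_i(a_i', a_{-i}) - U_i(a_i, a_{-i}) = G(a_i', a_{-i}) - G(a_i, a_{-i}),
\end{equation}
so the induced game is an exact potential game with potential $G$; in
particular, every maximizer of $G$ is a pure Nash equilibrium.
\end{document}

Constructions of this type illustrate the kind of equilibrium-optimizer alignment the abstract refers to, under which convergence of a distributed learning algorithm to an equilibrium also delivers a solution to the system-level control problem.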