SHAQ: Incorporating Shapley Value Theory into Multi-Agent Q-Learning

Jianhong Wang, Yuan Zhang, Yunjie Gu, Tae-Kyun Kim

Research output: Contribution to conference › Paper › peer-review

Abstract

Value factorisation is a useful technique for multi-agent reinforcement learning (MARL) in the global reward game; however, its underlying mechanism is not yet fully understood. This paper studies a theoretical framework for value factorisation, with interpretability grounded in Shapley value theory. We generalise the Shapley value to the Markov convex game, yielding the Markov Shapley value (MSV), and apply it as a value factorisation method in the global reward game, which is enabled by the equivalence between the two games. Based on the properties of MSV, we derive the Shapley-Bellman optimality equation (SBOE) to evaluate the optimal MSV, which corresponds to an optimal joint deterministic policy. Furthermore, we propose the Shapley-Bellman operator (SBO), which is proved to solve the SBOE. With a stochastic approximation and some transformations, a new MARL algorithm called Shapley Q-learning (SHAQ) is established, whose implementation is guided by the theoretical results on the SBO and MSV. We also discuss the relationship between SHAQ and related value factorisation methods. In the experiments, SHAQ exhibits not only superior performance on all tasks but also interpretability that agrees with the theoretical analysis. The implementation of this paper is on this https URL.
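
For readers unfamiliar with the construction the abstract builds on, the classical Shapley value assigns each agent its coalition-averaged marginal contribution. A minimal sketch in LaTeX follows; the state-dependent variant is an illustrative reading of the paper's Markov Shapley value, and the symbol V_C(s) for a coalition value function is an assumption about notation, not a quotation from the paper:

\phi_i(v) = \sum_{C \subseteq \mathcal{N} \setminus \{i\}} \frac{|C|!\,(|\mathcal{N}|-|C|-1)!}{|\mathcal{N}|!} \left[ v(C \cup \{i\}) - v(C) \right]

The Markov Shapley value described in the abstract can then be read, roughly, as this sum applied per state, with coalition value functions in place of the static characteristic function:

\Phi_i(s) = \sum_{C \subseteq \mathcal{N} \setminus \{i\}} \frac{|C|!\,(|\mathcal{N}|-|C|-1)!}{|\mathcal{N}|!} \left[ V_{C \cup \{i\}}(s) - V_C(s) \right]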
Original language: English
Pages: 5941-5954
Number of pages: 14
Publication status: Published - 2022
Event: 36th Conference on Neural Information Processing Systems, NeurIPS 2022 - New Orleans, United States
Duration: 10 Dec 2022 – 16 Dec 2022

Conference

Conference: 36th Conference on Neural Information Processing Systems, NeurIPS 2022
Country/Territory: United States
Period: 10/12/22 – 16/12/22

Keywords

  • Multi-agent reinforcement learning
  • Game theory
  • Multi-agent coordination
  • Multi-agent systems
