Abstract
Value factorisation is a useful technique for multi-agent reinforcement learning (MARL) in global reward games, but its underlying mechanism is not yet fully understood. This paper presents a theoretical framework that makes value factorisation interpretable via Shapley value theory. We generalise the Shapley value to Markov convex games, yielding the Markov Shapley value (MSV), and apply it as a value factorisation method in the global reward game, which follows from the equivalence between the two games. Based on the properties of MSV, we derive the Shapley-Bellman optimality equation (SBOE) for evaluating the optimal MSV, which corresponds to an optimal joint deterministic policy. Furthermore, we propose the Shapley-Bellman operator (SBO), which we prove solves the SBOE. Via stochastic approximation and some transformations, we establish a new MARL algorithm called Shapley Q-learning (SHAQ), whose implementation is guided by the theoretical results on SBO and MSV. We also discuss the relationship between SHAQ and related value factorisation methods. In experiments, SHAQ exhibits not only superior performance on all tasks but also interpretability that agrees with the theoretical analysis. The implementation of this paper is available at this https URL.
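For orientation, the classical (single-shot) Shapley value that MSV generalises assigns each agent its average marginal contribution over all coalitions it could join. The sketch below computes this exactly for a toy cooperative game; it is a minimal illustration of the underlying credit-assignment idea, not the paper's SHAQ algorithm, and the function names and toy payoffs are ours.

```python
from itertools import combinations
from math import factorial

def shapley_values(agents, coalition_value):
    """Exact classical Shapley value: each agent's weighted average
    marginal contribution over all coalitions not containing it."""
    n = len(agents)
    phi = {i: 0.0 for i in agents}
    for i in agents:
        others = [a for a in agents if a != i]
        for k in range(n):
            for C in combinations(others, k):
                # Weight = |C|! (n - |C| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                marginal = coalition_value(set(C) | {i}) - coalition_value(set(C))
                phi[i] += weight * marginal
    return phi

# Illustrative 3-agent global-reward game (payoffs are made up):
# the grand coalition earns 10, pairs earn 6, singletons earn 2.
v = lambda C: {0: 0, 1: 2, 2: 6, 3: 10}[len(C)]
print(shapley_values([0, 1, 2], v))  # symmetric game -> each gets 10/3
```

By the efficiency property, the factorised values sum to the grand-coalition payoff (here 10/3 each); this is the property that lets a global reward be decomposed into per-agent credit.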
Original language | English
---|---
Pages | 5941-5954
Number of pages | 14
Publication status | Published - 2022
Event | 36th Conference on Neural Information Processing Systems, NeurIPS 2022, New Orleans, United States. Duration: 10 Dec 2022 → 16 Dec 2022
Conference
Conference | 36th Conference on Neural Information Processing Systems, NeurIPS 2022 |
---|---
Country/Territory | United States |
Period | 10/12/22 → 16/12/22 |
Keywords
- Multi-agent reinforcement learning
- Game theory
- Multi-agent coordination
- Multi-agent systems