
Publication

Lévy noise promotes cooperation in the prisoner’s dilemma game with reinforcement learning

Uncertainties are ubiquitous in everyday life, and it is thus important to explore their effects on the evolution of cooperation. In this paper, the prisoner’s dilemma game with reinforcement learning subject to Lévy noise is studied. Specifically, diverse fluctuations mimicked by Lévy-distributed noise enter the payoff matrix of each player. At the same time, the self-regarding Q-learning algorithm is adopted as the strategy update rule, whereby each player learns the behavior that achieves the highest payoff.
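
As a rough illustration of this setup (not the authors' actual implementation), the sketch below couples a standard one-step Q-learning update with a prisoner's dilemma payoff perturbed by Lévy-stable noise. All parameter values, payoff entries, and the stability index are assumed placeholders.

```python
import numpy as np
from scipy.stats import levy_stable

# All parameter values below are illustrative assumptions, not taken from the paper.
ALPHA_LR = 0.1                      # learning rate of the Q-update
GAMMA = 0.9                         # discount factor
EPSILON = 0.02                      # exploration rate for epsilon-greedy selection
T, R, P, S = 1.2, 1.0, 0.0, -0.2    # example prisoner's dilemma payoffs

COOPERATE, DEFECT = 0, 1
STEPS = 10_000

def noisy_payoff(a_self, a_other, noise):
    """Payoff of the focal player with a noise term added to the matrix entry."""
    base = [[R, S], [T, P]][a_self][a_other]
    return base + noise

def choose_action(Q, state, rng):
    """Epsilon-greedy choice between cooperation and defection."""
    if rng.random() < EPSILON:
        return int(rng.integers(2))
    return int(Q[state].argmax())

rng = np.random.default_rng(0)
Q = np.zeros((2, 2))   # rows: current strategy (state), columns: next strategy (action)
state = COOPERATE
opponent = DEFECT

# Lévy-stable payoff perturbations (alpha=1.5, beta=0, scale=0.1 are assumptions).
noise_samples = levy_stable.rvs(1.5, 0.0, loc=0.0, scale=0.1,
                                size=STEPS, random_state=rng)

for step in range(STEPS):
    action = choose_action(Q, state, rng)
    reward = noisy_payoff(action, opponent, noise_samples[step])
    # Self-regarding one-step Q-learning update: the next state is the strategy just played.
    Q[state, action] = (1 - ALPHA_LR) * Q[state, action] \
        + ALPHA_LR * (reward + GAMMA * Q[action].max())
    state = action
    opponent = int(rng.integers(2))  # toy opponent playing at random, purely for illustration
```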

The results show that Lévy noise not only promotes the evolution of cooperation under reinforcement learning, but does so more effectively than Gaussian noise. We explain this with the iterative updating pattern of the self-regarding Q-learning algorithm, which has an accumulative effect on the noise entering the payoff matrix. Under Lévy noise, the Q-value of cooperative behavior becomes significantly larger than that of defective behavior when the current strategy is defection, which ultimately leads to the prevalence of cooperation; this effect is absent with Gaussian noise or without noise.
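
To see why Lévy noise behaves differently from Gaussian noise of the same scale, one can compare the tails of the two distributions, as in the short sketch below; the stability index alpha = 1.5 and the scale 0.1 are assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import levy_stable

# Compare the tail behaviour of Lévy-stable and Gaussian noise at the same scale.
rng = np.random.default_rng(1)
levy_noise = levy_stable.rvs(1.5, 0.0, loc=0.0, scale=0.1, size=100_000, random_state=rng)
gauss_noise = rng.normal(0.0, 0.1, size=100_000)

# Lévy noise occasionally produces very large payoff perturbations, which the
# iterative Q-update can accumulate into a lasting bias in the Q-values.
print("largest |Lévy| perturbation:    ", np.abs(levy_noise).max())
print("largest |Gaussian| perturbation:", np.abs(gauss_noise).max())
```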

This research thus unveils a particular positive role of Lévy noise in the evolutionary dynamics of social dilemmas.

L. Wang, D. Jia, L. Zhang, P. Zhu, M. Perc, L. Shi, Z. Wang, Lévy noise promotes cooperation in the prisoner’s dilemma game with reinforcement learning, Nonlinear Dynamics 108(2) (2022) 1837-1845.

