
Publication

Reinforcement learning facilitates an optimal interaction intensity for cooperation

Our social interactions vary over time, and they depend on various factors that determine our preferences and goals, both in personal and professional terms. Research has shown that this adaptability of social ties plays an important role in promoting cooperation and prosocial behavior in general. Indeed, it is natural to assume that ties among cooperators would strengthen over time, while ties with defectors (non-cooperators) would eventually be severed.

Here we introduce reinforcement learning as a determinant of adaptive interaction intensity in social dilemmas and study how this translates into the structure of the social network and its propensity to sustain cooperation.

We merge the iterated prisoner's dilemma game with the Bush–Mosteller reinforcement learning model and show that a moderate switching dynamics of the interaction intensity is optimal for the evolution of cooperation.
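To illustrate the kind of learning rule involved, here is a minimal sketch of a Bush–Mosteller update in an iterated prisoner's dilemma. All parameter values (payoffs, aspiration level, learning rate) are illustrative assumptions, not the ones used in the paper, and the paper additionally couples this rule to adaptive interaction intensities on a network, which is omitted here.

```python
import random

# Illustrative prisoner's dilemma payoffs (T > R > P > S).
R, S, T, P = 3.0, 0.0, 5.0, 1.0
PAYOFF = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

def bush_mosteller_update(p_cooperate, payoff, aspiration=2.0, alpha=0.5):
    """One Bush-Mosteller step: move the cooperation probability toward 1
    when the payoff exceeds the aspiration level, and toward 0 otherwise.
    The stimulus is scaled into [-1, 1] by the largest payoff gap."""
    stimulus = (payoff - aspiration) / max(T - aspiration, aspiration - S)
    if stimulus >= 0:
        return p_cooperate + alpha * stimulus * (1.0 - p_cooperate)
    return p_cooperate + alpha * stimulus * p_cooperate

# Iterated game between two learners, both starting undecided.
p1 = p2 = 0.5
for _ in range(100):
    a1 = "C" if random.random() < p1 else "D"
    a2 = "C" if random.random() < p2 else "D"
    p1 = bush_mosteller_update(p1, PAYOFF[(a1, a2)])
    p2 = bush_mosteller_update(p2, PAYOFF[(a2, a1)])

print(p1, p2)  # current cooperation probabilities, always within [0, 1]
```

Because the scaled stimulus stays in [-1, 1], the update keeps each cooperation probability inside the unit interval without any clipping.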

Moreover, the results of Monte Carlo simulations are further supported by dynamical pair-approximation calculations. These observations show that reinforcement learning is sufficient for the emergence of optimal social interaction patterns that facilitate cooperation.

This in turn supports the social capital hypothesis with a minimal set of assumptions that guide the self-organization of our social fabric.

Z. Song, H. Guo, D. Jia, M. Perc, X. Li, Z. Wang, Reinforcement learning facilitates an optimal interaction intensity for cooperation, Neurocomputing 513 (2022) 104–113
