Learning to signal: Analysis of a micro-level reinforcement model

Raffaele Argiento, R. Pemantle*, B. Skyrms, S. Volkov

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

53 Citations (Scopus)

Abstract

We consider the following signaling game. Nature plays first from the set {1, 2}. Player 1 (the Sender) sees this and plays from the set {A, B}. Player 2 (the Receiver) sees only Player 1's play and plays from the set {1, 2}. Both players win if Player 2's play equals Nature's play and lose otherwise. Players are told whether they have won or lost, and the game is repeated. An urn scheme for learning coordination in this game is as follows. Each node of the decision tree for Players 1 and 2 contains an urn with balls of two colors for the two possible decisions. Players make decisions by drawing from the appropriate urns. After a win, each ball that was drawn is reinforced by adding another of the same color to the urn. A number of equilibria are possible for this game other than the optimal ones. However, we show that the urn scheme achieves asymptotically optimal coordination.
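The urn scheme described in the abstract can be sketched as a short simulation. This is an illustrative implementation, not the authors' code: it assumes Nature plays uniformly at random, that all urns start with one ball of each color, and that reinforcement adds one ball per drawn ball after a win.

```python
import random

def simulate(rounds=10000, seed=0):
    """Simulate the two-state signaling game with urn reinforcement."""
    rng = random.Random(seed)
    # Sender: one urn per state of Nature; balls are the signals "A"/"B".
    sender = {1: {"A": 1, "B": 1}, 2: {"A": 1, "B": 1}}
    # Receiver: one urn per observed signal; balls are the acts 1/2.
    receiver = {"A": {1: 1, 2: 1}, "B": {1: 1, 2: 1}}

    def draw(urn):
        # Draw a ball with probability proportional to its count.
        colors, weights = zip(*urn.items())
        return rng.choices(colors, weights=weights)[0]

    wins = 0
    for _ in range(rounds):
        state = rng.choice([1, 2])        # Nature plays
        signal = draw(sender[state])      # Sender plays from the urn it saw
        act = draw(receiver[signal])      # Receiver sees only the signal
        if act == state:
            # Win: reinforce each ball that was drawn.
            sender[state][signal] += 1
            receiver[signal][act] += 1
            wins += 1
    return wins / rounds

print(f"empirical success rate: {simulate():.3f}")
```

Consistent with the paper's result that the scheme achieves asymptotically optimal coordination, the empirical success rate climbs well above the 1/2 baseline of uncoordinated play as the number of rounds grows.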
Original language: English
Pages (from-to): 373-390
Number of pages: 18
Journal: Stochastic Processes and their Applications
Volume: 119
Issue number: NA
DOI
Publication status: Published - 2009

All Science Journal Classification (ASJC) codes

  • Statistics and Probability
  • Modelling and Simulation
  • Applied Mathematics

Keywords

  • Urn model
  • Evolution
  • Probability
  • Stable
  • Stochastic approximation
  • Two-player game
  • Unstable
