Learning dynamics in games with stochastic perturbations

Kaniovski, Y.M. & Young, H.P. (1995). Learning dynamics in games with stochastic perturbations. Games and Economic Behavior, 11(2), 330-363. DOI: 10.1006/game.1995.1054.

Full text not available from this repository.

Abstract

Consider a generalization of fictitious play in which agents' choices are perturbed by incomplete information about what the other side has done, variability in their payoffs, and unexplained trembles. These perturbed best reply dynamics define a nonstationary Markov process on an infinite state space. It is shown, using results from stochastic approximation theory, that for 2 × 2 games it converges almost surely to a point that lies close to a stable Nash equilibrium, whether pure or mixed. This generalizes a result of Fudenberg and Kreps, who demonstrate convergence when the game has a unique mixed equilibrium.
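The sketch below illustrates the kind of perturbed best reply dynamics the abstract describes: each player tracks the empirical frequencies of the opponent's past play (as in fictitious play) and responds with a noisy best reply. The logit choice rule, the matching-pennies payoffs, and the parameter values are illustrative assumptions standing in for the paper's perturbations (incomplete information, payoff variability, trembles), not the authors' exact specification.

```python
import numpy as np

# Minimal sketch of perturbed fictitious play in a 2x2 game.
# The payoff matrices, the logit-style perturbation of the best reply,
# and all parameter values are illustrative assumptions.

rng = np.random.default_rng(0)

# A 2x2 game with a unique mixed equilibrium (matching-pennies style).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # row player's payoffs
B = -A                                      # column player's payoffs (zero-sum)

def perturbed_best_reply(payoffs, beliefs, beta=10.0):
    """Logit choice: a smooth stand-in for a best reply with random trembles."""
    u = payoffs @ beliefs                   # expected payoff of each action
    p = np.exp(beta * (u - u.max()))        # shift for numerical stability
    p /= p.sum()
    return rng.choice(2, p=p)

T = 50_000
counts_row = np.ones(2)                     # fictitious-play counts of past play
counts_col = np.ones(2)

for t in range(T):
    a = perturbed_best_reply(A, counts_col / counts_col.sum())
    b = perturbed_best_reply(B.T, counts_row / counts_row.sum())
    counts_row[a] += 1
    counts_col[b] += 1

print("empirical frequencies (row):", counts_row / counts_row.sum())
print("empirical frequencies (col):", counts_col / counts_col.sum())
```

Under these assumptions the empirical frequencies settle near the mixed equilibrium (1/2, 1/2), consistent with the almost-sure convergence to a neighborhood of a stable equilibrium stated in the abstract.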

Item Type: Article
Research Programs: Technological and Economic Dynamics (TED)
Bibliographic Reference: Games and Economic Behavior; 11:330-363 [1995]
Depositing User: IIASA Import
Date Deposited: 15 Jan 2016 02:05
Last Modified: 27 Aug 2021 17:15
URI: https://pure.iiasa.ac.at/4243
