Computing approximate Nash equilibria and robust best-responses using sampling. (English) Zbl 1235.91037

Summary: This article makes two contributions to decision-making in complex partially observable stochastic games. First, we apply two state-of-the-art search techniques that use Monte-Carlo sampling to the task of approximating a Nash equilibrium (NE) in such games, namely Monte-Carlo tree search (MCTS) and Monte-Carlo counterfactual regret minimization (MCCFR). MCTS has been proven to approximate a NE in perfect-information games. We show that the algorithm quickly finds a reasonably strong strategy (but not a NE) in a complex imperfect-information game, namely Poker. MCCFR, on the other hand, has theoretical NE convergence guarantees in such games. We apply MCCFR to Poker for the first time. Based on our experiments, we conclude that MCTS is a valid approach when one wants to learn reasonably strong strategies quickly, whereas MCCFR is the better choice when the quality of the strategy matters most.
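
To make the learning dynamics concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of regret matching, the update rule at the core of (MC)CFR, applied to rock-paper-scissors. In MCCFR the same update is applied at every information set, with counterfactual values estimated from sampled trajectories of the game tree rather than computed exactly.

```python
import random

ACTIONS = 3  # rock, paper, scissors
# PAYOFF[a][b]: row player's payoff when the row player picks a and the
# column player picks b (zero-sum: the column player receives the negation).
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def strategy_from_regret(regret):
    """Regret matching: play actions in proportion to positive regret."""
    positive = [max(r, 0.0) for r in regret]
    total = sum(positive)
    if total > 0.0:
        return [p / total for p in positive]
    return [1.0 / ACTIONS] * ACTIONS  # uniform if no positive regret yet

def train(iterations=100_000):
    regret = [[0.0] * ACTIONS for _ in range(2)]
    strategy_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strategies = [strategy_from_regret(regret[p]) for p in (0, 1)]
        acts = [random.choices(range(ACTIONS), weights=s)[0]
                for s in strategies]
        for p in (0, 1):
            opp = acts[1 - p]
            # Utility of each alternative action against the opponent's
            # sampled action, relative to what was actually obtained.
            def util(a):
                return PAYOFF[a][opp] if p == 0 else -PAYOFF[opp][a]
            realized = util(acts[p])
            for a in range(ACTIONS):
                regret[p][a] += util(a) - realized
                strategy_sum[p][a] += strategies[p][a]
    # The *average* strategy converges to the NE (uniform play here).
    return [[x / sum(row) for x in row] for row in strategy_sum]

if __name__ == "__main__":
    print(train())  # both players' averages approach [1/3, 1/3, 1/3]
```

MCTS drives the same kind of sampling differently: it selects actions by a bandit rule such as UCT and backs up sampled values, which (per the experiments summarized above) yields strong play quickly but without the equilibrium guarantee that regret minimization provides.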
Our second contribution starts from the observation that a NE strategy is not a best response against opponents who are not themselves playing a NE. We present Monte-Carlo restricted Nash response (MCRNR), a sample-based algorithm for the computation of restricted Nash strategies. These are robust best-response strategies that (1) exploit non-NE opponents more than playing a NE does and (2) are not (overly) exploitable by other strategies. MCRNR combines the advantages of two state-of-the-art algorithms, namely MCCFR and restricted Nash response (RNR), by sampling only the relevant parts of the game tree. We show that MCRNR learns faster than standard RNR in smaller games. We also show in Poker that MCRNR quickly learns robust best-response strategies, and that these strategies exploit opponents more than playing a NE does.
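
As a rough illustration of the restricted Nash response idea, here is a sketch on a matrix game (not the paper's MCRNR, which applies sampled counterfactual-regret updates to the full poker game tree): with some probability the opponent is forced to follow a fixed model, and otherwise it adapts freely, so a best response to this restricted opponent exploits the model while staying hedged. The model OPP_MODEL and the confidence parameter P_CONFIDENCE below are hypothetical values chosen for the example.

```python
import random

ACTIONS = 3  # rock, paper, scissors
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]          # row player's payoffs, zero-sum
OPP_MODEL = [0.6, 0.2, 0.2]    # hypothetical opponent model: overplays rock
P_CONFIDENCE = 0.8             # probability the opponent follows the model

def strategy_from_regret(regret):
    positive = [max(r, 0.0) for r in regret]
    total = sum(positive)
    return ([p / total for p in positive] if total > 0.0
            else [1.0 / ACTIONS] * ACTIONS)

def restricted_nash_response(iterations=200_000):
    row_regret = [0.0] * ACTIONS
    row_sum = [0.0] * ACTIONS
    opp_regret = [0.0] * ACTIONS
    for _ in range(iterations):
        row_strat = strategy_from_regret(row_regret)
        opp_strat = strategy_from_regret(opp_regret)
        row_act = random.choices(range(ACTIONS), weights=row_strat)[0]
        opp_free = random.choices(range(ACTIONS), weights=opp_strat)[0]
        # Restriction: with probability P_CONFIDENCE the opponent is forced
        # onto the fixed model; otherwise it plays its adaptive part.
        if random.random() < P_CONFIDENCE:
            opp_act = random.choices(range(ACTIONS), weights=OPP_MODEL)[0]
        else:
            opp_act = opp_free
        # Row player learns against the restricted (mixed) opponent.
        for a in range(ACTIONS):
            row_regret[a] += PAYOFF[a][opp_act] - PAYOFF[row_act][opp_act]
            row_sum[a] += row_strat[a]
        # The opponent's free part keeps best-responding to the row player.
        for a in range(ACTIONS):
            opp_regret[a] += -PAYOFF[row_act][a] + PAYOFF[row_act][opp_free]
    total = sum(row_sum)
    return [x / total for x in row_sum]

if __name__ == "__main__":
    # The average row strategy drifts away from the uniform NE to punish
    # the modeled rock bias, while staying hedged against adaptation.
    print(restricted_nash_response())
```

The confidence parameter trades off exploitation against exploitability: at 0 the computation recovers an ordinary NE, while at 1 it yields a (brittle) best response to the fixed model.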

MSC:

91A60 Probabilistic games; gambling
91A15 Stochastic games, stochastic differential games
65C05 Monte Carlo methods
Full Text: arXiv