Abstract

The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they harm performance, and whether they can generally be prevented. In this paper, we answer all these questions affirmatively. In particular, we first show that the recent DQN algorithm, which combines Q-learning with a deep neural network, suffers from substantial overestimations in some games in the Atari 2600 domain. We then show that the idea behind the Double Q-learning algorithm, which was introduced in a tabular setting, can be generalized to work with large-scale function approximation. We propose a specific adaptation to the DQN algorithm and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games.
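
To make the idea concrete, the sketch below contrasts the standard DQN target, in which the target network both selects and evaluates the greedy next action, with the Double DQN target described in the abstract, in which the online network selects the action and the target network evaluates it. This is a minimal NumPy illustration, not the authors' implementation; the function names, array shapes, and discount factor are assumptions chosen for the example.

import numpy as np

def dqn_target(q_target_next, rewards, dones, gamma=0.99):
    # Standard DQN target: the target network both selects and evaluates
    # the greedy next action, which tends to overestimate action values.
    max_q = q_target_next.max(axis=1)
    return rewards + gamma * (1.0 - dones) * max_q

def double_dqn_target(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    # Double DQN target: the online network selects the greedy action and
    # the target network evaluates it, decoupling selection from evaluation.
    greedy_actions = q_online_next.argmax(axis=1)
    eval_q = q_target_next[np.arange(len(greedy_actions)), greedy_actions]
    return rewards + gamma * (1.0 - dones) * eval_q

# Toy batch of 2 transitions with 3 actions each (values are made up).
q_online_next = np.array([[1.0, 2.0, 0.5],
                          [0.2, 0.1, 0.3]])
q_target_next = np.array([[3.0, 1.5, 0.7],
                          [0.4, 0.2, 0.6]])
rewards = np.array([1.0, 0.0])
dones = np.array([0.0, 1.0])

print(dqn_target(q_target_next, rewards, dones))    # ~[3.97, 0.0]
print(double_dqn_target(q_online_next, q_target_next, rewards, dones))    # ~[2.485, 0.0]

In this toy batch the Double DQN target for the first transition is lower because the action preferred by the online network is valued less optimistically by the target network, which is how decoupling selection from evaluation reduces overestimation.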

Keywords

Reinforcement learning, Computer science, Artificial intelligence, Artificial neural network, Function approximation, Q-learning, Algorithm, Machine learning, Mathematics

Publication Info

Year: 2016
Type: article
Volume: 30
Issue: 1
Pages: 2094-2100
Citations: 3514 (OpenAlex)
Access: Closed

Cite This

Hado van Hasselt, Arthur Guez, David Silver (2016). Deep Reinforcement Learning with Double Q-Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1), 2094-2100. https://doi.org/10.1609/aaai.v30i1.10295

Identifiers

DOI: 10.1609/aaai.v30i1.10295