Abstract

In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10 × 10 board, using TD(λ) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa(λ) agent with SiLU and dSiLU hidden units.
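The two activation functions named in the abstract can be sketched in a few lines of plain Python. The SiLU is the input multiplied by its sigmoid, z·σ(z), and the dSiLU is its derivative, σ(z)(1 + z(1 − σ(z))), which follows from the product rule; the function names below are illustrative, not from the paper's code.

```python
import math

def sigmoid(z):
    """Logistic sigmoid: sigma(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

def silu(z):
    """Sigmoid-weighted linear unit: the input times its sigmoid, z * sigma(z)."""
    return z * sigmoid(z)

def dsilu(z):
    """Derivative of the SiLU, used as an activation in its own right:
    d/dz [z * sigma(z)] = sigma(z) * (1 + z * (1 - sigma(z)))."""
    s = sigmoid(z)
    return s * (1.0 + z * (1.0 - s))
```

For large positive inputs the SiLU approaches the identity (like a ReLU), while for large negative inputs it approaches zero; the dSiLU is a bounded, sigmoid-like curve, with dsilu(0) = 0.5.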

Keywords

Sigmoid function, Artificial neural network, Reinforcement learning, Reinforcement, Computer science, Artificial intelligence, Function approximation, Function (biology), Mathematics, Psychology

MeSH Terms

Deep Learning; Neural Networks, Computer

Related Publications

Network In Network

Abstract: We propose a novel deep network structure called Network In Network (NIN) to enhance model discriminability for local patches within the receptive field. The conventional con...

2014, arXiv (Cornell University), 1037 citations

Publication Info

Year: 2018
Type: Article
Volume: 107
Pages: 3-11
Citations: 1643
Access: Closed

Citation Metrics

OpenAlex: 1643
Influential: 85
CrossRef: 1440

Cite This

Stefan Elfwing, Eiji Uchibe, Kenji Doya (2018). Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. Neural Networks, 107, 3-11. https://doi.org/10.1016/j.neunet.2017.12.012

Identifiers

DOI
10.1016/j.neunet.2017.12.012
PMID
29395652
arXiv
1702.03118
