Abstract

We propose a modular reinforcement learning architecture for nonlinear, nonstationary control tasks, which we call multiple model-based reinforcement learning (MMRL). The basic idea is to decompose a complex task into multiple domains in space and time based on the predictability of the environmental dynamics. The system is composed of multiple modules, each of which consists of a state prediction model and a reinforcement learning controller. The “responsibility signal,” which is given by the softmax function of the prediction errors, is used to weight the outputs of the multiple modules, as well as to gate the learning of the prediction models and the reinforcement learning controllers. We formulate MMRL for both the discrete-time, finite-state case and the continuous-time, continuous-state case. The performance of MMRL was demonstrated in the discrete case by a nonstationary hunting task in a grid world, and in the continuous case by a nonlinear, nonstationary control task of swinging up a pendulum with variable physical parameters.
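
As a rough illustration of the mechanism described above (a minimal sketch, not the authors' code): the responsibility signal is lambda_i = exp(-E_i / (2 sigma^2)) / sum_j exp(-E_j / (2 sigma^2)), where E_i is module i's squared prediction error. The Python below assumes a Gaussian prediction-error model; the function names and the scale parameter sigma are illustrative assumptions.

import numpy as np

# Responsibility signal: softmax of negative squared prediction errors.
# sigma (an assumed parameter here) sets how sharply modules compete.
def responsibilities(predictions, observed, sigma=1.0):
    errors = np.sum((predictions - observed) ** 2, axis=-1)  # E_i per module
    logits = -errors / (2.0 * sigma ** 2)
    logits -= logits.max()                 # subtract max for numerical stability
    w = np.exp(logits)
    return w / w.sum()                     # lambda_i, sums to 1 over modules

# Module controller outputs are mixed by responsibility; the same weights
# would also gate each module's prediction-model and controller updates.
def mmrl_output(module_actions, lam):
    return lam @ module_actions            # responsibility-weighted sum

# Example: three modules predicting the next 2-D state.
preds = np.array([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]])
x_next = np.array([1.0, 0.0])
lam = responsibilities(preds, x_next)      # module 0 gets the most weight
u = mmrl_output(np.array([[1.0], [0.0], [-1.0]]), lam)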

Keywords

Softmax function, Reinforcement learning, Computer science, State space, Task, Modular design, Nonlinear system, Controller, Artificial intelligence, Control theory, Artificial neural network, Control, Mathematics, Engineering

Publication Info

Year: 2002
Type: Article
Volume: 14
Issue: 6
Pages: 1347-1369
Citations: 474 (OpenAlex)
Access: Closed

Cite This

Kenji Doya, Kazuyuki Samejima, Ken-ichi Katagiri, et al. (2002). Multiple Model-Based Reinforcement Learning. Neural Computation, 14(6), 1347-1369. https://doi.org/10.1162/089976602753712972

Identifiers

DOI: 10.1162/089976602753712972