Emergence of discrete and abstract state representation through reinforcement learning in a continuous input task
Date
2013
Author
Sawatsubashi, Yoshito
Mohamad Faizal, Samsudin
Shibata, Katsunari
Abstract
"Concept" is a kind of discrete and abstract state representation, and is considered useful for efficient action planning. However, it is supposed to emerge in our brain as a parallel processing and learning system through learning based on a variety of experiences, and so it is difficult to be developed by hand-coding. In this paper, as a previous step of the "concept formation", it is investigated whether the discrete and abstract state representation is formed or not through learning in a task with multi-step state transitions using Actor-Q learning method and a recurrent neural network. After learning, an agent repeated a sequence two times, in which it pushed a button to open a door and moved to the next room, and finally arrived at the third room to get a reward. In two hidden neurons, discrete and abstract state representation not depending on the door opening pattern was observed. The result of another learning with two recurrent neural networks that are for Q-values and for Actors suggested that the state representation emerged to generate appropriate Q-values.
URI
http://link.springer.com/chapter/10.1007%2F978-3-642-37374-9_2
http://dspace.unimap.edu.my:80/dspace/handle/123456789/35395