dc.contributor.author | Sawatsubashi, Yoshito | |
dc.contributor.author | Mohamad Faizal, Samsudin | |
dc.contributor.author | Shibata, Katsunari | |
dc.date.accessioned | 2014-06-11T13:45:49Z | |
dc.date.available | 2014-06-11T13:45:49Z | |
dc.date.issued | 2013 | |
dc.identifier.citation | Advances in Intelligent Systems and Computing, vol. 208, 2013, pages 13-21 | en_US |
dc.identifier.isbn | 978-3-642-37373-2 | |
dc.identifier.issn | 2194-5357 | |
dc.identifier.uri | http://link.springer.com/chapter/10.1007%2F978-3-642-37374-9_2 | |
dc.identifier.uri | http://dspace.unimap.edu.my:80/dspace/handle/123456789/35395 | |
dc.description | Link to publisher's homepage at http://link.springer.com/ | en_US |
dc.description.abstract | A "concept" is a kind of discrete and abstract state representation, and is considered useful for efficient action planning. However, it is thought to emerge in the brain, a parallel processing and learning system, through learning from a variety of experiences, and so it is difficult to develop by hand-coding. In this paper, as a preliminary step toward "concept formation", we investigate whether a discrete and abstract state representation is formed through learning in a task with multi-step state transitions, using the Actor-Q learning method and a recurrent neural network. After learning, the agent twice repeated a sequence in which it pushed a button to open a door and moved to the next room, finally arriving at the third room to receive a reward. In two hidden neurons, a discrete and abstract state representation independent of the door-opening pattern was observed. The result of another learning run, with separate recurrent neural networks for the Q-values and for the Actors, suggested that the state representation emerged in order to generate appropriate Q-values. | en_US |
dc.language.iso | en | en_US |
dc.publisher | Springer-Verlag | en_US |
dc.subject | Action planning | en_US |
dc.subject | Concept formation | en_US |
dc.subject | Continuous input | en_US |
dc.subject | Hidden neurons | en_US |
dc.title | Emergence of discrete and abstract state representation through reinforcement learning in a continuous input task | en_US |
dc.type | Article | en_US |
dc.contributor.url | bashis8@yahoo.co.jp | en_US |
dc.contributor.url | ballack83@hotmail.co.jp | en_US |
dc.contributor.url | faizalsamsudin@unimap.edu.my | en_US |