{"id":60659,"date":"2021-08-08T01:49:55","date_gmt":"2021-08-07T16:49:55","guid":{"rendered":"https:\/\/smilegate.ai\/?p=60659"},"modified":"2021-08-08T01:53:15","modified_gmt":"2021-08-07T16:53:15","slug":"rl-toward-agi","status":"publish","type":"post","link":"https:\/\/smilegate.ai\/en\/2021\/08\/08\/rl-toward-agi\/","title":{"rendered":"Reinforcement learning aimed at AGI"},"content":{"rendered":"

[Prior Research Team Hyunwoo Choi]

In May, DeepMind published a reinforcement learning paper titled 'Reward is Enough'. Using the examples of a squirrel trying to increase its satiety and a kitchen robot trying to keep a kitchen clean, the authors claim that if an appropriate reward is defined, an agent can naturally acquire and use the various abilities associated with intelligence (cognition, memory, planning, movement, etc.) and make decisions in the course of maximizing that reward.

\"Fig.<\/figure>\n\n\n\n

People often learn to judge situations and make decisions on their own through repeated trial and error in pursuit of a goal. Because reinforcement learning itself resembles this human learning principle, the claim that an appropriate reward system will play a key role in achieving AGI has some justification. However, perhaps because of the somewhat provocative title, there were also skeptical reactions, such as 'it is an assertion with no substance' and 'it is difficult to define a clear reward for real-world problems'.
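To make the thesis concrete: in reinforcement learning, a scalar reward is the only learning signal, and all behavior has to emerge from maximizing it. The toy two-armed bandit below is a generic textbook example of that idea (not code from the paper): the agent ends up preferring the better arm purely through trial and error on reward.

```python
# A toy illustration of the thesis: behaviour shaped by nothing but a scalar
# reward. This is a generic textbook bandit example, not code from the paper.
import random

random.seed(0)
payout = {"left": 0.3, "right": 0.7}   # hidden reward probabilities
value = {"left": 0.0, "right": 0.0}    # the agent's running reward estimates

for _ in range(1000):
    if random.random() < 0.1:
        arm = random.choice(list(value))       # explore
    else:
        arm = max(value, key=value.get)        # exploit the current belief
    reward = 1.0 if random.random() < payout[arm] else 0.0
    value[arm] += 0.1 * (reward - value[arm])  # learn from the reward alone

print(value)  # the estimate for "right" ends up higher: behaviour from reward
```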

XLand: A New Reinforcement Learning Environment

In fact, 'Reward is Enough' may have drawn more criticism precisely because it was a claim unaccompanied by concrete implementation results. As if conscious of this view, DeepMind recently announced experimental results from a new reinforcement learning environment called XLand.

Currently, AI research is moving away from fine-tuning a pre-trained model for a desired task and toward few-shot or even zero-shot learning, which achieves good performance with little data. Reinforcement learning, by contrast, seemed to have a fatal drawback: it could not reuse a pre-trained model for a new task and always had to learn from scratch. Against this background, the key result of this work is that reinforcement learning, too, can generalize knowledge it has acquired and apply it to new tasks. Clearly, if efficient learning of new tasks is possible, that is a form much closer to AGI.

\"\"<\/figure>\n\n\n\n

To demonstrate this, the authors built the XLand engine, which supports rigid-body physics simulation, and automatically generated diverse environments and goals for training agents. A goal may be as simple as having the agent go 'near the purple cube', or more complex, such as 'near the purple cube, or put the yellow sphere on the red floor'. There are also competitive goals, as in hide-and-seek: 'see your opponent and prevent them from seeing you'.
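XLand's actual goal representation has not been released, but a goal of this kind can be pictured as a boolean combination of relation predicates over the world state. The sketch below is a minimal illustration under that assumption; the `near` predicate, the `Predicate`/`Goal` classes, and the crude stand-in for "on" are my own illustrative constructions, not DeepMind's API.

```python
# A minimal sketch (an assumption, not DeepMind's API) of XLand-style goals as
# boolean combinations of relation predicates over the world state.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# A world state here is just a mapping from object name to (x, y, z) position.
State = Dict[str, Tuple[float, float, float]]

@dataclass
class Predicate:
    """An atomic condition over the state, e.g. near(agent, purple_cube)."""
    name: str
    check: Callable[[State], bool]

def near(a: str, b: str, radius: float = 1.0) -> Predicate:
    """True when objects a and b are within `radius` of each other."""
    def check(state: State) -> bool:
        (ax, ay, az), (bx, by, bz) = state[a], state[b]
        return ((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2) ** 0.5 <= radius
    return Predicate(f"near({a},{b})", check)

@dataclass
class Goal:
    """A goal in disjunctive normal form: AND-clauses joined by OR."""
    clauses: List[List[Predicate]]

    def reward(self, state: State) -> float:
        # Reward 1.0 while any clause is fully satisfied, else 0.0.
        satisfied = any(all(p.check(state) for p in clause)
                        for clause in self.clauses)
        return 1.0 if satisfied else 0.0

# "Be near the purple cube, OR put the yellow sphere on the red floor."
goal = Goal(clauses=[
    [near("agent", "purple_cube")],
    [near("yellow_sphere", "red_floor", radius=0.5)],  # crude stand-in for "on"
])
state = {"agent": (0.0, 0.0, 0.0), "purple_cube": (0.5, 0.0, 0.0),
         "yellow_sphere": (5.0, 0.0, 1.0), "red_floor": (5.0, 0.0, 0.0)}
print(goal.reward(state))  # -> 1.0 (the first clause holds)
```

A representation like this also hints at why goals would be easy to generate automatically at scale: sampling a goal reduces to sampling predicates and joining them with 'and' and 'or'.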

Each agent played approximately 700,000 unique games across 4,000 unique worlds within XLand, undergoing 200 billion training steps across 3.4 million unique tasks. The results showed that even on new tasks never seen during training, significant performance could be achieved with only about 30 minutes of fine-tuning, whereas the performance of a model trained from scratch was almost zero (a natural result, given that the baseline had already been trained for 200 billion steps).
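Since XLand and the trained agents are not publicly available, the shape of this comparison can only be gestured at with a deliberately tiny stand-in. The sketch below is my own toy construction, not DeepMind's setup: tabular Q-learning in a one-dimensional corridor, where one agent is pre-trained on several goals and then fine-tuned briefly on a held-out goal, while another gets only the same small budget from scratch.

```python
# Toy stand-in (not DeepMind's code) for the pre-train -> fine-tune comparison:
# tabular Q-learning in a corridor of N states; actions 0 = left, 1 = right.
import random

N = 20
ACTIONS = (0, 1)

def fresh_q():
    return {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def train(q, goal, episodes, eps=0.1, alpha=0.5, gamma=0.95):
    """Q-learning toward `goal`, with random start states; mutates q."""
    for _ in range(episodes):
        s = random.randrange(N)
        for _ in range(2 * N):
            a = random.choice(ACTIONS) if random.random() < eps \
                else max(ACTIONS, key=lambda a: q[(s, a)])
            s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == goal else 0.0
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS)
                                  - q[(s, a)])
            s = s2
            if s == goal:
                break
    return q

def greedy_reaches(q, goal, start=N // 2):
    """Whether the greedy policy reaches `goal` from `start` within 2*N steps."""
    s = start
    for _ in range(2 * N):
        a = max(ACTIONS, key=lambda a: q[(s, a)])
        s = max(0, min(N - 1, s + (1 if a == 1 else -1)))
        if s == goal:
            return True
    return False

random.seed(0)

# Pre-train one agent on several goals, then give both agents the same
# small fine-tuning budget on a held-out goal.
pretrained = fresh_q()
for g in (5, 12, 18):
    train(pretrained, g, episodes=150)

new_goal = 19
finetuned = train(dict(pretrained), new_goal, episodes=20)
scratch = train(fresh_q(), new_goal, episodes=20)

# Typically the pre-trained agent adapts within the small budget while the
# from-scratch agent does not, echoing the contrast described above.
print("fine-tuned reaches goal:", greedy_reaches(finetuned, new_goal))
print("from scratch reaches goal:", greedy_reaches(scratch, new_goal))
```

The point is only the shape of the contrast; the scale here is, of course, absurdly far from 4,000 worlds and 200 billion steps.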

\"\"<\/figure>\n\n\n\n

The authors argue that although the actions the agent takes to solve each problem may look coincidental, they are nevertheless consistent, indicating that the agent understands the reward system it is operating under.

Can reinforcement learning become AGI?

As mentioned above, the authors showed the potential for pre-trained reinforcement learning models to adapt quickly to a variety of novel tasks. However, this alone may not be enough to overcome the long-standing criticism that such experiments greatly simplify real-world problems.

The authors also introduced a few failure cases that were not learned well, which is quite interesting. First, when a crack (a trap) that had never appeared during pre-training showed up, the agent could not conceive that it might fall in, and so kept failing to reach the goal. The agent can solve the problem of reaching an upper floor by building a ramp out of surrounding objects, but it cannot solve a problem that requires building several ramps in succession (this seems to expose a limitation of reinforcement learning: the space of trial and error is simply too large). In addition, when two agents had different goals, neither understood the other's goal, so neither could achieve its own.

Even though reinforcement learning is a technique that imitates human learning patterns, it remains puzzling how humans can so easily solve problems that are this difficult for AI. The fact that 200 billion training steps were needed, even in a simulation environment where the agent has only a few action options, suggests that the limits on the road to AGI are still clear.

Still, you might think we have made real progress just by being (perhaps) the first to show how flexibly reinforcement learning models can be reused. I am very curious to see whether follow-up research can find clues for solving the failure cases 🙂

References

  • https://deepmind.com/research/publications/2021/Reward-is-Enough
  • https://deepmind.com/blog/article/generally-capable-agents-emerge-from-open-ended-play
