Deep Mind Learns Memory
H/T Grizwald Grim
Originally shared by Ward Plunet
DeepMind's new algorithm adds 'memory' to AI
"Previously, we had a system that could learn to play any game, but it could only learn to play one game," James Kirkpatrick, a research scientist at DeepMind and the lead author of its new research paper, tells WIRED. "Here we are demonstrating a system that can learn to play several games one after the other". The work, published in the Proceedings of the National Academy of Sciences journal, explains how DeepMind's AI can learn in sequences using supervised learning and reinforcement learning tests. This is also explained in a blog post from the company. "The ability to learn tasks in succession without forgetting is a core component of biological and artificial intelligence," the computer scientists write in the paper. Kirkpatrick says a "significant shortcoming" in neural networks and artificial intelligence has been its inability to transfer what it has learned from one task to the next. The group says it has been able to show "continual learning" that's based on 'synaptic consolidation'. In the human brain, the process is described as "the basis of learning and memory".
http://www.wired.co.uk/article/deepmind-atari-learning-sequential-memory-ewc
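The "synaptic consolidation" idea in the paper is implemented as elastic weight consolidation (EWC, per the article's URL slug): when training on a new task, a quadratic penalty pulls each weight back toward its value after the old task, scaled by how important that weight was (its Fisher information). A minimal illustrative sketch of that penalty, with toy numbers chosen for illustration (not from the paper):

```python
import numpy as np

def ewc_penalty(params, old_params, fisher, lam=0.4):
    """EWC-style penalty: quadratic pull toward the previous task's
    parameters, weighted per-parameter by Fisher information so that
    weights important to the old task resist change the most."""
    return 0.5 * lam * np.sum(fisher * (params - old_params) ** 2)

# Toy example (hypothetical values): parameter 0 was important on the
# old task (high Fisher), parameter 1 was not, so moving parameter 0
# by the same amount incurs a much larger penalty.
old_params = np.array([1.0, 1.0])
fisher = np.array([10.0, 0.1])

moved_important = ewc_penalty(np.array([2.0, 1.0]), old_params, fisher)
moved_unimportant = ewc_penalty(np.array([1.0, 2.0]), old_params, fisher)
```

During training on the new task, this penalty is added to the new task's loss, so the network stays free to adjust weights the old task never relied on while "consolidating" the ones it did.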