Google Chases General Intelligence With New AI That Has a Memory

Shelly Fan in Singularity Hub:

Humans are exceptionally good at transferring old skills to new problems. Machines, despite all their recent wins against humans, aren't. This is partly due to how they're trained: artificial neural networks like those built by Google's DeepMind learn to master a single task and call it quits. To learn a new task, the network has to reset, wiping out previous memories and starting again from scratch.

This phenomenon, quite aptly dubbed “catastrophic forgetting,” condemns our AIs to be one-trick ponies.

Now, taking inspiration from the hippocampus, our brain’s memory storage system, researchers at DeepMind and Imperial College London developed an algorithm that allows a program to learn one task after another, using the knowledge it gained along the way.

When challenged with a slew of Atari games, the neural network flexibly adapted its strategy and mastered each game, while conventional, memory-less algorithms faltered.
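The algorithm described here is DeepMind's elastic weight consolidation, which slows learning on the weights that mattered most for earlier tasks rather than letting a new task overwrite them. A minimal sketch of that idea, with illustrative numbers that are not from the paper:

```python
import numpy as np

def ewc_penalty(theta, theta_old, fisher, lam=1.0):
    """Quadratic penalty anchoring weights that were important to old tasks.

    theta     -- current weights, while training on the new task
    theta_old -- weights learned on the previous task
    fisher    -- per-weight importance estimate (diagonal Fisher information)
    lam       -- how strongly old-task knowledge is protected (hypothetical value)
    """
    # Important weights (high fisher) pay a large cost for drifting away
    # from their old values; unimportant ones remain free to change.
    return (lam / 2.0) * np.sum(fisher * (theta - theta_old) ** 2)

# Toy example: weight 0 mattered a lot on the old task, weight 1 barely at all.
theta_old = np.array([1.0, -0.5, 2.0])
theta_new = np.array([1.2, 0.5, 2.0])
fisher = np.array([10.0, 0.1, 5.0])
print(ewc_penalty(theta_new, theta_old, fisher))  # prints 0.25
```

During training on a new task, this penalty is simply added to the new task's loss, so gradient descent trades off new learning against preserving old skills.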

More here. [Thanks to Ali Minai.]