And that's it. No base rules or explanations of what the information meant were provided. Instead, the program was simply left to play the game again and again, learning which actions earn a higher score and making its own intuitive decisions based on that knowledge.

This is not unlike other learning AIs that Nature World News has previously discussed, such as LEVAN, a program that crowdsources information from the internet about words and pictures to - as its name implies - "Learn EVerything about ANything."

However, while LEVAN at least had to start with a base knowledge of English and some simple words, DQN started with only the simple rule that a higher score was good and a lower score was bad.
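For readers curious what "learning from the score alone" looks like in practice, here is a minimal Python sketch of the idea. It is plain tabular Q-learning on a made-up five-position walking game, not DeepMind's actual deep network, and everything in it (the toy_game_step function, the positions, the reward) is hypothetical; the only signal the program receives is that a bigger score is better.

```python
import random
from collections import defaultdict

# Hypothetical toy "game": walk along positions 0..4; action 1 steps right,
# action 0 steps left. Reaching position 4 ends the round and scores +1.
def toy_game_step(state, action):
    next_state = min(state + 1, 4) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == 4 else 0.0
    return next_state, reward, next_state == 4

Q = defaultdict(float)              # running estimate of how good each (position, action) pair is
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(500):          # play the game again and again
    state, done = 0, False
    for step in range(100):         # cap the round so it always ends
        # Mostly repeat whatever has earned the most so far; occasionally
        # (and whenever it has no preference yet) try something at random.
        if random.random() < epsilon or Q[(state, 0)] == Q[(state, 1)]:
            action = random.choice([0, 1])
        else:
            action = 0 if Q[(state, 0)] > Q[(state, 1)] else 1

        next_state, reward, done = toy_game_step(state, action)

        # Reinforcement: nudge the estimate toward the score actually earned,
        # plus a discounted guess at what the next position is worth.
        best_next = max(Q[(next_state, 0)], Q[(next_state, 1)])
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

        state = next_state
        if done:
            break
```

After a few hundred rounds, the table ends up favoring "step right" at every position, purely because that is what eventually earned points - the same score-chasing loop, in miniature, that DQN runs on Atari screens.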

Doing it Doggy Style

This approach is known as "reinforcement learning," and it wouldn't be unreasonable to compare it to how you train a dog. When a dog is first brought into a household, he knows absolutely nothing about his surroundings. He looks for social and behavioral cues, which owners reinforce positively, with a treat or petting, or negatively, with a scolding or a spritz of lemon water in the mouth.

"Dogs are almost information junkies," John Bradshaw, an expert at the University of Bristol in the UK, recently told BBC News when commenting on a study about canine trust.

Therefore, he added, "dogs whose owners are inconsistent to them often have behavioral disorders."

Thankfully for DQN, Atari games are anything but inconsistent, following the same rules every time, even if small adjustments have to be made from level to level. And because the program starts with no base knowledge, it has nothing to "unlearn" when it's set loose on an entirely new game.

"It is worth noting that the games in which DQN excels are extremely varied in their nature, from side-scrolling shooters (River Raid) to boxing games (Boxing) and three-dimensional car-racing games (Enduro)," the DeepMind team wrote in their paper.

However, things are about to get much more challenging. The team now hopes to introduce their program to the first generation of 3D games from the late 90s. This is where that last category - 3D racing games - gets much more complex. It's also what Hassabis is most excited about.

"If this can drive the car in a racing game, then potentially, with a few real tweaks, it should be able to drive a real car," he excitedly told reporters. "That's the ultimate aim."

I certainly wouldn't mind a self-driving car... just as long as it doesn't have to go through a few hundred crashes before learning the morning commute.

For more great nature science stories and general news, please visit our sister site, Headlines and Global News (HNGN).

- follow Brian on Twitter @BS_ButNoBS