Google is slowly becoming (if it is not already) the go-to place for everyone who needs "smart" solutions to problems. Now, researchers at Google's DeepMind may have found a way to make their AI even smarter.

Recent developments at Google's DeepMind point toward a new deep-learning algorithm. This lets their artificial intelligence system recognize objects from a single example.

Google DeepMind is the U.K.-based subsidiary of Alphabet, the parent company of Google. It serves as Google's not-so-secret lab for AI research, applying machine learning to projects ranging from translation to map guidance.

DeepMind is now capable of recognizing objects in images, handwriting, and even language through this "one-shot learning" algorithm.

This is a remarkable replication of the human ability to recognize objects after seeing them only once or twice. Machines previously needed thousands of examples to become familiar with an object or a word.

That approach is typically time-consuming and expensive. For instance, the AI in a driverless car needs to study thousands of cars before it can work, and it would be impractical for a robot to spend countless hours navigating an unfamiliar home before becoming familiar with it.

But this time, it seems Google's DeepMind has found a way around it.

According to Technology Review, Google DeepMind's Oriol Vinyals found a solution by adding a "memory component" to DeepMind's system: a neural network that recognizes things by adjusting the sensitivity of its layers of interconnected components.

Like neurons in the brain, these components typically need to see huge numbers of images before the connections between them take shape.

Curious readers can browse this kind of training database on a site called ImageNet. The system still needs to analyze hundreds of images up front, but it can then learn what a new object is by recognizing what makes it different from the others.

For instance, the algorithm was able to recognize images of dogs after seeing just one example.
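The core idea of comparing a new image against a small set of known examples can be illustrated with a simplified sketch. This is not DeepMind's actual matching-network algorithm, just a toy nearest-neighbor classifier over made-up embedding vectors; the function names and numbers are purely illustrative, and in a real system the embeddings would come from a trained neural network.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def one_shot_classify(query, support_set):
    """Label a query by its most similar labelled example.

    support_set: list of (embedding, label) pairs,
    with just one labelled example per class.
    """
    return max(support_set, key=lambda pair: cosine(query, pair[0]))[1]

# Toy "one example per class" support set.
support = [([1.0, 0.1], "dog"),
           ([0.1, 1.0], "cat")]

print(one_shot_classify([0.9, 0.2], support))  # prints "dog"
```

The point of the sketch is that classification reduces to a similarity lookup against stored examples, which is why a single labelled instance per class can be enough.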

Such a system would be especially useful to Google, as it could finally learn quickly what a new search term means.

While other one-shot learning systems have been developed in the past, they are not as compatible with deep-learning systems as Google DeepMind's. Deep-learning systems are, however, becoming more capable thanks to memory mechanisms. According to another Technology Review article, Google DeepMind recently achieved a feat that gives it a "flexible" memory.

This time, Google DeepMind's system is also capable of interpreting new data on the fly, without additional input. This helps it navigate subway systems just by analyzing simple maps.