Robots are steadily gaining popularity because of their potential to take over much of the work humans would rather not do, primarily tasks that are menial or dangerous. But what if they could learn together?

Engineers and designers know, however, that each of these tasks has to be programmed by hand before a robot can perform it. Interestingly, that manual coding may not be necessary for much longer. Advances in machine learning suggest it may not be long before robots can acquire the skills they need on their own.

According to the Google Research Blog, experiments are beginning to explore the possibility of forming a "computer hive mind" through which robots can learn from one another.

The Google Research Blog recounted three such experiments, each with its own intriguing prospects.

The first involves letting robots learn directly from raw experience. Each robot was tasked with opening a door, attempting solutions through trial and error. The results were pooled on a central server, and the set of commands that performed best was sent back to all of the robots.
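
To make the idea concrete, here is a minimal sketch in Python of how pooled trial-and-error learning might work. The names (try_door_opening, handle_angle) and the single-parameter "policy" are illustrative assumptions, not details from the article or Google's system: several simulated robots perturb a shared parameter, report their scores, and the best attempt is broadcast back to everyone.

```python
import random

# A minimal, hypothetical sketch of pooled trial-and-error learning.
# try_door_opening() and the single "handle_angle" parameter are stand-ins
# for a real robot controller; they do not come from the article.

def try_door_opening(params):
    # Toy "reward": how close the attempt is to an ideal handle angle the
    # robots do not know in advance. A real robot would report a measured
    # success score instead.
    ideal_angle = 0.7
    return -abs(params["handle_angle"] - ideal_angle)

def robot_trial(shared_params, noise=0.1):
    # Each robot perturbs the shared parameters a little and reports back.
    candidate = {
        "handle_angle": shared_params["handle_angle"] + random.uniform(-noise, noise)
    }
    return candidate, try_door_opening(candidate)

# Central server loop: pool attempts from several robots, keep the best
# set of commands, and send it back to every robot for the next round.
shared = {"handle_angle": 0.0}
for _ in range(20):
    trials = [robot_trial(shared) for _ in range(4)]  # four robots in parallel
    best_params, best_reward = max(trials, key=lambda t: t[1])
    if best_reward > try_door_opening(shared):
        shared = best_params

print("learned handle_angle:", round(shared["handle_angle"], 3))
```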

Another study involves building a predictive model of the world from the robots' experiences and using it to perform tasks. The robots were allowed to interact freely with objects on a table and to share what they observed. They then used the pooled data to "forecast" how that small environment would change in response to their actions.
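
A rough sketch of what such a shared predictive (forward) model could look like is below. The toy dynamics, the linear model, and the push-and-goal setup are assumptions made for illustration, not Google's actual method: the point is simply to fit a model that predicts the next state from the current state and action using pooled experience, then use its forecasts to choose an action.

```python
import numpy as np

# Hypothetical sketch of a shared forward model. The linear dynamics and
# the "push an object on a table" setup are illustrative assumptions.

rng = np.random.default_rng(0)

# Pooled experience from several robots: (state, action, next_state) logs.
states = rng.uniform(0.0, 1.0, size=(200, 2))    # object (x, y)
actions = rng.uniform(-0.1, 0.1, size=(200, 2))  # push (dx, dy)
next_states = states + actions                   # toy dynamics

# Fit a linear forward model: next_state ~ [state, action] @ W.
X = np.hstack([states, actions])
W, *_ = np.linalg.lstsq(X, next_states, rcond=None)

# "Forecast" the outcome of candidate pushes and pick the one whose
# predicted result lands closest to a goal position.
goal = np.array([0.8, 0.2])
current = np.array([0.5, 0.5])
candidates = rng.uniform(-0.1, 0.1, size=(50, 2))
inputs = np.hstack([np.tile(current, (50, 1)), candidates])
predicted = inputs @ W
best_push = candidates[np.argmin(np.linalg.norm(predicted - goal, axis=1))]

print("chosen push:", best_push.round(3))
```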

The last study proved to be the most effective: learning with the help of humans. Human guidance gives the robots context for the actions they perform, which allowed them to master the task at hand much more quickly.
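
One way to picture human-guided learning is behavior cloning from a handful of demonstrations, sketched below. The one-dimensional handle state, the demonstration values, and the nearest-neighbour policy are all illustrative assumptions rather than the method described in the article.

```python
import numpy as np

# Hypothetical sketch of human-guided learning via behavior cloning.
# The states, actions, and nearest-neighbour policy are illustrative only.

# Pooled human demonstrations: state = handle position, action = gripper motion.
demo_states = np.array([[0.1], [0.3], [0.5], [0.7], [0.9]])
demo_actions = np.array([[0.05], [0.15], [0.25], [0.35], [0.45]])

def cloned_policy(state):
    # Act as the human did in the most similar demonstrated state.
    idx = np.argmin(np.abs(demo_states - state).sum(axis=1))
    return demo_actions[idx]

# Every robot starts from this human-seeded policy instead of random
# trial and error, then refines it with its own experience.
print("action for handle at 0.6:", cloned_policy(np.array([0.6])))
```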

Whatever their individual successes, these are just three of many experiments showing that robots learn more efficiently when they are allowed to "communicate" with one another.

While the approach is still limited, the capacity to form a "hive mind" could provide a big leap in robot intelligence. For instance, where a single robot might need years to gather a dataset on the scale of ImageNet's 1.5 million labeled examples, a group of robots sharing their experience could do it in weeks.

This has practical implications as well: robots deployed in different places and different settings could pool their experience to build a single, highly generalizable representation.