How can you make a computer program learn as quickly as, if not quicker than, a person? Give it the internet and wait to see what it does. Experts now want computers to use the web to figure things out for themselves.

Earlier this month, Nature World News wrote about some of the most successful imitators of humanity in the world - including a chatbot that fooled 10 out of 30 judges into thinking it was a 13-year-old Ukrainian boy named "Eugene Goostman."

Now, we're going to step away from the imitators to discuss programs that actually "learn" like us... sort of.

A team of computer scientists from the University of Washington (UW) recently presented some remarkable findings at the 2014 Institute of Electrical and Electronics Engineers International Conference on Robotics and Automation in Hong Kong.

"We're trying to create a method for a robot to seek help from the whole world when it's puzzled by something," Rajesh Rao, director of the Center for Sensorimotor Neural Engineering at the UW, said in a statement. "This is a way to go beyond just one-on-one interaction between a human and a robot by also learning from other humans around the world."

The team theorized that since the best-proven way for a robot to "learn" an action is to imitate humans, they could speed up the learning process by exposing the robot to hundreds or thousands of examples of human action in a very short amount of time via the internet.

To test this, the researchers built several dozen small models (a tree, a snake, a car, etc.) out of colored blocks. With just one example of the build per shape, the robot was largely unable to replicate the building procedure.

Then the team hired people on Amazon Mechanical Turk - a crowdsourcing site - to build similar models from a simple image. With more than 100 examples per shape, the robot was able to quickly discern the best approach to building each model based on its difficulty. The resulting builds were very similar to the originals - if slightly simpler - due to the limitations of the robot's own "hands."
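
To make the idea concrete, here is a minimal Python sketch of goal-based imitation from crowdsourced demonstrations. The data format, the feasible() constraint check and the block layouts are invented for illustration; they are assumptions, not the UW team's actual code or dataset.

```python
from collections import Counter

# Hypothetical demonstration data: each crowd worker's build is recorded as a
# sequence of (color, position) block placements. This format is an assumption
# made for the sketch, not the researchers' real data.
demonstrations = [
    (("green", (0, 0)), ("green", (0, 1)), ("brown", (0, -1))),  # one worker's "tree"
    (("green", (0, 0)), ("green", (0, 1)), ("brown", (0, -1))),  # another worker, same plan
    (("green", (0, 1)), ("green", (0, 0)), ("brown", (0, -1))),  # different order, same goal
]

def feasible(plan, max_blocks=3):
    """Stand-in for the robot's own limits (gripper reach, block count)."""
    return len(plan) <= max_blocks

def choose_goal_plan(demos):
    """Goal-based imitation, roughly: ignore *how* each person built the model,
    keep only the final configuration (the goal), then reproduce the most
    common goal the robot can actually achieve."""
    goals = [tuple(sorted(d)) for d in demos]      # order-insensitive end state
    for goal, _count in Counter(goals).most_common():
        if feasible(goal):
            return goal
    return None

print(choose_goal_plan(demonstrations))
```

With many crowd demonstrations, the most frequent feasible end state serves as the target, which is why more examples helped the robot settle on a buildable version of each shape.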

The researchers call this type of crowdsourced learning "goal-based imitation," and it's a startling first step toward artificial intelligence (AI) that needs very little initial programming - unlike programs such as Goostman, which require each and every response to be dictated by a lengthy set of rules and pre-programmed scenarios.

Learning Everything

And this block-building robot is not alone. In fact, another crowd-learning program lives on the same campus. The Allen Institute for Artificial Intelligence recently teamed up with UW researchers to craft LEVAN - short for "Learn EVerything about ANything" - a program that learns to recognize things from online image searches.

"It is all about discovering associations between textual and visual data," Ali Farhadi, a member of the LEVAN team, said in a statement. "The program learns to tightly couple rich sets of phrases with pixels in images. This means that it can recognize instances of specific concepts when it sees them."

For instance, LEVAN was initially coded to recognize a cat (the animal). Combing through thousands of image searches involving the word "cat," LEVAN learned to associate new words with different types of "cat." So far, LEVAN has processed 165,792 images of cats and has created 244 new subcategories and similarity rules.
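
As a rough illustration of the first step Farhadi describes - coupling phrases with a seed concept - here is a toy Python sketch that mines candidate "cat" subcategories from text. The captions, the threshold and the mine_subcategories() helper are hypothetical stand-ins for the web-scale n-gram and image-search data the real system works with.

```python
from collections import Counter
import re

# Invented captions standing in for the web text and image-search results LEVAN
# actually mines by the million.
captions = [
    "tomcat fighter jet takes off",
    "sno cat vehicle crossing the glacier",
    "grumpy cat meme collection",
    "bread cat photo goes viral",
    "sno cat vehicle for sale",
    "bread cat pictures everywhere",
]

def mine_subcategories(concept, texts, min_count=2):
    """Very rough sketch of the idea: find phrases that repeatedly co-occur
    with the concept word and treat each as a candidate subcategory. (The
    real system then learns a visual model for every surviving phrase.)"""
    phrases = Counter()
    for text in texts:
        words = re.findall(r"[a-z]+", text.lower())
        for i, w in enumerate(words):
            if concept in w and i > 0:
                phrases[f"{words[i - 1]} {concept}"] += 1
    return [p for p, c in phrases.items() if c >= min_count]

print(mine_subcategories("cat", captions))   # ['sno cat', 'bread cat']
```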

Now, not only does LEVAN know what a house cat is, it can also show you specific images of a Tomcat fighter jet, a "sno cat" vehicle, and even a "bread cat" - a bizarre internet phenomenon in which people post pictures of house cats with their faces shoved through slices of bread.

Whether we really want people teaching computers how to make a "bread cat" remains to be seen.

The Thing About Cats

Interestingly, Google has taken this kind of internet-based learning even further. Scientists from Google's mysterious "X lab" and researchers from Stanford University have crafted a program that can identify images of cats with 74.8 percent accuracy despite never having been told what a cat looks like to begin with.

"Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not," the team says in its paper, Building high-level features using large scale unsupervised learning, which it will present at the International Conference on Machine Learning in Edinburgh at the end of this month.

According to the paper, the researchers simply left the image-analysis software to identify basic concepts on its own, learning low-level visual cues called "edges" along the way and gradually combining them into higher-level patterns. However, as things stand, the program can reliably identify only people, cats and human body parts - among the most common images on the internet.
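
For readers curious what label-free feature learning can look like in practice, here is a very small Python sketch in the same spirit, though far simpler than the paper's large neural network. The random patches stand in for unlabeled video frames, and every name and number here is an illustrative assumption.

```python
import numpy as np

# No labels anywhere - just raw pixel patches. Random synthetic patches stand
# in for the millions of real video thumbnails the Google team used.
rng = np.random.default_rng(0)
patches = rng.standard_normal((10_000, 64))      # 10,000 flattened 8x8 patches

# Center the data and extract its principal components. On patches from real
# natural images, the strongest components come out looking like oriented edge
# detectors - the low-level "edges" the article mentions.
patches -= patches.mean(axis=0)
cov = patches.T @ patches / len(patches)
eigvals, eigvecs = np.linalg.eigh(cov)
edge_like_filters = eigvecs[:, ::-1][:, :16]     # keep the 16 strongest filters

# A new patch is then described by its responses to the learned filters;
# stacking layers of this kind (plus pooling and sparsity) is what lets higher
# layers respond to whole faces or cats without ever seeing a label.
responses = patches[:5] @ edge_like_filters
print(responses.shape)                           # (5, 16)
```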

Andrew Ng, an expert in "deep learning" and large-scale simulation of the human brain who was involved in the project, told Wired that they are still a very long way from crafting a program that can teach itself to identify anything.

"The idea is that instead of having teams of researchers trying to find out how to find edges, you instead throw a ton of data at the algorithm and you let the data speak [for itself]," he explained.

Unfortunately, Ng admitted, there are only so many subjects, like cats, with "tons" of internet data. Without a huge number of examples to draw parallels from, the program cannot conclusively learn anything new.

Turning Things Around

But what about people learning from crowd-sourced programs? That's what researchers from the University of Rochester and the University of California, Berkeley asked in 2012, when they crafted the program Chorus.

Unlike most web-integrated crowdsourcing programs of this decade, Chorus does not aim to use its data to learn. Instead, it reapplies the information it gathers to teach.

Much like the web-famous chatbot Cleverbot, Chorus uses crowdsourced responses from numerous real people to craft the most appropriate reply. However, instead of employing Cleverbot's strategy of saving those responses for future conversations, Chorus works in real time, providing the best available answers to people with real questions.
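
A bare-bones sketch of that kind of real-time aggregation might look like the Python below. The candidate replies, the vote threshold and the best_reply() helper are hypothetical; the real Chorus coordinates its workers in a richer way, but this captures the gist of filtering many candidate replies down to one.

```python
from collections import Counter

# Hypothetical candidate replies typed by crowd workers for a single user
# question - invented examples, not real Chorus data.
proposals = [
    "Try restarting the router, then reconnect.",
    "Restart the router and reconnect.",
    "Have you tried turning it off and on again?",
    "Try restarting the router, then reconnect.",
]
votes = Counter(proposals)

def best_reply(vote_counts, threshold=2):
    """Release a reply only once enough workers converge on it - a rough
    stand-in for filtering crowd answers into a single coherent voice."""
    reply, count = vote_counts.most_common(1)[0]
    return reply if count >= threshold else None

print(best_reply(votes))   # "Try restarting the router, then reconnect."
```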

In this sense, Chorus is not exactly artificial intelligence so much as an aggregator and moderator of answers, but it's hard to miss the benefits of having such a crowdsourced program around.

While Eugene Goostman and Cleverbot may seem like people, they rarely have the right answers for people looking for help.

"It shows how a crowd-powered system that is relatively simple can do something that AI has struggled to do for decades," Jeffrey Bigham, an assistant professor at the University of Rochester, and a member of the research team that created Chorus, told MIT's prestigious Technology Review.

According to a paper detailing how Chorus works, the researchers suspect that technology like this could completely revamp today's frustratingly convoluted automated support systems.

Imagine a day when talking to automated tech support actually means talking to the combined expertise of dozens of professionals at once. Not a bad future, right?

"What we're really interested in is when a crowd as a collective can do better than even a high-quality individual," Bigham explained.

Still, it has been two years since Chorus first debuted, and there are plenty of kinks to work out with such a concept. Expect a lot more waiting before the smartest tech support in the world becomes a reality.