Scientists at Cornell University's Personal Robotics Lab have developed a robot that uses a prediction algorithm to anticipate human actions and respond with helpful tasks, such as opening a refrigerator door or delivering and pouring a beer.
Call it one small beer for man, one giant leap for successful robotics engineering.
The scientists used a common piece of household technology as the robot's "eyes": a Microsoft Kinect 3-D camera, a $100 add-on peripheral for the Xbox 360 game console. (Microsoft, along with the U.S. Army Research Office and the Alfred P. Sloan Foundation, financed the project.)
Using the Kinect and a database of 3D videos, the robot can identify the activities it sees, consider all the possible uses for the objects in the scene and determine how those uses fit with everything else going on.
From millions of possible combinations, the robot narrows the data down to a handful of likely scenarios, such as eating, drinking, cleaning or putting away, and chooses the most probable based on what the camera sees. As the action continues, the robot keeps updating and refining its predictions.
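To give a rough sense of how that kind of anticipation could work, here is a minimal Python sketch of scoring candidate activities against incoming cues and refining the ranking frame by frame. The activity names, cue labels and likelihood numbers are hypothetical illustrations, not the Cornell team's actual model or code.

```python
# Conceptual sketch (not the Cornell code): rank candidate future
# activities by how well observed cues fit each one, then refine
# the ranking as new camera observations arrive.

CANDIDATE_ACTIVITIES = ["eating", "drinking", "cleaning", "putting_away"]

# Hypothetical prior: how plausible each activity is before any evidence.
priors = {a: 1.0 / len(CANDIDATE_ACTIVITIES) for a in CANDIDATE_ACTIVITIES}

# Hypothetical likelihoods: P(observed cue | activity).
# Seeing a hand reach toward a cup is strong evidence for "drinking".
LIKELIHOOD = {
    "hand_near_cup":    {"eating": 0.2,  "drinking": 0.7,  "cleaning": 0.05, "putting_away": 0.05},
    "hand_near_sponge": {"eating": 0.05, "drinking": 0.05, "cleaning": 0.8,  "putting_away": 0.1},
}

def update_beliefs(beliefs, cue):
    """One refinement step as a new cue is observed (simple Bayesian update)."""
    scored = {a: beliefs[a] * LIKELIHOOD[cue][a] for a in beliefs}
    total = sum(scored.values())
    return {a: s / total for a, s in scored.items()}

beliefs = dict(priors)
for cue in ["hand_near_cup", "hand_near_cup"]:  # cues from successive frames
    beliefs = update_beliefs(beliefs, cue)

most_probable = max(beliefs, key=beliefs.get)
print(most_probable, beliefs)  # "drinking" comes to dominate as evidence accumulates
```

The published system uses a richer probabilistic model over object affordances and human poses; the sketch only illustrates the "narrow down and refine over time" idea described above.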
"We extract the general principles of how people behave," said Ashutosh Saxena, a professor of computer science and co-author of a new study tied to the research. "Drinking coffee is a big activity, but there are several parts to it."
The robot builds a "vocabulary" of such small parts that it can put together in various ways to recognize a variety of big activities, he said.
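One way to picture that vocabulary is as reusable sub-activities that chain together into bigger activities. The short Python sketch below shows the idea with made-up sub-activity names and decompositions; it is an illustration of the concept, not the study's implementation.

```python
# Conceptual sketch (hypothetical names): represent big activities as
# sequences of smaller "vocabulary" sub-activities, then see which big
# activity best explains the sub-activities observed so far.

ACTIVITY_GRAMMAR = {
    "drinking_coffee": ["reach_mug", "grasp_mug", "lift_to_mouth", "place_mug"],
    "cleaning_table":  ["reach_sponge", "grasp_sponge", "wipe_surface"],
}

def match_score(observed, template):
    """Fraction of the template's sub-activities seen so far, in order."""
    idx = 0
    for step in observed:
        if idx < len(template) and step == template[idx]:
            idx += 1
    return idx / len(template)

observed_so_far = ["reach_mug", "grasp_mug"]
scores = {name: match_score(observed_so_far, steps)
          for name, steps in ACTIVITY_GRAMMAR.items()}
print(max(scores, key=scores.get))  # "drinking_coffee" best explains the stream
```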
The robot still takes longer to open a door or pour a beer than a person would take to do it unassisted, but successfully programming a robot to anticipate human actions and lend a hand is a feat in itself.
In tests, the robot made correct predictions 82 percent of the time when looking one second into the future, 71 percent of the time for three seconds and 57 percent for 10 seconds.
"Even though humans are predictable, they are only predictable part of the time," Saxena said. "The future would be to figure out how the robot plans its action. Right now we are almost hard-coding the responses, but there should be a way for the robot to learn how to respond."
A video of the robot in action is below.
As you can see, it's a great upgrade from other late-model beer-pouring robots.