Researchers from the Massachusetts Institute of Technology's Media Lab have developed a $500 3-D nano-camera capable of operating at the speed of light.
The camera uses "Time of Flight" technology, like that in Microsoft's second-generation Kinect device, in which an object's location is determined by the time it takes a light signal to reflect off a surface and return to the sensor.
The device improves on earlier time-of-flight cameras, however, in that it maintains accuracy in rain and fog and when imaging translucent objects.
Conventional "Time of Flight" cameras fire a light signal that bounces off an object and returns to hit the pixel, using the known speed of light to calculate the distance and thus the depth of the object being reflected.
Changing environments, semitransparent surfaces and edges can all create multiple reflections that confuse the signal.
In contrast, the new device uses an encoding technique borrowed from telecommunications to disentangle the distances traveled by each reflection.
"We use a new method that allows us to encode information in time," Ramesh Raskar, an associate professor of media arts and sciences of the Media Lab's Camera Culture group, said in a statement. "So when the data comes back, we can do calculations that are very common in the telecommunications world, to estimate different distances from the single signal."
The camera relies on a concept similar to the one used to sharpen otherwise blurry photographs, according to graduate student Ayush Bhandari.
"People with shaky hands tend to take blurry photographs with their cellphones because several shifted versions of the scene smear together," Bhandari said. "By placing some assumptions on the model -- for example that much of this blurring was caused by a jittery hand -- the image can be unsmeared to produce a sharper picture."
Among other things, the device could be used in medical imaging and in collision-avoidance detectors for cars, as well as to improve motion tracking and gesture recognition in interactive gaming.