Researchers from Stanford University have built a prototype camera they believe can improve robotic vision and virtual reality by offering high-detail 4D images.
According to engineers at the California university, the camera captures depth, object transparency and other details in an image, along with a wide field of view of nearly 140 degrees.
The information it can gather could help improve vision for robots in the future.
The prototype, a proof of concept, is the first single-lens, wide-field-of-view light field camera, the university says.
Donald Dansereau, a post-doctoral fellow in electrical engineering and a key figure on the project, said: “We want to consider what would be the right camera for a robot that drives or delivers packages by air.
“We’re great at making cameras for humans but do robots need to see the way humans do? Probably not.”
The engineers said that currently, robots have to move around and analyse scenes from different perspectives in order to understand certain aspects of their environment. They claim, however, that their camera could gather most of this information in one image.
The researchers, who worked with colleagues from the University of California, San Diego, likened the new camera to a window and a traditional lens to a peephole.
“A 2D photo is like a peephole because you can’t move your head around to gain more information about depth, translucency or light scattering,” Dansereau said.
“Looking through a window, you can move and, as a result, identify features like shape, transparency and shininess.”
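That “window” is what a light field records: instead of a single 2D image, it stores many slightly offset views of the scene, and the parallax between them encodes depth. The toy NumPy sketch below illustrates the general idea only; it is not the Stanford design, and the grid sizes, the point-source scene and the `refocus` helper are invented for illustration. It builds a 4D light field of a single point and shows shift-and-sum refocusing, a standard light-field operation in which views aligned to a point’s parallax reinforce it, while other depths blur away.

```python
import numpy as np

# A toy 4D light field L(u, v, s, t): (u, v) index a 3x3 grid of
# sub-aperture views (angular dimensions), (s, t) index pixels
# within each 16x16 view (spatial dimensions).
U = V = 3
S = T = 16
lf = np.zeros((U, V, S, T))

# Synthesize a point source whose image shifts by `slope` pixels per
# unit of angular offset -- the parallax a light field records directly.
slope = 2
for u in range(U):
    for v in range(V):
        lf[u, v, 8 + slope * (u - 1), 8 + slope * (v - 1)] = 1.0

def refocus(lf, slope):
    """Shift-and-sum refocusing: undo each view's parallax for a chosen
    depth, then average. Points at that depth align and stay sharp;
    points at other depths smear across the average."""
    U, V = lf.shape[:2]
    out = np.zeros(lf.shape[2:])
    for u in range(U):
        for v in range(V):
            out += np.roll(lf[u, v],
                           (-slope * (u - 1), -slope * (v - 1)),
                           axis=(0, 1))
    return out / (U * V)

focused = refocus(lf, slope)   # matching depth: all views align, peak = 1.0
defocused = refocus(lf, 0)     # wrong depth: energy spreads over 9 spots
print(focused.max(), defocused.max())
```

A conventional 2D photo corresponds to keeping only one such average, so the per-view parallax, and with it the depth cue, is lost at capture time.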
Gordon Wetzstein, assistant professor of electrical engineering at Stanford, said the concept could have a big impact on artificial intelligence and robotics in the future.
“It could enable various types of artificially intelligent technology to understand how far away objects are, whether they’re moving and what they’re made of,” he said.
“This system could be helpful in any situation where you have limited space and you want the computer to understand the entire world around it.”