A new depth-sensing camera technology developed by CMU and the University of Toronto can capture 3-D information in bright sunlight and darkness, unlike conventional depth cameras.
Depth-sensing cameras, such as Microsoft’s Kinect controller for video games, have become widely used 3-D sensors. The new imaging technology addresses a major shortcoming of these cameras: their inability to work in bright light, especially sunlight.
The key is to gather only the bits of light the camera actually needs. The researchers created a mathematical model to help program these devices so the camera and its light source work together efficiently, eliminating extraneous light, or “noise,” that would otherwise wash out the signals needed to detect a scene’s contours.
The approach doesn’t use any new image-processing algorithms and requires no extra processing to eliminate the noise, because the noise is never collected in the first place. This is all done by the sensor.
One prototype based on this model synchronizes a laser projector with a common rolling-shutter camera, the type of camera used in most smartphones, so that the camera detects light only from points being illuminated by the laser as it scans across the scene. This not only lets the camera work under extremely bright light or amid strongly reflected or diffused light (it can capture the shape of a lightbulb that has been turned on, for instance, and see through smoke) but also makes the camera extremely energy efficient.
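To make the synchronization concrete, the sketch below simulates the principle in Python. It is an illustration, not the researchers' implementation; the scene values and photon counts are invented for the example.

```python
import numpy as np

# Minimal sketch of the synchronization idea (not the researchers' code):
# the projector sweeps a laser line down the scene one row per time slot,
# and the rolling shutter opens each sensor row only during that row's slot.
# All numbers below are assumptions chosen for illustration.

ROWS, COLS = 240, 320
rng = np.random.default_rng(0)
reflectance = rng.uniform(0.2, 1.0, size=(ROWS, COLS))  # toy scene
laser_photons = 50.0      # laser return per lit pixel, per slot
ambient_photons = 20.0    # ambient light per pixel, per slot

frame = np.zeros((ROWS, COLS))
for t in range(ROWS):
    lit = t  # projector and shutter share one clock: row t is lit right now
    # Only the lit row is exposing, so it records the laser return plus a
    # single slot's worth of ambient light; every other row's shutter is
    # closed and collects nothing during this slot.
    frame[lit] = reflectance[lit] * laser_photons + ambient_photons

# `frame` now holds the projected pattern nearly free of ambient light,
# even though the scene was bathed in sunlight the entire frame time.
```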
This combination of features could make this imaging technology suitable for many applications, including medical imaging, inspection of shiny parts, and sensing for robots used to explore the moon and planets. It could also be readily incorporated into smartphones.
The researchers present their findings at SIGGRAPH 2015, the International Conference on Computer Graphics and Interactive Techniques, in Los Angeles.
Depth cameras work by projecting a pattern of dots or lines over a scene. Depending on how these patterns are deformed or how much time it takes light to reflect back to the camera, it is possible to calculate the 3-D contours of the scene.
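For the pattern-deformation case, the depth calculation is standard structured-light triangulation. The sketch below shows the textbook relation; the calibration numbers are invented for illustration and are not figures from this work.

```python
# Standard triangulation for a projector-camera pair: with baseline b
# (meters), focal length f (pixels), and disparity d (pixels) between where
# a dot was projected and where the camera observes it, depth is
#
#     z = f * b / d
#
# (Time-of-flight systems instead use z = c * t / 2 for round-trip time t.)

def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (meters) of a projected dot from its observed disparity."""
    return f_px * baseline_m / disparity_px

# A dot shifted 40 px, with a 600 px focal length and a 7.5 cm baseline:
print(depth_from_disparity(600.0, 0.075, 40.0))   # ~1.125 m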
The problem is that these devices use compact projectors that operate at low power, so their faint patterns are washed out and become undetectable when the camera also captures the scene's much brighter ambient light.
In the prototype using a rolling-shutter camera, this process is accomplished by synchronizing the projector so that as the laser scans a particular plane, the camera accepts light only from that plane. Alternatively, if other camera hardware is used, the team’s mathematical framework can compute energy-efficient codes that optimize the amount of energy that reaches the camera.
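As a rough illustration of what such a framework optimizes over, the sketch below scores two capture schedules by how much laser signal each row collects relative to ambient light. The binary matrices, the scoring function, and the photon counts are simplifying assumptions for this example, not the team's actual formulation.

```python
import numpy as np

# Hedged sketch: describe a capture schedule over T time slots and N
# scanlines with two binary matrices. P[t, n] = 1 if the projector lights
# line n during slot t; E[t, n] = 1 if sensor row n is exposing during
# slot t. Laser signal is collected where both are 1; ambient light is
# collected wherever E is 1.

def score(P, E, laser=50.0, ambient=20.0):
    signal = laser * np.sum(P * E, axis=0)    # per-row laser photons
    noise = ambient * np.sum(E, axis=0)       # per-row ambient photons
    return (signal / noise).mean()            # mean signal-to-ambient ratio

N = 240
scan = np.eye(N)             # laser visits one scanline per slot
flood_E = np.ones((N, N))    # conventional: every row exposes all the time
sync_E = np.eye(N)           # synchronized: a row exposes only when lit

print("conventional signal:ambient =", score(scan, flood_E))  # ~0.01
print("synchronized signal:ambient =", score(scan, sync_E))   # 2.5
```

Both schedules deliver the same laser signal to each row; the synchronized one simply refuses to integrate ambient light the rest of the time, which is why no post-processing is needed to remove it.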
In addition to enabling Kinect-like devices to be used for videogames outdoors, the new approach could also be used for medical imaging, such as capturing skin structures that would otherwise be obscured when light diffuses as it enters the skin. Likewise, the system can see through smoke despite the light scattering that usually makes it impenetrable to cameras. Manufacturers could also use the system to look for anomalies in shiny or mirrored components.
William “Red” Whittaker, University Professor of Robotics at CMU, said the system offers a number of advantages for extraterrestrial robots. Because depth cameras actively illuminate scenes, they are suitable for use in darkness, such as inside craters. In polar regions of the moon, where the sun is always at a low angle, a vision system that can eliminate the glare is essential.
Depth cameras that can operate outdoors could be useful in automotive applications, such as maintaining spacing between self-driving cars that are “platooned” — following each other at close intervals.