Researchers at MIT have developed a monocular SLAM-aware object-recognition system that achieves considerably stronger recognition performance than classical object-recognition systems that operate on a frame-by-frame basis.
By incorporating several key ideas, including multi-view object proposals and efficient feature encoding, the system can robustly detect and recognize objects in its environment using a single RGB camera.
John Leonard’s group at MIT specializes in SLAM, or simultaneous localization and mapping, the technique whereby mobile autonomous robots map their environments and determine their locations. Leonard and graduate student Sudeep Pillai presented their paper at the Robotics: Science and Systems conference last week.
The system uses SLAM information to augment existing object-recognition algorithms. Its performance should thus continue to improve as computer-vision researchers develop better recognition software, and roboticists develop better SLAM software.
Ordinarily, object-recognition systems first try to identify the boundaries between objects. On the basis of a preliminary analysis of color transitions, they’ll divide an image into rectangular regions that probably contain objects of some sort. Then they’ll run a recognition algorithm on just the pixels inside each rectangle. To get a good result, a classical object-recognition system may have to redraw those rectangles thousands of times.
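To make that frame-by-frame pipeline concrete, here is a minimal sketch in Python, with a hypothetical tiling-based `propose_regions` heuristic and a placeholder classifier standing in for a real single-image detector:

```python
import numpy as np

def propose_regions(image, step=64, size=128):
    """Hypothetical proposal stage: tile the image with candidate boxes.
    Real proposal methods generate thousands of such boxes from low-level
    cues such as color transitions."""
    h, w = image.shape[:2]
    boxes = []
    for y in range(0, h - size + 1, step):
        for x in range(0, w - size + 1, step):
            boxes.append((x, y, x + size, y + size))
    return boxes

def classify_crop(crop):
    """Placeholder recognizer: returns a (label, score) pair.
    In practice this would be a trained single-image object classifier."""
    return "unknown", float(crop.mean()) / 255.0

def recognize_frame(image):
    """Run the recognizer independently on every proposed rectangle."""
    detections = []
    for (x0, y0, x1, y1) in propose_regions(image):
        label, score = classify_crop(image[y0:y1, x0:x1])
        if score > 0.5:
            detections.append(((x0, y0, x1, y1), label, score))
    return detections

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
print(len(recognize_frame(frame)), "candidate detections in one frame")
```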
From some perspectives, for instance, two objects standing next to each other might look like one, particularly if they’re similarly colored. The system would have to test the hypothesis that lumps them together, as well as hypotheses that treat them as separate.
Because a SLAM map is three-dimensional, however, it does a better job of distinguishing objects that are near each other than single-perspective analysis can.
The system devised by Pillai and Leonard uses the SLAM map to guide the segmentation of images captured by its camera before feeding them to the object-recognition algorithm. It thus wastes less time on spurious hypotheses.
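One way to picture how a 3-D map can guide segmentation (an illustrative sketch, not the authors’ actual implementation) is to project clusters of reconstructed map points into the current frame and use each cluster’s image footprint as an object proposal; the clusters, camera pose, and intrinsics below are assumed inputs:

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 world points into pixel coordinates with a pinhole model."""
    cam = (R @ points_3d.T + t.reshape(3, 1)).T   # world -> camera frame
    cam = cam[cam[:, 2] > 0]                      # keep points in front of the camera
    pix = (K @ cam.T).T
    return pix[:, :2] / pix[:, 2:3]

def proposals_from_map(clusters, K, R, t):
    """One bounding-box proposal per 3-D point cluster (i.e. per tentative object)."""
    boxes = []
    for pts in clusters:
        uv = project_points(pts, K, R, t)
        if len(uv) == 0:
            continue
        x0, y0 = uv.min(axis=0)
        x1, y1 = uv.max(axis=0)
        boxes.append((x0, y0, x1, y1))
    return boxes

# Assumed example inputs: camera intrinsics, identity pose, two point clusters.
K = np.array([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
clusters = [np.random.randn(50, 3) * 0.1 + np.array([0.0, 0.0, 2.0]),
            np.random.randn(50, 3) * 0.1 + np.array([0.5, 0.0, 2.5])]
print(proposals_from_map(clusters, K, R, t))
```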
More important, the SLAM data let the system correlate the segmentation of images captured from different perspectives. Analyzing image segments that likely depict the same objects from different angles improves the system’s performance.
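A simple way to exploit that correlation, sketched below under the assumption that the SLAM map already associates detections from different views with the same object ID, is to pool the per-view class scores for each map object before deciding on a label; this shows the general idea rather than the paper’s exact formulation:

```python
from collections import defaultdict
import numpy as np

def fuse_multiview_scores(per_view_detections):
    """per_view_detections: list of (object_id, {class: score}) pairs, where
    object_id ties detections from different viewpoints to one map object.
    Returns one fused class decision per object."""
    pooled = defaultdict(list)
    for object_id, scores in per_view_detections:
        pooled[object_id].append(scores)
    fused = {}
    for object_id, score_dicts in pooled.items():
        classes = set().union(*score_dicts)
        avg = {c: np.mean([s.get(c, 0.0) for s in score_dicts]) for c in classes}
        fused[object_id] = max(avg, key=avg.get)
    return fused

detections = [
    (1, {"mug": 0.4, "bowl": 0.5}),   # ambiguous from one angle
    (1, {"mug": 0.9, "bowl": 0.1}),   # clearer from another
    (2, {"laptop": 0.8}),
]
print(fuse_multiview_scores(detections))  # {1: 'mug', 2: 'laptop'}
```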
Using machine learning, other researchers have built object-recognition systems that act directly on detailed 3-D SLAM maps built from data captured by cameras, such as the Microsoft Kinect, that also make depth measurements. But unlike those systems, Pillai and Leonard’s system can exploit the vast body of research on object recognizers trained on single-perspective images captured by standard cameras.
Moreover, the performance of this system is already comparable to that of the systems that use depth information. And it’s much more reliable outdoors, where depth sensors like the Kinect’s, which depend on infrared light, are virtually useless.
One of the central challenges in SLAM is what roboticists call “loop closure.” As a robot builds a map of its environment, it may find itself somewhere it’s already been — entering a room, say, from a different door. The robot needs to be able to recognize previously visited locations, so that it can fuse mapping data acquired from different perspectives.
Object recognition could help with that problem. If a robot enters a room to find a conference table with a laptop, a coffee mug, and a notebook at one end of it, it could infer that it’s the same conference room where it previously identified a laptop, a coffee mug, and a notebook in close proximity.
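As a toy illustration of that intuition (purely hypothetical, and not part of the published system), two places could be flagged as a loop-closure candidate when the sets of object labels recognized there largely overlap:

```python
from collections import Counter

def object_overlap(labels_a, labels_b):
    """Jaccard-style overlap between the object labels seen at two places."""
    a, b = Counter(labels_a), Counter(labels_b)
    inter = sum((a & b).values())
    union = sum((a | b).values())
    return inter / union if union else 0.0

def is_loop_closure_candidate(labels_a, labels_b, threshold=0.6):
    """Flag two places as possibly the same room if their objects largely match."""
    return object_overlap(labels_a, labels_b) >= threshold

seen_earlier = ["laptop", "coffee mug", "notebook", "chair"]
seen_now = ["notebook", "laptop", "coffee mug"]
print(is_loop_closure_candidate(seen_earlier, seen_now))  # True (overlap = 0.75)
```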
“This work shows very promising results on how a robot can combine information observed from multiple viewpoints to achieve efficient and robust detection of objects.”