With the rapid advances in handheld devices and easy-to-use photo-editing applications, people have grown accustomed to snapping photos on their phones and tablets. Many of us are also getting savvier and more creative about how we share and post those photos.
New work from Facebook researchers now lets users turn the photos they take on their devices into 3D images within seconds. The team will demonstrate their end-to-end system for creating and viewing 3D photos at SIGGRAPH 2020. The conference, held virtually this year starting 17 August, gathers a diverse network of professionals who approach computer graphics and interactive techniques from different perspectives.
The 2D-to-3D photo technique has been available as a “photos feature” on Facebook since late 2018. Originally, users had to capture photos with a phone equipped with a dual-lens camera to take advantage of it. The Facebook team has now added an algorithm that automatically estimates depth from a single 2D input image, so the technique runs directly on any mobile device, extending the method beyond the Facebook app and removing the dual-lens camera requirement.
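Single-image depth estimation is the key step that removes the dual-lens requirement. As a rough illustration of the idea only, and not Facebook's actual model, the sketch below runs an off-the-shelf monocular depth network (MiDaS, loaded from torch.hub) on an ordinary photo; the file name photo.jpg is a placeholder.

```python
import cv2
import torch

# Illustration only: MiDaS is a publicly available monocular depth
# estimator, not Facebook's production network. It maps one RGB image
# to a per-pixel relative inverse depth map (higher value = closer).
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    pred = midas(transform(img))              # (1, H', W') relative inverse depth
    depth = torch.nn.functional.interpolate(  # resize back to the input resolution
        pred.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze().numpy()
```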
This advance makes 3D photo technology easily accessible for the first time to the many millions of people who use single-lens camera phones or tablets. It also allows everyone to experience decades-old family photos and other treasured images in a new way, by converting them to 3D.
Over the last century, photography has gone through several tech ‘upgrades’ that increased the level of immersion. Initially, all photos were black and white and grainy, then came color photography, and then digital photography brought us higher quality and better-resolution images. Finally, these days we have 3D photography, which makes photos feel a lot more alive and real.

The new framework gives users a more practical approach to 3D photography, addressing several design objectives: users can access the technology on their own mobile device; the real-time conversion from a 2D input image to 3D is seamless, requires no sophisticated photographic skill, and takes only a few seconds to process; and the method is robust enough to work on almost any photo, whether newly captured or taken long ago.
To refine the system, the researchers trained a convolutional neural network (CNN) on millions of pairs of public 3D images and their accompanying depth maps, and leveraged mobile-optimization techniques developed by Facebook AI. The framework also performs texture inpainting and geometry capture on the 2D input image to convert it into 3D, resulting in images that feel more vivid and lifelike. Each automated step runs directly on the user's mobile device and is optimized for a wide variety of makes and models, working within a device's limited memory and data-transfer capabilities. The best part? The 3D results are generated in a matter of seconds.
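To give a feel for the geometry-capture step, the sketch below shows one simple way a depth map can be lifted into a 3D mesh by back-projecting each pixel through an assumed pinhole camera. The function name, field-of-view value, and camera model are illustrative assumptions, not Facebook's implementation, and the sketch omits the inpainting of texture and geometry hidden behind foreground edges that the actual system performs.

```python
import numpy as np

def depth_to_mesh(depth, fov_deg=60.0):
    """Back-project a per-pixel depth map into a simple triangle mesh.

    depth: (H, W) array of positive depth values.
    Returns (vertices, faces): (H*W, 3) float array and (M, 3) index array.
    """
    h, w = depth.shape
    f = 0.5 * w / np.tan(np.radians(fov_deg) / 2)   # assumed pinhole focal length
    ys, xs = np.mgrid[0:h, 0:w]
    # Back-project pixel (x, y) with its depth to camera-space (X, Y, Z).
    X = (xs - w / 2) * depth / f
    Y = (ys - h / 2) * depth / f
    vertices = np.stack([X, Y, depth], axis=-1).reshape(-1, 3)

    # Two triangles per pixel quad, indexing into the flattened vertex grid.
    idx = np.arange(h * w).reshape(h, w)
    tl, tr = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    bl, br = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    faces = np.concatenate([np.stack([tl, bl, tr], 1), np.stack([tr, bl, br], 1)])
    return vertices, faces
```

Feeding a depth map such as the one from the earlier sketch into depth_to_mesh yields a mesh that, textured with the original photo, can be rendered from slightly shifted viewpoints to produce the parallax effect of a 3D photo.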
News Source: EurekAlert!