QualityPoint Technologies News

Emerging Technologies News

New system combines smartphone videos to create 4D visualizations

Posted on July 4, 2020

Researchers at Carnegie Mellon University have demonstrated that they can combine iPhone videos shot “in the wild” by separate cameras to create 4D visualizations that allow viewers to watch action from various angles, or even erase people or objects that temporarily block sight lines.

Imagine a visualization of a wedding reception, where dancers can be seen from as many angles as there were cameras, and the tipsy guest who walked in front of the bridal party is nowhere to be seen.
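The occluder-removal idea can be illustrated with a loose numerical sketch (this is not the CMU system's method, which involves far more machinery): once the views are aligned to a common viewpoint, a per-pixel median across the aligned frames keeps the static scene and discards anything, like a wandering guest, that appears in only a minority of views. The function and array names below are purely illustrative.

```python
import numpy as np

def remove_transient(frames):
    """Per-pixel median across aligned frames: a pixel occluded in a
    minority of views is replaced by the consensus background value."""
    stack = np.stack(frames, axis=0)        # shape (n_frames, H, W)
    return np.median(stack, axis=0)

# Toy example: a 4x4 "scene" seen by 5 aligned cameras; one camera's
# view has an occluder (value 99) blocking part of the scene.
background = np.arange(16, dtype=float).reshape(4, 4)
views = [background.copy() for _ in range(5)]
views[2][1:3, 1:3] = 99.0                   # occluder in one view only

clean = remove_transient(views)
print(np.allclose(clean, background))       # → True
```

The median succeeds here because only one of the five views is occluded at any pixel; with real handheld footage, alignment itself is the hard part.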

The videos can be shot independently from a variety of vantage points, as might occur at a wedding or birthday celebration.
It is also possible to record actors in one setting and then insert them into another.

“Virtualized reality” is nothing new, but in the past it has been restricted to studio setups, such as CMU’s Panoptic Studio, which boasts more than 500 video cameras embedded in its geodesic walls. Fusing visual information of real-world scenes shot from multiple, independent, handheld cameras into a single comprehensive model that can reconstruct a dynamic 3D scene simply hasn’t been possible.

The CMU researchers worked around that limitation by using convolutional neural nets (CNNs), a type of deep learning program that has proven adept at analyzing visual data. They found that scene-specific CNNs could be used to compose different parts of the scene.

The CMU researchers demonstrated their method using up to 15 iPhones to capture a variety of scenes — dances, martial arts demonstrations and even flamingos at the National Aviary in Pittsburgh.

The method also unlocks a host of potential applications in the movie industry and consumer devices, particularly as the popularity of virtual reality headsets continues to grow.

Though the method doesn’t necessarily capture scenes in full 3D detail, the system can limit playback angles so incompletely reconstructed areas are not visible and the illusion of 3D imagery is not shattered.
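One simple way to think about limiting playback angles (a hedged sketch, not the CMU implementation): if the system tracks how completely each viewing direction is reconstructed, it can restrict the virtual camera to directions whose coverage exceeds a threshold. The coverage values and bin layout below are hypothetical.

```python
import numpy as np

def allowed_azimuths(coverage, threshold=0.9):
    """Return the azimuth bins where playback is permitted, i.e. where
    reconstruction coverage (in [0, 1]) meets the threshold."""
    coverage = np.asarray(coverage, dtype=float)
    return np.flatnonzero(coverage >= threshold)

# Hypothetical per-bin coverage for 8 viewing directions around a scene.
coverage = [0.95, 0.97, 0.99, 0.85, 0.40, 0.30, 0.88, 0.93]
ok = allowed_azimuths(coverage)
print(ok.tolist())  # → [0, 1, 2, 7]
```

Clamping the virtual camera to such bins is what preserves the illusion: the viewer never sees the sparsely reconstructed side of the scene.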

News Source: Eurekalert
