QualityPoint Technologies News

Emerging Technologies News

Researchers Incorporate Computer Vision, Uncertainty Into AI for Robotic Prosthetics

Posted on June 1, 2020

Researchers have developed new software that can be integrated with existing hardware to enable people using robotic prosthetics or exoskeletons to walk in a safer, more natural manner on different types of terrain. The new framework incorporates computer vision into prosthetic leg control, and includes robust artificial intelligence (AI) algorithms that allow the software to better account for uncertainty.

Lower-limb robotic prosthetics need to execute different behaviors based on the terrain users are walking on.

The framework created by the researchers allows the AI in robotic prostheses to predict the type of terrain users will be stepping on, quantify the uncertainties associated with that prediction, and then incorporate that uncertainty into its decision-making.

The researchers focused on distinguishing between six different terrains that require adjustments in a robotic prosthetic’s behavior: tile, brick, concrete, grass, “upstairs” and “downstairs.”

If the degree of uncertainty is too high, the AI isn’t forced to make a questionable decision – it could instead notify the user that it doesn’t have enough confidence in its prediction to act, or it could default to a ‘safe’ mode.
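Implemented as a simple confidence gate, that decision logic might look like the sketch below. The threshold value, function names, and fallback behavior are illustrative assumptions, not the researchers' actual code; only the six terrain classes come from the study.

```python
# Uncertainty-gated terrain decision (illustrative sketch only).
# The six classes are from the study; the 0.8 threshold is an assumption.
TERRAINS = ["tile", "brick", "concrete", "grass", "upstairs", "downstairs"]

def decide(terrain_probs, threshold=0.8):
    """Act on a terrain prediction only when the classifier is confident
    enough; otherwise fall back to a 'safe' mode rather than force a guess."""
    best = max(range(len(terrain_probs)), key=lambda i: terrain_probs[i])
    if terrain_probs[best] < threshold:
        return "safe-mode"  # uncertainty too high: don't act on the prediction
    return TERRAINS[best]

# A confident prediction is acted on; an ambiguous one triggers safe mode.
print(decide([0.02, 0.01, 0.90, 0.03, 0.02, 0.02]))  # concrete
print(decide([0.30, 0.25, 0.20, 0.10, 0.10, 0.05]))  # safe-mode
```

The key design point is that the controller never has to commit to a low-confidence classification: the gate turns "which terrain?" into "which terrain, or admit I don't know?"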

The new “environmental context” framework incorporates both hardware and software elements. The researchers designed the framework for use with any lower-limb robotic exoskeleton or robotic prosthetic device, but with one additional piece of hardware: a camera. In their study, the researchers used cameras worn on eyeglasses and cameras mounted on the lower-limb prosthesis itself. The researchers evaluated how the AI was able to make use of computer vision data from both types of camera, separately and when used together.

The researchers found that using both cameras worked well but required a great deal of computing power and could be cost-prohibitive. However, they also found that the camera mounted on the lower limb alone performed well on its own, particularly for near-term predictions, such as what the terrain would be like for the next step or two.

The most significant advance, however, is to the AI itself.

The researchers came up with a better way to teach deep-learning systems how to evaluate and quantify uncertainty in a way that allows the system to incorporate uncertainty into its decision making.
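The article does not name the researchers' exact technique, but one standard way to make a deep network report its own uncertainty is Monte Carlo dropout: keep dropout active at inference time, run the same input through the network many times, and treat the spread of the outputs as an uncertainty estimate. The toy "network" below is purely an assumed illustration of that idea, not the authors' model.

```python
import random

def noisy_forward(x, weights, drop_p=0.5):
    """One stochastic forward pass: randomly drop each weighted input
    (stand-in for dropout in a real network), then return a single logit."""
    kept = [w * xi for w, xi in zip(weights, x) if random.random() > drop_p]
    return sum(kept)

def mc_uncertainty(x, weights, passes=100):
    """Run many stochastic passes on the same input; the mean is the
    prediction, the variance across passes is the uncertainty estimate."""
    outs = [noisy_forward(x, weights) for _ in range(passes)]
    mean = sum(outs) / passes
    var = sum((o - mean) ** 2 for o in outs) / passes
    return mean, var

random.seed(0)  # fixed seed so the sketch is reproducible
mean, var = mc_uncertainty([1.0, 2.0, 0.5], [0.4, -0.1, 0.7])
print(f"prediction={mean:.2f}, uncertainty={var:.2f}")
```

A variance-style estimate like this is what lets the downstream controller distinguish "confident prediction" from "plausible guess," and feed that distinction into its decision-making.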

To train the AI system, the researchers had able-bodied individuals wear the cameras while walking through a variety of indoor and outdoor environments. They then conducted a proof-of-concept evaluation by having a person with a lower-limb amputation wear the cameras while traversing the same environments.

The AI worked well even though it was trained on data from one group of people and then used by someone outside that group.

However, the new framework has not yet been tested in a robotic device.

News Source: North Carolina State University
