QualityPoint Technologies News

Emerging Technologies News


MIT’s finger-shaped sensor enables more dexterous robots

Posted on October 4, 2023

MIT engineers develop a long, curved touch sensor that could enable a robot to grasp and manipulate objects in multiple ways.

Imagine grasping a heavy object, like a pipe wrench, with one hand. You would likely grab the wrench using the entire length of your fingers, not just your fingertips. Sensory receptors in your skin, which run along the length of each finger, would send information to your brain about the tool you are grasping.

In a robotic hand, tactile sensors that use cameras to obtain information about grasped objects are small and flat, so they are often located in the fingertips. These robots, in turn, use only their fingertips to grasp objects, typically with a pinching motion. This limits the manipulation tasks they can perform.

MIT researchers have developed a camera-based touch sensor that is long, curved, and shaped like a human finger. Their device provides high-resolution tactile sensing over a large area. The sensor, called the GelSight Svelte, uses two mirrors to reflect and refract light so that one camera, located in the base of the sensor, can see along the entire finger’s length.

In addition, the researchers built the finger-shaped sensor with a flexible backbone. By measuring how the backbone bends when the finger touches an object, they can estimate the force being placed on the sensor.

They used GelSight Svelte sensors to produce a robotic hand that was able to grasp a heavy object like a human would, using the entire sensing area of all three of its fingers. The hand could also perform the same pinch grasps common to traditional robotic grippers.

Cameras used in tactile sensors are limited by their size, the focal distance of their lenses, and their viewing angles. Therefore, these tactile sensors tend to be small and flat, which confines them to a robot’s fingertips.

With a longer sensing area, one that more closely resembles a human finger, the camera would need to sit farther from the sensing surface to see the entire area. This is particularly challenging due to size and shape restrictions of a robotic gripper.

The researchers solved this problem using two mirrors that reflect and refract light toward a single camera located at the base of the finger.

GelSight Svelte incorporates one flat, angled mirror that sits across from the camera and one long, curved mirror that sits along the back of the sensor. These mirrors redistribute light rays from the camera in such a way that the camera can see along the entire finger’s length.

To optimize the shape, angle, and curvature of the mirrors, the researchers designed software to simulate reflection and refraction of light.

With this software, the researchers can easily experiment with where the mirrors are located and how they are curved, and predict how good the resulting image will be before they actually fabricate the sensor.
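The core of any such optical simulation is tracing rays as they bounce off mirrors. The sketch below is a minimal 2D illustration of specular reflection, not the researchers' actual software; the `reflect` helper and the 45-degree mirror angle are assumptions chosen for clarity:

```python
import numpy as np

def reflect(direction, normal):
    """Specular reflection of a ray: r = d - 2(d·n)n, with n a unit normal."""
    normal = normal / np.linalg.norm(normal)
    return direction - 2 * np.dot(direction, normal) * normal

# A ray leaving a camera at the finger's base, traveling up the finger axis.
d = np.array([0.0, 1.0])

# A flat mirror tilted 45 degrees redirects the view sideways toward the skin.
n = np.array([1.0, -1.0])  # (unnormalized) mirror surface normal
r = reflect(d, n)
print(r)  # the reflected ray now travels along the x axis: [1. 0.]
```

Repeating this bounce for many rays against a curved mirror profile is what lets a designer check, in simulation, whether the single base camera can cover the whole finger.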

[Image credit: MIT]

The mirrors, camera, and two sets of LEDs for illumination are attached to a plastic backbone and encased in a flexible skin made from silicone gel. The camera views the back of the skin from the inside; based on the deformation, it can see where contact occurs and measure the geometry of the object’s contact surface.

In addition, the red and green LED arrays give a sense of how deeply the gel is being pressed down when an object is grasped, due to the saturation of color at different locations on the sensor.

The researchers can use this color saturation information to reconstruct a 3D depth image of the object being grasped.
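As a rough illustration of this idea (not the researchers' actual calibration procedure), one can fit a per-pixel linear map from LED color intensities to indentation depth using calibration data; the coefficients and synthetic ground truth below are assumptions for the sketch:

```python
import numpy as np

# Synthetic calibration data: deeper presses change the red/green balance.
rng = np.random.default_rng(0)
red   = rng.uniform(0, 1, 200)
green = rng.uniform(0, 1, 200)
true_depth = 2.0 * red - 0.5 * green + 0.1   # assumed ground-truth relation

# Fit depth ≈ a*red + b*green + c by least squares.
A = np.column_stack([red, green, np.ones_like(red)])
coeffs, *_ = np.linalg.lstsq(A, true_depth, rcond=None)

# Reconstruct depth for a new pixel reading (red=0.8, green=0.3).
depth = coeffs @ np.array([0.8, 0.3, 1.0])
print(depth)  # ≈ 1.55
```

Doing this for every pixel yields a dense depth map of the contact patch, which is the 3D reconstruction described above.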

The sensor’s plastic backbone enables it to determine proprioceptive information, such as the twisting torques applied to the finger. The backbone bends and flexes when an object is grasped. The researchers use machine learning to estimate how much force is being applied to the sensor, based on these backbone deformations.
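The learned deformation-to-force mapping can be sketched with any regressor; a k-nearest-neighbor lookup over calibration examples is a minimal stand-in (the paper's actual model and features are not specified here, and the numbers below are synthetic):

```python
import numpy as np

def knn_force(train_feats, train_forces, query, k=3):
    """Predict force as the mean over the k nearest calibration examples."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    idx = np.argsort(dists)[:k]
    return train_forces[idx].mean()

# Synthetic calibration set: larger backbone-deformation features
# (e.g., image-difference magnitudes) correspond to larger applied forces.
feats  = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
forces = np.array([ 0.0,   1.0,   2.0,   3.0,   4.0 ])  # newtons

print(knn_force(feats, forces, np.array([2.1])))  # prints 2.0
```

In practice the features would come from comparing the camera image of the deformed backbone against its rest shape, and a richer model would be trained on many grasps.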

However, combining these elements into a working sensor was no easy task.

It took many experiments to arrive at a sensor that actually works.

Once they had perfected the design, the researchers tested the GelSight Svelte by pressing objects, like a screw, to different locations on the sensor to check image clarity and see how well it could determine the shape of the object.

[Image credit: MIT]

They also used three sensors to build a GelSight Svelte hand that can perform multiple grasps, including a pinch grasp, lateral pinch grasp, and a power grasp that uses the entire sensing area of the three fingers. Most robotic hands, which are shaped like parallel-jaw grippers, can only perform pinch grasps.

A three-finger power grasp enables a robotic hand to hold a heavier object more stably. However, pinch grasps are still useful when an object is very small. Being able to perform both types of grasps with one hand would give a robot more versatility.
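A simple way to picture this versatility is a grasp-selection heuristic keyed to object size and weight. The function and thresholds below are purely illustrative assumptions, not from the paper:

```python
def choose_grasp(width_mm, weight_g):
    """Hypothetical heuristic: pick a grasp type for an object.

    Thresholds are illustrative only."""
    if width_mm < 20:
        return "pinch"          # small objects: fingertip pinch
    if weight_g > 500:
        return "power"          # heavy objects: full-finger power grasp
    return "lateral_pinch"      # everything else

print(choose_grasp(10, 50))    # prints pinch
print(choose_grasp(80, 900))   # prints power
```

A hand limited to parallel-jaw pinching has no "power" branch at all; full-finger sensing is what makes the heavier-object case feasible.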

Moving forward, the researchers plan to enhance the GelSight Svelte so the sensor is articulated and can bend at the joints, more like a human finger.

Optical-tactile finger sensors allow robots to use inexpensive cameras to collect high-resolution images of surface contact; by observing the deformation of a flexible surface, the robot estimates the contact shape and the forces applied. This work advances the GelSight finger design, adding full-finger coverage and the ability to approximate bending torques from image differences and machine learning. Bringing a robot's sense of touch closer to human ability is a necessity, and perhaps the catalyst problem, for developing robots capable of complex, dexterous tasks.

News Source: MIT
