AI Researchers Are Teaching Robots to Mimic Human Dexterity

Researchers are making monumental strides in enhancing robotic dexterity and tactile sensing. The goal? Robots that can manipulate objects with the finesse and precision of human hands.

At the forefront of this research field is a groundbreaking study from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL). The team tackled the intricate challenge of contact-rich manipulation, a domain where robots interact with objects in complex ways.

“The main challenge for planning through contact is the hybrid nature of contact dynamics,” the study notes.

Reinforcement learning is a technique in which an AI model learns through trial and error, guided by rewards and penalties. The MIT researchers drew on an idea from reinforcement learning called “smoothing,” which averages over the many possible ways a robot might make and break contact with an object. This turns the abrupt, on-off dynamics of physical contact into something a planner can reason about efficiently, rather than trying to replicate the full sensory process of a living being.
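To make the reward-and-penalty idea concrete, here is a minimal toy sketch of reward-driven learning: an agent repeatedly tries one of two actions and nudges its value estimates toward the rewards it observes. This is purely illustrative and not the MIT team's actual method; all names and numbers below are invented for the example.

```python
import random

def train(steps=2000, lr=0.1, epsilon=0.1, seed=0):
    """Toy reward-driven learner: estimate the value of two actions."""
    rng = random.Random(seed)
    values = [0.0, 0.0]        # the agent's current estimate of each action's value
    true_rewards = [0.2, 1.0]  # hidden payoffs; a low reward acts as a "punishment"
    for _ in range(steps):
        # Epsilon-greedy: mostly pick the best-looking action, sometimes explore.
        if rng.random() < epsilon:
            action = rng.randrange(2)
        else:
            action = max(range(2), key=lambda i: values[i])
        # Observe a noisy reward and move the estimate toward it.
        reward = true_rewards[action] + rng.gauss(0, 0.1)
        values[action] += lr * (reward - values[action])
    return values

estimates = train()
```

After training, the agent's estimate for the high-reward action sits well above the other, so it reliably chooses the better action. Techniques like the smoothing described above tackle a far harder version of this problem, where rewards depend on complex, discontinuous contact physics.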

What’s more, their method, combined with sampling-based motion planning, paves the way for more intricate manipulation involving numerous contact points. In other words, using two hands to manipulate and interact with an object. Their experiments showed that intricate movements can be generated in mere minutes, a significant leap from the hours demanded by traditional reinforcement learning methods.

More Robots Learning With AI

Parallel to this, the University of Bristol in the UK unveiled “Bi-Touch,” a pioneering dual-arm tactile robotic system. “We propose a suite of bimanual manipulation tasks tailored towards tactile feedback: bi-pushing, bi-reorienting, and bi-gathering,” the research paper reads. Through sim-to-real deep reinforcement learning, the system can master intricate manipulation tasks, such as collaboratively pushing objects and skillfully rotating them.

On the West Coast, Stanford University researchers are teaching robots complex tasks using human video demonstrations. Their method, which employs masked eye-in-hand camera footage, sidesteps the need for costly image translations between human and robot domains.

“Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for robotic teleoperation expertise,” the researchers argue in their academic paper.

Basically, just as people learn by watching YouTube tutorials, these researchers are using videos to teach their robots new tasks, and their approach boosted success rates in unseen test environments by an impressive 58% compared to training on robot data alone.

These groundbreaking studies collectively pave the way for robots capable of nuanced object manipulation akin to human abilities. Such advancements could redefine industries, from manufacturing lines to operating rooms. Imagine a surgical procedure where a robot, powered by AI, assists a surgeon, enhancing precision and outcomes.

So, sci-fi aficionados out there, fear not. A future of friendly helper robots still leaves room for humanity to coexist with the occasional charming robot curmudgeon. As long as the robots stick to bickering with their human companions rather than eliminating them, we should be in the clear.

Source: https://decrypt.co/153646/ai-researchers-are-teaching-robots-to-mimic-human-dexterity
