
**Revolutionizing Robotics: OpenAI’s Subsidiary Develops AI Model for Human-Like Task Learning**

But can it graduate from the lab to the warehouse floor?

In the summer of 2021, OpenAI quietly shut down its robotics division, citing a shortage of the data needed to train robots with artificial intelligence as the obstacle that had stalled progress.

Three former OpenAI research scientists, who established Covariant in 2017, now claim to have resolved this issue. They introduced a groundbreaking system named RFM-1 that merges the reasoning capabilities of large language models with the physical agility of advanced robots.

RFM-1 was trained on extensive data collected from Covariant's fleet of item-picking robots, deployed worldwide at companies such as Crate & Barrel and Bonprix, along with text and videos from the internet. The model will be released to Covariant's customers in the coming months, and the company expects it to grow more capable and efficient as it operates in real-world settings.

During a recent demonstration, Covariant's founders, Peter Chen and Pieter Abbeel, showed how users can prompt the model with a range of inputs: text, images, video, robot instructions, and measurements. Shown an image of a bin filled with sports equipment and told to pick up the pack of tennis balls, for example, the model can direct the robot to retrieve the item and predict what the scene will look like after the action.
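To make the interaction concrete, here is a minimal sketch of what a multimodal prompt-to-action loop of this kind might look like. All names here (`MultimodalPrompt`, `plan_pick`) are hypothetical illustrations, not Covariant's actual API, and the planner is a toy stand-in for the model:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch: bundle the input modalities the article describes
# (text, images, video, measurements) into a single prompt object.
@dataclass
class MultimodalPrompt:
    text: str                              # natural-language instruction
    image: Optional[bytes] = None          # e.g. a camera frame of the bin
    video: Optional[bytes] = None          # optional video context
    measurements: dict = field(default_factory=dict)  # e.g. bin dimensions

def plan_pick(prompt: MultimodalPrompt) -> list[str]:
    """Toy stand-in for the model: turn an instruction into robot steps."""
    target = prompt.text.lower().replace("pick up the ", "").rstrip(".")
    return [
        f"locate {target}",
        f"grasp {target}",
        "lift",
        "predict post-action scene",  # the model also forecasts the outcome
    ]

steps = plan_pick(MultimodalPrompt(text="Pick up the tennis balls."))
# steps[0] == "locate tennis balls"
```

In a real system the planner would be the learned model itself, and the final step would emit a predicted video of the bin rather than a string.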

This marks a significant shift in how robots adapt to their surroundings: they rely on training data rather than the intricate, task-specific programming that defined the previous generation of industrial robots. It also points toward worksites where instructions can be issued to robots in natural language, much as they would be to a human worker.

Lerrel Pinto, a researcher specializing in general-purpose robotics and AI at New York University, praised Covariant for deploying, at commercial scale, a robot that can communicate through multiple modalities. To stay ahead of competitors, however, the company will need to gather substantial amounts of data so the robot can work effectively in diverse environments, handling unfamiliar objects and situations.

While Covariant asserts that the model exhibits "human-like" reasoning abilities, it acknowledges limitations. During a live demonstration, the robot struggled when asked to "return the banana to Tote Two," apparently because the concept of returning an item was largely absent from its training data. The episode underscores how much the model's performance depends on robust training data.

The introduction of Covariant’s new model exemplifies a paradigm shift in robotics, transitioning from manual instruction-based learning to observational learning akin to human cognition. By leveraging millions of observations, researchers aim to equip robots with the flexibility to tackle a wide array of tasks effectively.

As the landscape of AI-powered robotics expands, with companies like Figure AI and Boston Dynamics integrating AI into their systems, the convergence of machine learning and robotics is poised to accelerate. However, challenges persist, such as ensuring fair compensation for data sources and addressing biases in models.

Despite these challenges, Covariant remains committed to advancing RFM-1's capabilities through continuous learning and refinement. The researchers envision a future in which the robot can train itself on videos the model generates, a self-reinforcing loop that raises questions about compounding errors and biases. Nonetheless, given the field's relentless appetite for more training data, the researchers see this evolution as an inevitable next step in robotics.

Last modified: March 11, 2024