Google DeepMind unites researchers in bid to create an ImageNet of robot actions

Of all the holy grails in robotics, learning may well be the holiest. In an era when the term “general purpose” is tossed around with abandon, however, it can be difficult for non-roboticists to understand what today’s systems can — and can’t — do. The truth is that most robots these days are built to do one thing (or a couple, if you’re lucky) really well.

It’s a truth that spans the industry, from the lowliest robot vacuum to the most advanced industrial system. So, how do we make the transition from single to general purpose robotics? Certainly, there are going to be a lot of stops in multipurpose land along the way.

The answer is, of course, robot learning. Walk into nearly any robotics research lab these days and you will find teams working to tackle the issue. The same applies to startups and corporations. Look at companies like Viam and Intrinsic, which are working to lower the bar of entry for robot programming.

Solutions run a fairly wide gamut at the moment, but it has become increasingly clear to me that this is an issue that won’t be solved by a single silver bullet. Rather, building more complex and capable systems will almost certainly involve a combination of solutions. Central to most of these, however, is a need for a large, shared dataset.

Google’s DeepMind robotics team this week announced work it has done with 33 research institutes to create a massive, shared database called Open X-Embodiment. The researchers behind the project liken it to ImageNet, a database of more than 14 million images that dates back to 2009.

“Just as ImageNet propelled computer vision research, we believe Open X-Embodiment can do the same to advance robotics,” note DeepMind researchers Quan Vuong and Pannag Sanketi. “Building a dataset of diverse robot demonstrations is the key step to training a generalist model that can control many different types of robots, follow diverse instructions, perform basic reasoning about complex tasks and generalize effectively.”

They add that such a task is far too large to entrust to a single lab. The database features more than 500 skills and 150,000 tasks pulled from 22 different robot types. As the “Open” bit of the name implies, its creators are making the data available to the research community.

“We hope that open sourcing the data and providing safe but limited models will reduce barriers and accelerate research,” the team adds. “The future of robotics relies on enabling robots to learn from each other, and most importantly, allowing researchers to learn from one another.”
