Researchers have developed an algorithm that gives robots the ability to teach themselves physical tasks, compressing the equivalent of about a hundred years of practice into a few days of training inside a computer simulation.
MIT Technology Review reported on a new AI-driven robotic hand, dubbed Dactyl, capable of executing dexterous physical tasks. The system pairs a robotic hand built by Shadow, a UK company, with a learning system developed by OpenAI.
Dactyl accomplished these physical tasks using a machine-learning technique known as reinforcement learning, which is inspired by the way animals learn through positive and negative feedback: the system earns rewards for actions that move it toward its goal and learns from trial and error. OpenAI strengthened this approach by introducing random variations into the simulated learning environment, so the skills learned in simulation hold up under conditions the simulator cannot model exactly. The hand is nowhere near as agile as a human's at the moment, but more processing power and broader randomization may improve its capabilities in the future.
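To make the idea concrete, here is a minimal sketch of reinforcement learning combined with randomized training conditions. This is not OpenAI's actual code: it is a toy tabular Q-learning agent in a hypothetical one-dimensional corridor, where each episode re-samples a "slip" probability (the chance an action moves the wrong way), standing in for the random variations OpenAI injected into its simulator.

```python
import random

random.seed(0)  # for reproducibility of this toy example

# Hypothetical toy environment: an agent must walk from cell 0 to cell N-1.
N = 8                 # corridor length (illustrative choice)
ACTIONS = (-1, +1)    # step left / step right
q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}  # Q-value table
alpha, gamma, eps = 0.5, 0.95, 0.1   # learning rate, discount, exploration

def step(state, action, slip):
    """Move, slipping the wrong way with probability `slip`."""
    move = -action if random.random() < slip else action
    nxt = min(max(state + move, 0), N - 1)
    reward = 1.0 if nxt == N - 1 else -0.01   # reward for reaching the goal
    return nxt, reward, nxt == N - 1

for episode in range(2000):
    # Randomized dynamics: each episode gets a different slip probability,
    # so the learned policy must work across many environment variants.
    slip = random.uniform(0.0, 0.3)
    s, done, t = 0, False, 0
    while not done and t < 100:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2, r, done = step(s, a, slip)
        # Q-learning update: nudge the estimate toward the observed reward
        # plus the discounted value of the best action from the next state.
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s, t = s2, t + 1

# The greedy policy after training: the preferred action in each non-goal cell.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N - 1)}
print(policy)
```

Because the agent never sees one fixed environment, it cannot overfit to a single slip probability; the same intuition, applied at vastly larger scale to physics parameters like friction and object mass, is what lets a policy trained purely in simulation operate a real robotic hand.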
For now, work continues on improving the hand's dexterity. But when machines eventually approach human agility, many fields will see major advances, and we will need a strategy that lets humans still benefit even as more jobs are replaced by machines. That is what organizations like MDI are working on: MDI, an offshoot of BGF, is currently developing standards intended to ensure AI benefits humanity.