MIT presents AI frameworks that compress models and encourage agents to explore

May 4, 2020 | News

In a pair of papers accepted to the International Conference on Learning Representations (ICLR) 2020, MIT researchers investigated new ways to motivate software agents to explore their environments, along with pruning algorithms that make AI apps run faster. Taken together, the twin approaches could foster the development of autonomous industrial, commercial, and home machines that require less computation yet are more capable than products currently in the wild. (Think an inventory-checking robot, built atop a Raspberry Pi, that swiftly learns to navigate grocery store aisles.)

One team created a meta-learning algorithm that generated 52,000 exploration algorithms, that is, algorithms that drive agents to widely explore their surroundings. Two of those it identified were entirely new and produced exploration that improved learning in a range of simulated tasks, from landing a moon rover and raising a robotic arm to moving an ant-like robot.
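Exploration algorithms of this kind typically shape the reward an agent learns from so that unfamiliar states become temporarily more attractive. As a rough illustration of that general idea only, and not one of the paper's meta-learned programs, here is a minimal count-based exploration bonus in Python; the class name and the `bonus_scale` parameter are assumptions made for the sketch.

```python
from collections import defaultdict


class CountBasedExplorationBonus:
    """Toy intrinsic-reward wrapper: rarely visited states earn a larger bonus.

    Illustrative sketch of an exploration bonus in general, not the
    meta-learned exploration algorithms described in the paper.
    """

    def __init__(self, bonus_scale=0.1):
        self.bonus_scale = bonus_scale          # assumed hyperparameter
        self.visit_counts = defaultdict(int)    # state -> number of visits

    def reward(self, state, extrinsic_reward):
        self.visit_counts[state] += 1
        # Bonus decays as 1 / sqrt(visit count), so novel states pay more.
        bonus = self.bonus_scale / (self.visit_counts[state] ** 0.5)
        return extrinsic_reward + bonus


# Usage: wrap the environment reward before handing it to the learner.
bonus = CountBasedExplorationBonus(bonus_scale=0.05)
shaped_reward = bonus.reward(state=(3, 7), extrinsic_reward=0.0)
```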

In the second of the two studies, an MIT team describes a framework that reliably compresses models so that they’re able to run on resource-constrained devices. While the researchers admit that they don’t understand why it works as well as it does, they claim it’s easier and faster to implement than other compression methods, including those that are considered state of the art.
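Pruning-based compression of this sort generally works by removing the lowest-magnitude weights from a trained network and keeping only the rest. The sketch below shows one common variant, global magnitude pruning, in Python with NumPy; the function name and the `prune_fraction` value are assumptions for illustration, and this is not necessarily the exact procedure the MIT framework uses.

```python
import numpy as np


def global_magnitude_prune(weights, prune_fraction=0.8):
    """Zero out the smallest-magnitude weights across all layers.

    weights: list of NumPy arrays, one per layer.
    prune_fraction: share of parameters to remove (assumed value).
    Returns pruned copies of the weight arrays plus binary masks.
    """
    all_magnitudes = np.concatenate([np.abs(w).ravel() for w in weights])
    threshold = np.quantile(all_magnitudes, prune_fraction)

    pruned, masks = [], []
    for w in weights:
        mask = (np.abs(w) > threshold).astype(w.dtype)
        pruned.append(w * mask)
        masks.append(mask)
    return pruned, masks


# Example: prune two random layers down to roughly 20% of their parameters.
layers = [np.random.randn(256, 128), np.random.randn(128, 10)]
pruned_layers, masks = global_magnitude_prune(layers, prune_fraction=0.8)
```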

The original article can be found here.

To support AI applications worldwide, the Artificial Intelligence World Society Innovation Network (AIWS.net) created the AIWS Young Leaders program, which includes MIT researchers as well as young leaders and experts from Australia, Austria, Belgium, Britain, Canada, Denmark, Estonia, France, Finland, Germany, Greece, India, Italy, Japan, Latvia, the Netherlands, New Zealand, Norway, Poland, Portugal, Russia, Spain, Sweden, Switzerland, the United States, and Vietnam.