Using well-established machine learning techniques, researchers from the University of California, Berkeley have taught simulated humanoids to perform more than 25 natural motions, from somersaults and cartwheels to high leg kicks and breakdancing. The technique could lead to more realistic video game animation and more agile robots.
[…]
UC Berkeley graduate student Xue Bin “Jason” Peng and his colleagues have combined two techniques, motion-capture technology and deep reinforcement learning, to create something new: a system that teaches simulated humanoids to perform complex physical tasks in a highly realistic manner. Learning from scratch, with limited human intervention, the digital characters learned how to kick, jump, and flip their way to success. What’s more, they even learned how to interact with objects in their environment, such as barriers placed in their way or objects hurled directly at them.
[…]
The new system, dubbed DeepMimic, works a bit differently. Instead of pushing the simulated character toward a specific end goal, such as walking, DeepMimic uses motion-capture clips to “show” the AI what the end goal is supposed to look like. In experiments, Peng’s team took motion-capture data for more than 25 different physical skills, from running and throwing to jumping and backflips, to “define the desired style and appearance” of each skill, as Peng explained on the Berkeley Artificial Intelligence Research (BAIR) blog.
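In practice, “showing” the AI the goal comes down to a reward function: at every frame, the simulated character is scored on how closely its motion tracks the mocap clip. Here is a minimal Python sketch in the spirit of the reward described in the DeepMimic paper; the weights and scales mirror the paper’s pose and velocity terms, but the full reward also compares end-effector and center-of-mass positions, which are omitted here for brevity.

```python
import numpy as np

def imitation_reward(sim_pose, ref_pose, sim_vel, ref_vel,
                     w_pose=0.65, w_vel=0.1):
    """Score one simulation frame against the mocap reference frame."""
    # Squared joint-angle and joint-velocity errors between the
    # simulated character and the reference motion at this frame.
    pose_err = np.sum((sim_pose - ref_pose) ** 2)
    vel_err = np.sum((sim_vel - ref_vel) ** 2)
    # Exponentials map errors into (0, 1]: a perfect match scores 1,
    # and the reward decays smoothly as the character drifts off-clip.
    r_pose = np.exp(-2.0 * pose_err)
    r_vel = np.exp(-0.1 * vel_err)
    return w_pose * r_pose + w_vel * r_vel
```

Because the reward peaks when the simulated motion matches the clip, the character is free to discover its own muscle-level control, so long as the result looks like the demonstration.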
Results didn’t happen overnight. The virtual characters tripped, stumbled, and fell flat on their faces repeatedly before they finally got the movements right. Each skill took about a month of simulated “practice” to develop, with the humanoids going through literally millions of trials trying to nail the perfect backflip or flying leg kick. But each failure prompted an adjustment that brought the character closer to the desired motion.
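That adjust-after-each-failure cycle is a standard reinforcement learning loop. The sketch below is a hedged illustration, with `env` and `policy` as hypothetical stand-ins for a physics simulator and a neural-network controller; the actual system updates its policy with PPO, and the paper’s “reference state initialization” restarts attempts from random points in the clip, as reflected here.

```python
def train_skill(env, policy, iterations=1_000_000):
    """Trial-and-error loop: attempt the skill, score the attempt
    frame-by-frame against the mocap clip, and adjust the policy.
    `env` and `policy` are hypothetical stand-ins, not DeepMimic's API."""
    for _ in range(iterations):
        states, actions, rewards = [], [], []
        # Starting each attempt at a random point in the clip lets the
        # character practice hard mid-motion states (e.g. mid-backflip).
        state = env.reset_to_random_clip_frame()
        done = False
        while not done:
            action = policy.act(state)
            state, reward, done = env.step(action)
            states.append(state)
            actions.append(action)
            rewards.append(reward)
        # Even a failed attempt nudges the policy toward the actions
        # that earned higher imitation reward along the way.
        policy.update(states, actions, rewards)
```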
Using this technique, the researchers were able to produce agents that behaved in a highly realistic, natural manner. Impressively, the bots could also handle never-before-seen conditions, such as challenging terrain or obstacles. This robustness was a byproduct of the reinforcement learning, not something the researchers had to engineer specifically.
“We present a conceptually simple [reinforcement learning] framework that enables simulated characters to learn highly dynamic and acrobatic skills from reference motion clips, which can be provided in the form of mocap data [i.e. motion capture] recorded from human subjects,” writes Peng. “Given a single demonstration of a skill, such as a spin-kick or a backflip, our character is able to learn a robust policy to imitate the skill in simulation. Our policies produce motions that are nearly indistinguishable from mocap.” He adds, “We’re moving toward a virtual stuntman.”
Not content to stop at humanoids, the researchers used DeepMimic to create realistic movements for simulated lions, dinosaurs, and mythical beasts. They even created a virtual version of Atlas, the humanoid robot voted most likely to destroy humanity. The platform could conceivably be used to produce more realistic computer animation, as well as to test robot designs virtually.
Source: After Millions of Trials, These Simulated Humans Learned to Do Perfect Backflips and Cartwheels