
Algorithms go to Hollywood

Speed read
  • Artificial intelligence learns to mimic human motion.
  • XSEDE supercomputer simulations reduce AI learning time from years to hours.
  • Film, dance, and gaming among the many applications for the new motion technology.

For its upcoming blockbuster, Super-Duper Films cast Hunk E. Dory in the lead role. After months of filming in exotic locales, on green screens, and on the mean streets of Los Angeles, the producers realize a few action scenes are missing. But because Mr. Dory is busy filming other projects, they are worried they will blow their budget reshooting the scenes. Whatever will they do?

This hypothetical scenario is a very real challenge for filmmakers today. Once again, technology is primed to fill in the gaps. Philippe Pasquier, a professor and artificial intelligence researcher at Simon Fraser University, is merging art and science to create systems that can understand and produce human-quality movement (no need to get Mr. Dory back in the studio).

Walk like a man. A demo of Walknet: An interactive tool for generating expressive motion. Courtesy Philippe Pasquier.

“Today, the main use of computers is for creative tasks where there are no optimal solutions,” Pasquier says. “With our project, we’re building systems that can analyze and learn how humans move and generate new movement.”

Pasquier's project, Movement Style Machine, uses deep learning techniques and relies on collaboration with artists to create new algorithms. After recording movement from actors and dancers in a motion capture studio, Pasquier feeds these recordings into a learning algorithm, which builds a model of how a person moves. From there, he can ask the model for movement that has never been recorded.
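To make the idea concrete, here is a minimal, hypothetical sketch (in PyTorch, not the Metacreation Lab's actual Movement Style Machine code) of what "learn from recorded motion, then generate movement that was never recorded" can look like: an autoregressive model is trained to predict the next motion-capture frame, then rolled forward to synthesize new frames. The frame layout, clip length, and model size are illustrative assumptions.

```python
# Illustrative sketch only: train a sequence model on motion-capture frames,
# then generate new frames the performer never recorded. Assumes each frame
# is a flat vector of joint values exported from a mocap session.
import torch
import torch.nn as nn

FRAME_DIM = 63        # e.g. 21 joints x 3 values per joint (assumed layout)
SEQ_LEN = 120         # ~1 second of motion at 120 fps (assumed)

class MotionModel(nn.Module):
    """Predicts the next pose from the poses seen so far."""
    def __init__(self, frame_dim=FRAME_DIM, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(frame_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, frame_dim)

    def forward(self, frames):                 # frames: (batch, time, frame_dim)
        out, _ = self.rnn(frames)
        return self.head(out)                  # predicted next frame at each step

def train(model, clips, epochs=10, lr=1e-3):
    """clips: recorded mocap clips, shape (n_clips, SEQ_LEN, FRAME_DIM)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        pred = model(clips[:, :-1])            # predict frame t+1 from frames <= t
        loss = loss_fn(pred, clips[:, 1:])
        opt.zero_grad()
        loss.backward()
        opt.step()

def generate(model, seed, n_frames=240):
    """Roll the model forward from a short seed clip to synthesize new motion."""
    frames = list(seed.unbind(0))              # seed: (seed_len, FRAME_DIM)
    with torch.no_grad():
        for _ in range(n_frames):
            context = torch.stack(frames).unsqueeze(0)
            next_frame = model(context)[0, -1]
            frames.append(next_frame)
    return torch.stack(frames)

if __name__ == "__main__":
    model = MotionModel()
    fake_clips = torch.randn(8, SEQ_LEN, FRAME_DIM)   # stand-in for real mocap data
    train(model, fake_clips, epochs=2)
    new_motion = generate(model, fake_clips[0, :30])
    print(new_motion.shape)                           # (270, 63): synthesized frames
```

In practice, models like this are trained on many hours of recorded performance rather than random stand-in data, which is where the supercomputing resources described below come in.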

To run their algorithms, Pasquier’s team uses the Texas Advanced Computing Center's (TACC) Stampede supercomputer, one of the 10 most powerful computers in the world.

“On a single computer, running our algorithms would take years; on medium-sized resources, months. But using resources from the Extreme Science and Engineering Discovery Environment (XSEDE), we can train some of our most complex models within 24 hours,” Pasquier says.

To validate their algorithms, Pasquier runs experiments in which people watch movement performed by humans alongside movement generated by the algorithm. If viewers can't tell the difference between the two, the system is considered human-competitive.
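As an illustration of how such a study might be scored (a hypothetical forced-choice setup, not necessarily Pasquier's exact protocol), imagine each viewer guesses which of a pair of clips was generated by the algorithm; the tally is then checked against chance.

```python
# Illustrative sketch (assumed forced-choice design): each viewer sees pairs of
# clips, one human-recorded and one algorithm-generated, and guesses which is
# which. If correct guesses are no better than chance, the generated motion is
# considered human-competitive.
from math import comb

def binomial_p_value(correct, trials, chance=0.5):
    """One-sided exact probability of seeing >= `correct` right guesses by luck."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

def human_competitive(correct, trials, alpha=0.05):
    """True if viewers could NOT reliably pick out the generated clips."""
    return binomial_p_value(correct, trials) > alpha

if __name__ == "__main__":
    # Hypothetical results: 100 pair judgments, 54 correct identifications.
    print(binomial_p_value(54, 100))   # ~0.24: consistent with guessing
    print(human_competitive(54, 100))  # True: viewers can't tell the difference
```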

According to Pasquier, there are numerous applications for such human-competitive systems. One is dance, where each dancer has an individual style of movement.

“We teach our system a dancer’s style and the system can show the movements that they do not do,” Pasquier says. “Sometimes it’s good for a dancer to be reminded of types of movement they don't use so they can expand their movement vocabulary.”

Dancers can create new works through Pasquier's platform iDanceForm. The software helps choreographers create movement when there are space and financial limitations.

The researchers are also training systems to detect mood from movement. They first tested whether people could interpret emotions from movement, then asked whether machines could do the same. Pasquier found that their systems were able both to generate movement reflecting different emotions and to detect the intended emotion.
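As a rough illustration (with assumed features and labels, not the lab's actual method), detecting emotion from movement can be framed as a standard classification problem over simple kinematic statistics:

```python
# Illustrative sketch: classify the intended emotion of a motion clip from
# simple kinematic statistics. Features, labels, and data are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

EMOTIONS = ["happy", "sad", "angry", "calm"]   # hypothetical label set

def kinematic_features(clip):
    """clip: array (frames, joints*3). Summarize how fast and jerky the motion is."""
    velocity = np.diff(clip, axis=0)
    acceleration = np.diff(velocity, axis=0)
    return np.array([
        np.abs(velocity).mean(),        # overall speed
        np.abs(acceleration).mean(),    # jerkiness
        clip.std(),                     # how widely the body moves through space
    ])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in data: 200 random clips with random emotion labels.
    clips = rng.normal(size=(200, 120, 63))
    labels = rng.integers(0, len(EMOTIONS), size=200)

    X = np.stack([kinematic_features(c) for c in clips])
    clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
    guess = clf.predict(kinematic_features(clips[0]).reshape(1, -1))[0]
    print("predicted emotion:", EMOTIONS[guess])
```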

Metaview of a metaview. An overview of the projects underway in Pasquier's Metacreation Lab. Courtesy Philippe Pasquier.

Generating new movement based on old patterns and expression cues is also applicable to the film and gaming industries. In the case of Mr. Dory, there would be no need to bring him back to the studio. Using this technology, Super-Duper Films could create scenes with several characters interacting without the actors present.

“As far as we know, neither the film industry nor the game industry uses this type of technology yet. These are fresh results from our Metacreation Lab,” Pasquier says. “There are other approaches for computer-assisted animation, but ours is new and more powerful and versatile than the existing ones.”

Read the original version of this article here.


Movement Style Machine is supported by funding from the Social Sciences and Humanities Research Council (SSHRC) and Moving Stories, a collaborative research partnership to develop digital tools for movement.



