
Artificial intelligence gives immortality to TV characters

Speed read
  • On-screen characters can live forever thanks to an avatar-creating system.
  • Algorithms train computers to mine audio and video databases to create new content.
  • Proof-of-concept points to a new era in human-computer interaction.

Everyone has a favorite on-screen character. Perhaps you like Super Hans from Peep Show, or maybe Frank Burns from M*A*S*H, or Creed Bratton from The Office.

But just because your show has run its course, that doesn’t mean you’ll never see them again. Reruns are one thing, but what if there was a way to see the old favorites doing new material? And what if you could interact with that character?

Friends forever. Researchers have demonstrated that an avatar can be trained to mine an audio-visual database and create new material. Your favorite characters now live forever. Courtesy James Charles.

That’s the potential offered by the Virtual Immortality project out of the University of Leeds.

A team of scientists led by David Hogg has just shown that by training a computer on a backlog of episodes and transcripts, an avatar can be created that performs all-new audio-visual content for a favorite (or not so favorite) on-screen character.

The project arose from a collaboration between Leeds and Oxford that trained a computer to interpret BBC television sign language with the potential of improving human-computer interaction for the hearing impaired.

Spurred by their success, Hogg suggested similar TV broadcasts could also be used to train avatars of TV characters.

Choosing Friends wisely

To demonstrate the concept, the researchers focused on the Joey Tribbiani character from the ubiquitous Friends sitcom. Made famous by Matt LeBlanc, Joey is a beloved dolt, an idiot-savant with a recognizable personality and style of speech – an “ideal test bed for our experiments,” says James Charles, Leeds research fellow and first author of the recently published research.

Computer training. An artificially intelligent avatar is trained from a character's transcripts, and speech audio and mouth motion are synthesized in the character's style and voice. Courtesy James Charles.

“We are interested in TV data because there is a lot of it available for training avatars. On the other hand, obtaining bespoke data for training can be very expensive and time consuming. We have developed the first step toward reducing the cost of such systems and tapping into existing TV show data,” he says.

Friends has a huge backlog (10 seasons, 236 episodes in all). The Virtual Immortality system watches and listens to every episode, identifies on-screen characters, locates their faces and mouths, and processes the spoken audio. Coding and tagging this by hand would ordinarily require a prohibitive amount of time; automated processing cut it to three hours.
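The indexing step described above — watch every episode, identify who is speaking, and file away each mouth movement by sound — can be sketched in plain Python. The episode records and field names below are hypothetical stand-ins; the real system derives this information from face detection and speech recognition rather than receiving it directly.

```python
from collections import defaultdict

def build_character_database(episodes):
    """Index episode records into a per-character, per-phoneme snippet store.

    `episodes` is a list of dicts with hypothetical fields: 'id', and
    'utterances' -> list of (character, phoneme, mouth_clip_ref) tuples.
    """
    db = defaultdict(lambda: defaultdict(list))
    for ep in episodes:
        for character, phoneme, clip_ref in ep["utterances"]:
            # Store every observed mouth snippet under (character, phoneme)
            db[character][phoneme].append((ep["id"], clip_ref))
    return db

# Mock data standing in for two processed episodes
episodes = [
    {"id": "s01e01", "utterances": [("joey", "ah", "clip_001"),
                                    ("joey", "ch", "clip_002")]},
    {"id": "s01e02", "utterances": [("joey", "ah", "clip_007")]},
]
db = build_character_database(episodes)
```

Once built, the store can answer "show me every time this character made this sound," which is the raw material for synthesis.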

After data collection is complete, new content can be created. Phonemes are the basic units of speech sound (small sounds like ‘ch’ or ‘ah’). Dynamic visemes, small sections of mouth video corresponding to those sounds, are assembled one phoneme at a time and then blended onto the existing footage.

"We are trying to see if it is possible to capture in a computational model what it is that makes a person, who they are, how they sound, how they move, how they communicate." ~ James Charles.

Joey is the first television character to be virtually immortalized, and as more of the seasons are used to train the avatar, the scientists expect its rendering, sound, and likeness to improve.

“The synthesis is all automatic, and the exact lip motion of a character is constructed from existing snippets of character mouth motion – this is a very difficult problem,” explains Charles. “The visual and sound element of the avatar is currently only built from the first three seasons of Friends, so we expect using the remaining seasons would further improve performance.”

The scientists intend the current proof-of-concept to highlight the system’s ability to collect data from a video backlog and to demonstrate automatic avatar training.

Futurerama

Charles says his team expects to enhance the avatar’s functionality and make it truly interactive.

“With more refinement in the rendering of sound and video, one could imagine using this avatar as a personal assistant, or creating a more natural form of human-computer interaction. And, with many avatars of different characters, it could potentially be used by production companies to effortlessly create new scenes in TV shows.”

Play it again, Sam. Joey Tribbiani from the long-running sitcom Friends was the test case for the Virtual Immortality project. Computer vision and speech recognition algorithms enable a computer to create new video content. Courtesy James Charles.

But beyond the important entertainment value that will ensure we never run out of episodes of The Love Boat, there is a deeper significance to the work.

“We are also trying to see if it is possible to capture in a computational model what it is that makes a person, who they are, how they sound, how they move, how they communicate, etc.,” says Charles. “One could view this as a new form of data capture for people, and we’d no longer be restricted to taking just still images or video of people, and constrained to only play back existing recordings.”

Whether a better personal assistant, a way to interact with historical figures long gone, or a chance to meander endlessly through Middle Earth with an elven avatar as your guide, the Virtual Immortality project has opened one doorway to the future.

“We are living in a world surrounded by computers,” observes Charles. “Any improvements in how we interact with them would certainly be of benefit to us.”


Copyright © 2018 Science Node ™  |  Privacy Notice  |  Sitemap
