What are current best methods for TV-quality lip sync animation?
February 8, 2016 12:15 AM

In 2016, what software and methods do professional animators use for TV-quality lip sync animation with the aim of acceptable quality at the fastest speed?

By "lip sync animation", I mean the matching of lip frames (visemes) to voice over audio. By "TV-quality", I mean the quality found in shows like Family Guy, Rick and Morty, or South Park.

Note this is not a "how do I do this" question. This is a "what does the industry do" question. For example, are studios using automated lip sync solutions? Or is it all done manually for quality reasons? Maybe an automated first pass followed by manual clean-up, like Toon Boom affords? What about motion capture via sensors or camera (e.g. Adobe Character Animator)? I know there are many ways to skin a cat, but what is predominantly used by animation professionals?

A little context: I hacked together some lip sync animation software, and I want to compare it to other solutions to see how good it is before investing more time in it.
posted by ErikH2000 to Media & Arts (4 answers total) 1 user marked this as a favorite
 
I haven't had a chance to watch it yet, but 6 Days to Air chronicles the making of a South Park episode in a week--maybe they go into the production techniques? South Park is all done in-house, as opposed to Family Guy / Bob's Burgers / The Simpsons, which are piped to South Korea for the bulk of the physical animation, so their methods will be radically different.
posted by bluecore at 6:13 AM on February 8, 2016


Best answer: This YouTube video shows how an episode of Family Guy/American Dad is made.

At the point where they discuss your question, they mention that the work is sent overseas for the actual animation. This seems fairly common; it isn't the first time I've heard that. This article describes what it's like to do the actual animation work in Korea for The Simpsons, specifically mentioning that they have 27 different mouths "that can be attached to a stock face figure for talking."

Everything I come across is anecdotal, but the consensus seems to be that most of this is done manually.

It sounds like you might want to focus on small animation studios that provide animation for advertisements, instead of big production studios.
posted by INFJ at 6:17 AM on February 8, 2016


Best answer: Each studio has its own process, but virtually everything you'll see on TV (minus some of the lowest-budget obscure stuff) will have humans working on the lip-sync. Fully automated solutions simply don't know enough about the performance to do a good job of emoting properly. Lip-sync isn't about moving the lips; it's about creating the correct performance.

The projects I've been involved with might run an automated pass early on as a way to create a framework for the animators to work with. This might be good enough for backgrounds and crowds, but not for anything else. (And honestly, crowds and background characters these days have their own custom "act like an extra" library.) Any automation tools require characters to be rigged in a specific way (again, custom to the studio, their process, and their technology). That same setup that makes it possible for the automation tool to work makes it pretty easy for animators to do the same job, only with knowledge of the performance and the ability to apply feedback from their director.
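To give a sense of what that automated first pass amounts to, here's a hypothetical sketch (not any studio's actual pipeline; the frame rate, shape names, and attribute indices are all made up): turn viseme timings into sparse, stepped keys on a rig's mouth-swap control, which the animators then rework against the performance.

```python
# Hypothetical first-pass exporter: viseme timings -> stepped keyframes
# on a rig's mouth-swap attribute. Frame rate, shape names, and index
# values are assumptions for illustration; real studio rigs and formats
# are custom and proprietary.

FPS = 24
SHAPE_INDEX = {"rest": 0, "closed": 1, "open": 2, "round": 3, "teeth-on-lip": 4}

def visemes_to_keyframes(viseme_track, fps=FPS):
    """viseme_track: list of (viseme, start_sec, end_sec) tuples.
    Returns stepped keyframes as (frame, shape_index) pairs, with one
    key at the start of each new mouth shape and no duplicate keys."""
    keys = []
    for viseme, start, _end in viseme_track:
        frame = round(start * fps)
        index = SHAPE_INDEX.get(viseme, 0)
        if not keys or keys[-1][1] != index:
            keys.append((frame, index))
    return keys
```

The point is that output like this is deliberately coarse: it blocks in timing, and everything about the actual acting still comes from the animator.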

Even the highest-resolution performance capture is just a starting point for the animators. (And at the moment, if you have access to super high-res performance capture, you have the resources to throw as many animators as necessary at it.)

Probably the best market for a tool that does automatic lip-sync is live avatars. Entertainment companies are doing VR experiments along those lines, and as far as I know, all of them use in-house tools. There are also some businesses that provide virtual assistant/translation services, where a live-generated avatar speaks either machine-generated or live-translated words.

If you're asking whether you should pursue this as a business, I'll say this: the big players in animation have spent years building custom workflows and the tools to support them. They have full-time people on staff working to solve the same problem you have hacked together a solution for. That said, lip-sync does take a fair bit of time and effort, and small studios might be interested, particularly if it works within their existing (off-the-shelf) workflow. Look to game companies: they produce vastly more animation than non-interactive animators, there are many more players, and they have smaller teams of animators and are more open to automation.
posted by Ookseer at 5:34 PM on February 8, 2016 [2 favorites]


Response by poster: I want to thank everyone who provided answers to my question. I've talked to a few other animation professionals, and at least one thing seems to be consistent: there is no automatic lip sync solution that is the norm for "TV quality" animation, unless you count motion capture. And mo-cap would be used only if the production intended to capture more than the lip movements, and even then it requires manual clean-up.
posted by ErikH2000 at 2:25 PM on February 11, 2016

