How do animators (CG and hand-drawn) match the characters’ mouths to the voice (or is it vice versa)?






Frankly, it’s just really slow and really hard.

They listen to the voice actor’s recorded audio, then match the basic shapes of the lips, teeth, and tongue to the individual sounds (linguists call them “phonemes”), timing the mouth animation to the audio.
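As a rough illustration of that idea (not any studio’s actual pipeline), the core of it is a lookup from phonemes to a small set of reusable mouth shapes, often called “visemes.” The phoneme symbols and shape names below are made up for the example; a minimal sketch in Python:

```python
# Toy phoneme-to-viseme lookup: many phonemes share one mouth shape,
# which is why a handful of drawings can cover most speech sounds.
# The symbols (ARPAbet-style) and viseme names here are illustrative.
PHONEME_TO_VISEME = {
    "AA": "open",  "AE": "open",  "AH": "open",  "HH": "open",
    "IY": "wide",  "EH": "wide",
    "UW": "round", "OW": "round",
    "M": "closed", "B": "closed", "P": "closed",
    "F": "teeth_on_lip", "V": "teeth_on_lip",
    "L": "tongue_up", "TH": "tongue_up",
    "SIL": "rest",  # silence / pause
}

def viseme_for(phoneme: str) -> str:
    """Fall back to a neutral 'rest' shape for anything unmapped."""
    return PHONEME_TO_VISEME.get(phoneme.upper(), "rest")

if __name__ == "__main__":
    for p in ["HH", "AH", "L", "OW"]:   # the word "hello"
        print(p, "->", viseme_for(p))
```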

This is easier in 2D than 3D, because a few simple shapes can represent most of the necessary sounds. While they used to animate mouths along with the rest of the face, and some still do for particularly expressive scenes, most anime and cartoons use something called “mouth flaps”: they just match the sound to a stock shape and swap it in as needed. If you have a more complex mouth, like, say, the animals in The Lion King, this becomes much harder or outright impossible, and you have to do it all frame by frame.
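A minimal sketch of that “mouth flap” swapping, assuming you already have phoneme timings for the dialogue (e.g. from a forced aligner; the timings, shape names, and frame rate here are made up for illustration):

```python
# Minimal sketch of 2D "mouth flaps": for each animation frame, find the
# phoneme active at that moment and show the matching mouth drawing.
# Timings would normally come from the recorded dialogue; these are made up.

FPS = 24  # frames per second of the animation

# Tiny phoneme -> mouth-shape lookup (same idea as the earlier sketch).
SHAPES = {"HH": "open", "AH": "open", "L": "tongue_up", "OW": "round", "SIL": "rest"}

# (start_seconds, end_seconds, phoneme) for the word "hello"
phoneme_track = [
    (0.00, 0.08, "HH"),
    (0.08, 0.20, "AH"),
    (0.20, 0.28, "L"),
    (0.28, 0.45, "OW"),
    (0.45, 0.60, "SIL"),
]

def mouth_for_frame(frame: int) -> str:
    """Pick which mouth drawing is shown on a given animation frame."""
    t = frame / FPS
    for start, end, phoneme in phoneme_track:
        if start <= t < end:
            return SHAPES.get(phoneme, "rest")
    return "rest"

if __name__ == "__main__":
    for frame in range(int(0.6 * FPS)):
        print(f"frame {frame:2d}: {mouth_for_frame(frame)}")
```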

3D animation is typically much more fluid. Even for scenes where it’s just people talking, animators need to animate the mouth on every frame so it looks smooth and natural. Sometimes they get help from motion-capture tech, especially on high-budget productions like Avatar, where the 3D character’s face is mapped onto the actor’s facial performance more or less automatically in the computer.
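In 3D rigs, mouth poses are commonly driven by blend shapes (morph targets) whose weights are keyed and interpolated every frame, which is what makes the motion smooth rather than snapping between a few drawings. A minimal sketch of that interpolation; the shape name, keyframes, and weights are invented for the example:

```python
# Sketch of per-frame blend-shape animation: weights for one mouth shape
# are keyed at certain frames and linearly interpolated in between.
from bisect import bisect_right

# (frame, weight) keys for a hypothetical "jaw_open" blend shape
# (0.0 = shape off, 1.0 = fully applied)
jaw_open_keys = [(0, 0.0), (4, 0.8), (8, 0.2), (12, 0.9), (16, 0.0)]

def weight_at(frame: float, keys) -> float:
    """Linear interpolation between the surrounding keyframes."""
    frames = [f for f, _ in keys]
    i = bisect_right(frames, frame)
    if i == 0:
        return keys[0][1]
    if i == len(keys):
        return keys[-1][1]
    (f0, w0), (f1, w1) = keys[i - 1], keys[i]
    t = (frame - f0) / (f1 - f0)
    return w0 + t * (w1 - w0)

if __name__ == "__main__":
    for frame in range(0, 17, 2):
        print(f"frame {frame:2d}: jaw_open = {weight_at(frame, jaw_open_keys):.2f}")
```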

Oftentimes both 2D and 3D animators will watch recorded video of the voice-acting session to get a more holistic feel for how the actor is emoting, both in voice and in face/body. You can see this a lot in old Disney movies, where making-of features show the actor and their animated character side by side, with the expressions and vocal mannerisms captured with incredible skill.
