How do animators (CG and hand-drawn) match the characters’ mouths to the voice (or is it vice versa)?

4 Answers

Anonymous 0 Comments

Frankly, it’s just really slow and really hard.

They listen to the audio of the voice actor, then match the basic shapes of the lips, teeth, and tongue to the individual sounds (linguists call them “phonemes”), keeping the animation in time with the audio.

This is easier in 2D than in 3D, because a few simple shapes can represent most of the necessary sounds. Animators used to draw mouths along with the rest of the face, and some still do for particularly expressive scenes, but most anime and cartoons use something called “mouth flaps”: a small set of pre-drawn mouth shapes that get swapped in to match each sound as needed. If you have a more complex mouth, like, say, the animals in The Lion King, this becomes harder or impossible, and you gotta do it all frame by frame.
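To make the “mouth flaps” idea concrete, here is a minimal sketch in Python of how a small set of reusable mouth drawings might be looked up per sound. The phoneme groupings and shape names are made-up assumptions for illustration, not any studio’s actual chart.

```python
# Illustrative only: a tiny phoneme-to-mouth-shape lookup for 2D "mouth flaps".
# The groupings and shape names are assumptions for the example, not a real chart.

MOUTH_SHAPES = {
    "closed":     ["M", "B", "P"],             # lips pressed together
    "wide_open":  ["AA", "AH", "AY"],          # open vowels
    "small_open": ["EH", "IH", "IY"],          # narrower vowels
    "round":      ["OW", "UW", "W"],           # rounded lips
    "teeth_lip":  ["F", "V"],                  # top teeth on bottom lip
    "tongue":     ["L", "TH", "D", "T", "N"],  # tongue visible or near the teeth
    "rest":       ["SIL"],                     # silence / neutral mouth
}

# Invert the table so each phoneme points at one reusable drawing.
PHONEME_TO_SHAPE = {
    phoneme: shape
    for shape, phonemes in MOUTH_SHAPES.items()
    for phoneme in phonemes
}

def mouth_flap(phoneme: str) -> str:
    """Return the mouth drawing to swap in for a given phoneme."""
    return PHONEME_TO_SHAPE.get(phoneme, "rest")

# "mama" -> M, AA, M, AA, then silence
print([mouth_flap(p) for p in ["M", "AA", "M", "AA", "SIL"]])
# ['closed', 'wide_open', 'closed', 'wide_open', 'rest']
```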

3D animation is typically much more fluid; even for scenes where it’s just people talking, every frame needs to be animated so it looks smooth and natural. Sometimes they can get a little help from motion-capture tech, especially on high-budget stuff like Avatar, where they’ll map the 3D character’s face onto the actor’s face automatically in the computer.

Oftentimes both 2D and 3D animators will watch recorded video of the voice-acting session, in order to get a more holistic feel for how the actor is emoting, both in voice and in face/body. You can see this a lot in old Disney movies, where the making-of section will show the actor and their animated character side by side, with the expressions and vocal mannerisms captured with incredible skill.

Anonymous 0 Comments

They do it both ways depending on the animation.

Generally animations will start with a voice recording, and then the animators will match it, either with automatic software or from experience, knowing that certain sounds come from certain mouth shapes.
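As a rough sketch of that voice-first workflow, here is how timed phonemes could be turned into mouth-shape keyframes at 24 frames per second. The timings are assumed to come from some earlier audio-alignment step (not shown), and the data layout and function names are hypothetical.

```python
# Hypothetical sketch: turning timed phonemes from a voice recording into
# mouth-shape keyframes, assuming 24 frames per second.
# The alignment step that produces (phoneme, start_seconds) pairs is not shown.

FPS = 24

def phonemes_to_keyframes(timed_phonemes, phoneme_to_shape):
    """timed_phonemes: list of (phoneme, start_seconds) tuples, in order.
    Returns (frame_number, mouth_shape) keyframes, dropping repeats so the
    mouth only changes when the shape changes."""
    keyframes = []
    last_shape = None
    for phoneme, start in timed_phonemes:
        shape = phoneme_to_shape.get(phoneme, "rest")
        if shape != last_shape:
            keyframes.append((round(start * FPS), shape))
            last_shape = shape
    return keyframes

# Example: a short line aligned to (phoneme, start time in seconds).
timed = [("M", 0.00), ("AA", 0.08), ("M", 0.20), ("AA", 0.28), ("SIL", 0.45)]
shapes = {"M": "closed", "AA": "wide_open", "SIL": "rest"}
print(phonemes_to_keyframes(timed, shapes))
# [(0, 'closed'), (2, 'wide_open'), (5, 'closed'), (7, 'wide_open'), (11, 'rest')]
```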

In some cases (particularly with dubbing) the animation comes first, in which case the voice actors have to be careful to line up their recording with the animation, and there is a whole art to translating dialogue for dubs so that the new lines not only carry the meaning but also match the mouth movements.

Often animation projects will end up doing a bit of both, using “scratch” voice lines. To help the animators, they’ll get someone to record the voice lines first (which also helps with sound effects, music, etc.), and then once the animation is largely finished it will go to the main voice actor to record the final version. They’ll often try to find a voice actor to do the uncredited “scratch” who sounds like the main voice actor, but who is cheaper and can be brought onto the project much earlier.

This is particularly common with big animated films where the production wants a famous actor to voice a role but wants an experienced (and cheaper) voice actor to do the “scratch.” Sometimes a voice actor will do their own scratch, and occasionally a scratch voice ends up staying in the final version (apparently this happened with the 2022 Pixar film Turning Red, where Rosalie Chiang was hired to do scratch/temporary voice lines for the main role and was good enough that they kept her in the final film).

Anonymous 0 Comments

We literally listen to the recordings of the actors and slowly animate the lip-sync. It gets a lot easier once you’ve done it a lot, but each animator has their own way of doing it. Some do the open/close mouth shapes first (like a puppet), then add details for each sound. Some (like me) will do it “straight ahead,” listening for sounds and matching shapes to them. That’s why every animator has a mirror on their desk: to look at their own face when they say dialogue or make expressions.
You’re not really animating each letter or word, but rather the sound they make together. So let’s say a character’s line is “Can I come over?” You really only need a few shapes from that (closed C, open A, closed A, N that opens into the I, straight into the M, a slight E open shape, V and R).
Hope that makes sense
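As a rough illustration of that last example, here is “Can I come over?” laid out as an ordered list of shapes in Python. The timings are invented; only the shape sequence follows the description above.

```python
# "Can I come over?" reduced to a handful of mouth shapes, following the list above.
# Timings (seconds into the line) are invented for illustration.
LINE_SHAPES = [
    (0.00, "closed C"),
    (0.08, "open A"),
    (0.18, "closed A"),
    (0.28, "N opening into the I"),
    (0.45, "M"),
    (0.60, "slight E open"),
    (0.75, "V"),
    (0.85, "R"),
]

for seconds, shape in LINE_SHAPES:
    frame = round(seconds * 24)  # assuming 24 frames per second
    print(f"frame {frame:3d}: {shape}")
```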

Anonymous 0 Comments

All the responses in this thread may have been true at one time or in smaller studios, but nowadays most animators use specialized software designed to sync lip movements and facial animations to audio files. Some of the better-known programs include JALI, AccuLips, Faceright, CrazyTalk, Faceshift, and so on. Note that some of these programs offer software libraries that can be embedded into video games as well.