How does dubbing work in live-action movies?


Suppose you’re shooting a movie. The actors do their work, and you also record their voices while they’re acting. Additional audio stuff like music and sound effects is added later.

But now suppose you want to dub the movie in another language. You can’t just slap music and stuff onto the project, but you (somehow) need to remove the voices of the original actors and then slap those of the new voice actors onto the film. Except if you cut that out, you’d also have to cut out all environmental noise, etc. And if you do that, you’d basically have to recreate every single sound required.

So how exactly does this work? Are movies shot with and without sound simultaneously? Or is there some technological means to separate the sound from the image?

In: Technology

A lot of movies are dubbed over anyway in their native language. They film the scene, and then many scenes are re-dubbed by the same actors for clearer audio. The way the audio track is put together, almost everything is modular: sound effects, music, dialogue, etc. are created in post-production and moved around as needed.

Generally, when a movie is being edited, the sound mixer will keep the dialogue separate from the other sound effects so that they can make a version of the movie without dialogue.

The visual and audio streams are recorded simultaneously, but on different channels. A lot of the “ambient” sounds you hear are also added later.

You can also record music in different channels and mix them afterwards (that’s what happens with those “virtual choirs”).

The clapper board is used to align everything to the same starting point for each scene.
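As a toy illustration of that alignment step (made-up sample values, not a real editing tool): the clap produces one sharp spike in the separately recorded audio, so an editor (or auto-sync software) can find that spike and line it up with the frame where the clapper closes.

```python
# Toy sketch: the clap is the loudest transient in the recording, so we
# can find its position and treat it as time zero for syncing to picture.
def find_clap(samples):
    """Return the index of the loudest sample (the clap transient)."""
    return max(range(len(samples)), key=lambda i: abs(samples[i]))

recording = [0.01, 0.02, 0.9, 0.1, 0.05]  # hypothetical samples; clap at index 2
offset = find_clap(recording)
aligned = recording[offset:]              # audio now starts at the clap
```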

MOST of the environmental sound in most movies is in fact added in later by foley artists.

Not always the case for every movie or even every scene, but quite often it’s added later. Even background birds, traffic, etc. You try to isolate the actors. Sometimes you have to get them to dub their lines in the same language if there’s a noisy scene, or there’s something that couldn’t be cut around in editing.

>need to remove the voices of the original actors and then slap those of the new voice actors onto the film. Except if you cut that out, you’d also have to cut out all environmental noise

No, you just cut out the voice track and leave the environmental track intact. Then you put in a new voice track.

Sound in movies is built in layers/channels. So on location, you want to record the actor(s) as isolated as possible. But in most movies, there are plenty of scenes where they need to record the dialogue again in a studio because of noise or unwanted sounds. You more or less have the actors dubbing their own dialogue again.

So, in the studio, you build up the sound with background noise, dialogue, effects and so on and can therefore simply disable the layer/channel the “real” dialogue is in and record it in a different language.
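A toy sketch of that layering idea (the stem names and sample values are made up, and real mixes are done in a DAW, not like this): each layer is just a sequence of samples, the final mix is their sum, and muting the dialogue layer leaves a music-and-effects bed you can dub over.

```python
# Toy model of a layered film mix: each stem is a list of samples,
# and the final mix is the sample-by-sample sum of the stems.
def mix(stems, muted=()):
    length = max(len(s) for s in stems.values())
    out = [0.0] * length
    for name, samples in stems.items():
        if name in muted:
            continue  # this layer/channel is disabled
        for i, value in enumerate(samples):
            out[i] += value
    return out

# Hypothetical stems (real ones would be long arrays of audio samples).
stems = {
    "dialogue": [0.5, 0.5, 0.0, 0.0],
    "ambience": [0.1, 0.1, 0.1, 0.1],
    "music":    [0.2, 0.2, 0.2, 0.2],
}

full_mix = mix(stems)                      # original-language version
bed = mix(stems, muted={"dialogue"})       # everything except dialogue
new_dialogue = [0.4, 0.4, 0.0, 0.0]        # the dub, recorded separately
dubbed = [b + d for b, d in zip(bed, new_dialogue)]
```

The `bed` here plays the role of the "everything but dialogue" export that gets sent out for foreign-language dubbing.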

An example is pretty much every club scene you've ever watched: they're recorded without music to get the dialogue clear enough, and the music is added later.

Dialogue is recorded live, completely separately from the video. It is mixed in later, and the actors often re-dub it later if needed. Sound effects, music, and background noise are all added in later.

There is a great John Travolta movie from the '70s called Blow Out, in which he plays a film audio engineer. It shows a lot about how the process works.

Most movies will have the great majority of audio recorded in post-production, meaning the actors go into the studio to dub themselves afterwards. The audio recorded during the live scenes is used as a 'guide' for the actors to dub against.

This means that the mix engineer has control over the levels of everything separately. So usually, when exporting the audio back to the client, the engineer exports a track which contains the whole mix without the dialogue, exactly so that the film can be dubbed into other languages while maintaining all the other aspects of the original audio, including the dips in level and so on.

Listen to the recent “Twenty Thousand Hertz” podcast episodes on Foley Artists. Almost every sound you hear in a major studio movie is added or enhanced in post production.

I hosted a TV show in Germany that had a location shoot, and had global releases with dubbing over my voice in a dozen languages or so. I even overdubbed some of my own dialogue later in a studio.

One day on set I saw the soundman recording whilst everyone else was at lunch.

I asked him about it afterwards and he explained that he was recording the silence – the room tone – of the space we were recording in, because he’d be able to extrapolate information from that to mix overdubs correctly, etc. I have literally no knowledge about what he was doing beyond that very top level explanation, but it’s clear that the sound production even for my relatively simple TV series was more complicated than just taking the feed from my personal mic.

I am sure Hollywood films must do this and then some!

I work in audio post production. In almost all movies/TV shows/Documentaries/etc, pretty much every sound you hear that isn’t dialogue has been added in post. All the atmosphere sounds, footsteps, any time someone puts a cup down or opens a door. Gunshots, explosions. Everything.

Sometimes even the dialogue is replaced for various reasons (audio quality/lines changed/etc), this is recorded in a studio and is called ADR (automated dialogue replacement).

Having all these things separate means that the person doing the final mix has much greater control over all the elements and can create a richer soundscape overall.

As well as a full mix, one of their deliverables is what’s called an M&E stem. This stands for music and effects, and contains a mix of everything except dialogue. This is what is sent to other studios worldwide to be dubbed into other languages.

There are sounds that were recorded in camera, sounds added in post like explosions, and sounds that are mimicking what was recorded on set.

There is a whole process where post production sound crews mimic all the sounds that were filmed on set to be used as a bed for the dubbed dialogue to play on top of.

When dubbing you use all the sounds added in post like music and effects, you add in your new dialogue, and you add in your mimicked sounds.

These mimicked sounds are commonly referred to as “MnE”

When mixing the sound for a movie, the dialogue, music, sound fx, and Foley are all on their own tracks. When a final mix is created, they also export what are known as “stems”.

These stems are the final mix of every individual sound, exported into the aforementioned layers (dia, sfx, Foley, music).

When the movies or shows go to another country, the dialogue track is removed and they have the sfx, music, and Foley. They can then add the voices in the languages of choice.

* I’m a sound utility in Atlanta, and I went to school for sound engineering for film at SCAD

Every piece of sound is on a different audio track: Voices, background, sound effects. So each of those sounds can be turned on and off separately.

Music, dialogue, and effects are three separate sets of audio reels prepared by sound editors, with a dedicated editor for each: a music editor, a dialogue editor, and an effects editor. There is also a re-recording mixer for each of those three classes of audio source. Original dialogue recorded on set by a mixer and a boom operator is part of the dialogue tracks. Those tracks are enhanced by looping: re-recording the original dialogue for better sound, frequently to remove ambient noise such as airplanes, barking dogs, loud traffic, etc. Sound editors and dialogue editors are different folks in different buildings, so changing the dialogue and/or the effects has no effect on the other classes. They are mixed together later by the re-recording mixers. That makes the problems you imagine go away.

(Brand new here. I posted this to a commenter not to you. Noob error.)

As people have said, the recordings are multi-track so you can isolate ambient from music from effects from dialog.

If you don’t have multi-track, you can often isolate the vocal track out by being clever. There are tools that work, essentially, by using phase cancellation to remove the centered vocals. You see it used here and there. Kind of like SAP, where you can still hear a bit of the original in the background.
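A minimal sketch of that cancellation trick with toy numbers (assuming the vocals are panned dead center, which is the usual convention): centered dialogue is identical in the left and right channels, so subtracting one channel from the other cancels it while leaving side-panned material behind.

```python
# Crude "karaoke" filter: centered vocals are identical in both stereo
# channels, so left-minus-right cancels them (toy samples, not real audio).
def remove_center(left, right):
    return [l - r for l, r in zip(left, right)]

voice = [0.5, -0.5, 0.5]   # panned dead center: same in both channels
amb_l = [0.2,  0.1, 0.0]   # left-side ambience
amb_r = [0.0,  0.1, 0.2]   # right-side ambience
left  = [v + a for v, a in zip(voice, amb_l)]
right = [v + a for v, a in zip(voice, amb_r)]

# The voice cancels out; only the left-right ambience difference survives.
residual = remove_center(left, right)
```

The flip side, as the comment notes, is that anything else sitting in the center (bass, kick drum, centered effects) gets cancelled too, which is why a bit of the original often leaks through.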