why does playing 3 different TVs/media devices, all synced and playing the same thing, sound like 3 “layers” of sound instead of combining into one sound?


Is it because they aren’t TRULY synchronized? Because the different quality of the speakers means they don’t make perfect sound waves that can line up/combine/harmonize?

Are there any instances these sounds WOULD combine, like with some kind of program or piece of hardware? How is this different than something like a concert where you can’t perceive the individual speakers?

(I have a decent understanding of rudimentary physics/sound and a good understanding of hearing/the vestibular system/cranial nerves btw! I’m not actually 5 haha)


Short answer: yes, they aren’t truly synchronized.

Long answer: acoustics are *very* complicated. The EQ on each device could be slightly different, the way sound from each of them bounces around the walls could delay some more than others, the pitch of each could be slightly different, and they probably didn’t start at the *exact* same time. All of these factors keep them from sounding perfectly in unison.
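To put a number on the “not perfectly in unison” part: when two copies of the same signal arrive offset by even a millisecond, some frequencies reinforce and others cancel (comb filtering), so the mix never fuses into one clean sound. A quick sketch of where the cancellations land (plain Python; the function name is just illustrative):

```python
def comb_null_frequencies(delay_s, f_max=5000.0):
    """Frequencies (Hz) where a signal summed with a copy of itself,
    delayed by delay_s seconds, cancels out (comb-filter nulls).
    Nulls occur where the delay equals an odd number of half-periods:
    f = (2k + 1) / (2 * delay_s)."""
    nulls = []
    k = 0
    while True:
        f = (2 * k + 1) / (2 * delay_s)
        if f > f_max:
            break
        nulls.append(f)
        k += 1
    return nulls

# A mere 1 ms offset between two "identical" sources notches out
# 500 Hz, 1500 Hz, 2500 Hz, ... right across the audible band.
print(comb_null_frequencies(0.001))
```

Longer offsets push the notches closer together and eventually turn into a perceptible echo, which is why it reads as “layers” rather than one sound.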

Difficult to say without knowing more about your setup, but I suspect the three devices are linked through a digital connection that isn’t intended for real-time use, so they’re not really in sync. Starting at around 10 ms of delay, sounds begin to get “mushy”.

It only takes about 10 milliseconds of latency to be audible, and sound travels 1 meter in roughly 3 milliseconds. So devices 3–4 meters apart playing synchronized sound will have audible latency. Also, the audio processing for modern compressed audio formats may take different amounts of time on different devices.
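To see those numbers worked out (assuming ~343 m/s for sound in room-temperature air; the helper name is just for illustration):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def travel_delay_ms(distance_m):
    """Milliseconds for sound to cover distance_m in air."""
    return distance_m / SPEED_OF_SOUND * 1000.0

# Path-length differences between speakers and your ears:
for gap in (1.0, 3.0, 4.0):
    print(f"{gap} m path difference -> {travel_delay_ms(gap):.1f} ms")
# 1 m is about 2.9 ms; 3-4 m lands around 9-12 ms,
# which is right at or past the ~10 ms audibility threshold.
```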

In a stadium with multiple speaker systems, the sound engineers use delay equipment to keep all the sound synchronized across the stadium. There are some multi-room music players that can synchronize audio by listening for the other speakers, but not all systems have this. I think there may be some audio apps for phones that can do this as well.

In a perfect world the sound is recorded in the same acoustic environment it will be played back in: one microphone recording for each speaker that plays.

even if you have 3 different speakers perfectly engineered and tuned to produce the EXACT same sound, fed by the same source, you’ll still get a small delay from the furthest one vs the closest one purely due to the distance between each speaker and you. add in manufacturing tolerances, different processes between manufacturers, and differing product designs…

yes, there is hardware (and software) that can account for, and correct, this problem. it ultimately comes down to signal timing: tuning the delays can eliminate the most egregious cases. a sound engineer would be needed to explain much beyond that, which i am not.
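as a rough sketch of what that timing correction boils down to digitally (hypothetical helper names, assuming a 48 kHz sample rate): you hold back the *nearer* speaker’s signal by just enough samples that its sound arrives together with the farther one’s.

```python
SAMPLE_RATE = 48_000      # samples per second
SPEED_OF_SOUND = 343.0    # m/s in air at ~20 °C

def delay_samples_for_gap(gap_m, sample_rate=SAMPLE_RATE):
    """Samples of delay needed to compensate a gap_m path difference."""
    return round(gap_m / SPEED_OF_SOUND * sample_rate)

def align(signal, n):
    """Prepend n samples of silence: a bare-bones digital delay line."""
    return [0.0] * n + list(signal)

# A 2 m path difference needs roughly 280 samples of delay at 48 kHz.
n = delay_samples_for_gap(2.0)
print(n)
```

real delay processors do this with fractional-sample filters and per-output calibration, but the principle is the same.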

as for concert venues, the issue is both far easier, and far harder, depending on the act. for traditional bands (country, rock, metal etc) you’ll often have a single speaker (or set of speakers) designated to a specific instrument or singer. some sound mixing may be employed to balance the overall sound within the venue and/or to the tastes of the musicians/promoters themselves.

for less traditional or non-musical acts (edm, djs, comedians etc) it’s a matter of matching the sound profile to the person/group. for a comedian, you may want the same single-channel (mono) feed sent to every speaker. for edm, you may need each individual speaker to act on its own, even if they are usually clustered into left, right and center groups.

in the case of a comedian with mono audio, 1 speaker being out of sync can be jarring for the performer, let alone the audience. that’s why you’ll rarely see mismatched speakers in auditoriums. you’re just asking for trouble by mixing multiple model numbers, let alone different product lines or, worse, manufacturers.