Why do data interfaces that are serial (e.g. PCI Express or HDMI) have so many pins on the physical connector? If data is not being sent in parallel, shouldn’t you only need one pin each for send and receive, one for ground, and maybe a fourth for clock?


Relatedly, why is PCIe x16, say, faster than PCIe x1? What functions are the extra pins performing if data cannot be sent multiple bits at a time?

In: Engineering

3 Answers

Anonymous 0 Comments

I can’t speak to PCI, but HDMI I know.

So the data is serial, but there are multiple signals. For example, in HDMI there are 3 wires for each of Red, Green, and Blue. Video is made of the 3 additive primary colors, RGB, so each color gets its own signal.

Each of the Red, Green, and Blue signals uses three pins/cables for the balanced (differential) signal being sent: a positive, a negative, and a ground/shield. HDMI video goes one way, so there is no “receive” going back to the source device. Balanced signals are used to decrease interference.
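To picture why a balanced pair helps with interference, here is a tiny illustrative sketch in Python (not how HDMI hardware actually works, just the idea): noise tends to hit both wires of the pair equally, so the receiver subtracts one wire from the other and the shared noise cancels out.

```python
import random

# A purely illustrative model of a differential (balanced) pair.
# The transmitter puts the bit on one wire and its inverse on the other;
# interference tends to hit both wires equally ("common-mode" noise),
# so the receiver recovers the bit by subtracting the two wires.

def send_differential(bit):
    level = 1.0 if bit else -1.0
    return level, -level                 # (positive wire, negative wire)

def add_common_noise(pos, neg):
    noise = random.uniform(-0.4, 0.4)    # the same spike lands on both wires
    return pos + noise, neg + noise

def receive_differential(pos, neg):
    return 1 if (pos - neg) > 0 else 0   # subtraction cancels the shared noise

bits = [1, 0, 1, 1, 0, 0, 1]
received = []
for b in bits:
    pos, neg = send_differential(b)
    pos, neg = add_common_noise(pos, neg)
    received.append(receive_differential(pos, neg))

print(bits)      # [1, 0, 1, 1, 0, 0, 1]
print(received)  # the same bits back, despite the injected noise
```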

So that’s 9 pins for the RGB (and audio, since it’s embedded in the video): three colors, times three signal cables for each. Then there is another set of 3 wires for the clock/timing signal, which keeps all three colors in sync. So that is 12 pins for the main video.

The other 7 pins in HDMI are for various things. A couple of them form a low-speed serial channel (the DDC) back to the source device for things like EDID and HDCP, and one is the CEC pin used for remote-control commands between devices; there is a common ground pin for those channels. One pin is for 5V power, which can be used by adaptors to power a small chip for conversions, like an HDMI-to-VGA adaptor. One is for Hot Plug Detect, a very simple connection between the source and display so they can tell something was plugged in. The last one is a utility/reserved pin; in later versions (1.4+) it and some others were given double duty for things like ARC and Ethernet over HDMI (100 Mbps). The Ethernet part never really took off.

The Wiki page has a pinout list: https://en.wikipedia.org/wiki/HDMI
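As a sanity check against that pinout, here is a rough tally of the 19 pins on a Type A connector in Python. The grouping is my own summary of the description above, so treat the labels as approximate and check the wiki page for the exact pin-by-pin assignments.

```python
# Rough budget of the 19 pins on an HDMI Type A connector.
# The grouping is an approximate summary; see the Wikipedia pinout
# for the exact pin-by-pin assignments.
pin_groups = {
    "TMDS data channels (R, G, B), 3 pins each": 3 * 3,  # +, -, shield
    "TMDS clock channel": 3,
    "DDC serial channel (EDID/HDCP) + CEC": 3,           # SCL, SDA, CEC
    "Utility/reserved (HEC/ARC use)": 1,
    "DDC/CEC ground": 1,
    "+5 V power": 1,
    "Hot Plug Detect": 1,
}

for group, count in pin_groups.items():
    print(f"{count:2d}  {group}")
print(f"{sum(pin_groups.values()):2d}  total pins")  # 19
```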

So that’s all the pins for HDMI. Why not just 2 wires, a Tx and a ground? History and the advancement of tech. When DVI and HDMI 1.0 came out, you couldn’t send the full bandwidth of, say, 1080p video down one single cable, so it was split across multiple. For example, 1080p video needs about 3 Gbps of bandwidth. Around 2000, you couldn’t send 3 Gbps down a single cable: you couldn’t generate it and you couldn’t read it (maybe it was technically possible in 2000, I’m not sure, but there is also a price and mass-market manufacturability consideration in there).
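That 3 Gbps figure is easy to sanity-check with back-of-the-envelope arithmetic (raw pixel payload only, ignoring blanking intervals and encoding overhead):

```python
# Back-of-the-envelope payload bandwidth for 1080p60 video, 8 bits per color.
width, height  = 1920, 1080
frames_per_sec = 60
bits_per_pixel = 3 * 8      # R, G, B at 8 bits each

raw_bps = width * height * frames_per_sec * bits_per_pixel
print(f"{raw_bps / 1e9:.2f} Gbps")   # ~2.99 Gbps, i.e. roughly 3 Gbps
```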

Now, the advancement of tech can get 3 Gbps down a single coax, like 3G-SDI. That is 1080p over a single wire. You can even do 12G-SDI and send 4K60 down a single RG-6 coax. But as the ability to send higher data rates goes up, so does the demand. For example, HDMI 2.1 is 48 Gbps: basically 4 lanes (the three data channels plus the old clock channel, which gets repurposed as a fourth data lane) all running at 12 Gbps (12 x 4 = 48), allowing for 4K120 (see note).

USB-C and Thunderbolt are similar. They are serial data streams, but you have multiple lanes of data; each lane might do, say, 10 Gbps, and then you run 4 lanes in parallel.
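In all of these cases the headline number is just lanes times per-lane rate. A quick illustrative calculation (the USB-C/Thunderbolt figures here are example values only; real links split things up differently):

```python
# Total link bandwidth is just (number of lanes) x (per-lane rate).
def total_gbps(lanes, gbps_per_lane):
    return lanes * gbps_per_lane

# HDMI 2.1: three data channels plus the repurposed clock channel,
# each running at 12 Gbps.
print(total_gbps(4, 12))   # 48 Gbps

# A USB-C / Thunderbolt-style link (illustrative numbers only).
print(total_gbps(4, 10))   # 40 Gbps
```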

Note: FYI, SDI and HDMI have different encoding schemes, so while 1080p video needs 3 Gbps on SDI, it’s more like 4.5 Gbps with HDMI because of various overhead.
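If you want to see roughly where that 4.5 Gbps comes from, here is the arithmetic using standard 1080p60 timing (2200 x 1125 total pixels including blanking) and TMDS’s 10-bits-on-the-wire-per-8-bit-value encoding; treat it as a ballpark sketch:

```python
# Why ~3 Gbps of 1080p60 video on SDI becomes ~4.5 Gbps on HDMI's TMDS links.
# TMDS sends 10 bits on the wire for every 8-bit value, on each of the three
# data channels, and the link also runs during the blanking intervals.

total_width, total_height = 2200, 1125   # 1920x1080 active plus blanking
frames_per_sec = 60
pixel_clock = total_width * total_height * frames_per_sec   # 148.5 MHz

tmds_bits_per_pixel = 3 * 10             # 3 channels, 10 bits each
hdmi_link_bps = pixel_clock * tmds_bits_per_pixel

print(f"pixel clock:    {pixel_clock / 1e6:.1f} MHz")       # 148.5 MHz
print(f"HDMI link rate: {hdmi_link_bps / 1e9:.2f} Gbps")    # ~4.46 Gbps
print("3G-SDI rate:     about 2.97 Gbps")
```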
