eli5: How is C still the fastest mainstream language?


I’ve heard that lots of languages come close, but how has a faster language not been created for over 50 years?

Excluding assembly.


19 Answers

Anonymous 0 Comments

Imagine I ask “How long is the fourth Lord of the Rings film?” Most modern languages would just refuse to answer and say “there is no fourth LotR film”, whereas a C program would blindly try to look up the fourth LotR film, find Lost in Translation instead, and happily answer “102 minutes”.

C will, by and large, assume you know what you’re doing and do exactly what you tell it to, and it’ll assume that any weird corner case you haven’t bothered checking for can’t happen. Having this level of control is great for performance, but _having to take_ that level of control is terrible for safety and security.

Conversely, most modern languages have a bunch of automatic checks in place for a lot of those corner cases (like the “trying to read past the end of a list” problem I alluded to with the films). Fundamentally, there’s no free lunch here, and performing those checks means your program is doing more work. Doing more work is always slower than not doing it, so a language that forces those checks can never be as fast as a language that doesn’t.

Because those modern languages were created with the benefit of hindsight, we know that safety and security matter, and that those checks are quite literally the difference between a program crashing when it detects a problem, or letting a hacker read all your data because it missed the problem. We know that programmers aren’t superheroes, and we make mistakes, and we overlook some of those weird exceptional cases, and we all around need as much help as we can get if we’re to write complex software with as few nasty bugs as possible. So modern languages are _deliberately_ slower than C, because we’ve collectively agreed that the benefit justifies the cost.

Also, it’s easy to forget that C is incredibly fast precisely _because_ it’s been around for decades. Mainstream C compilers are incredibly mature, and there’s centuries of work-hours of research and development poured into making those compilers generate better code. Newer languages that don’t build on that work just have a lot of catching up to do.

Anonymous 0 Comments

You can’t get faster than assembly, because that’s what the CPU executes natively.

C is basically “portable assembly”. Most of the time you can get probably 95%+ of the performance of writing things in architecture-specific assembly. And for anything large, the compiler’s output often ends up better than what humans would write, because it’s very hard to write optimal assembly. So there isn’t (usually) a lot of room to improve performance. And you can easily embed ASM code right into a C program when you need to.

You could probably improve on C in various ways, but it has a HUGE amount of inertia. There are huge projects like the Linux kernel and tons of embedded systems code written in C, lots of available tooling (basically every platform ever has a C compiler), and almost every OS providing a C API to their services. And almost every programming language has a way of interfacing with C libraries, because so many things are standardized on that for interoperability. And C itself has gotten a bunch of improvements over the last 40 years. So you’d have to create something that is *so much better* than C that you’d convince everyone (or at least a large chunk of the people currently using C) to abandon a ubiquitous standard that they know works for your unproven new thing. Nobody has managed to do that. Rust is the latest contender and may actually start cutting into the systems programming niche. But we’ll see.

Anonymous 0 Comments

A big advantage is that C compilers are ridiculously mature. If there’s an automatic optimization that can be done, the compiler can probably do it. That does a lot to make the language faster.

Anonymous 0 Comments

Maybe this will help you understand the relationship between low-level languages (like C) and higher-level languages.

Think of assembly as working with words. These words are the basic units the processor knows how to execute. Simple actions. These are usually verbs, like: read, store, increment, shift, jump, etc.

Think of low-level languages (like C) as working in sentences. Small, basic concepts. Things like: read memory from here and store it there, read a value and increment it, promote this 8-bit value to a 32-bit value, etc.
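
A few of those “sentences” written out in actual C (illustrative helper names, not from any real codebase):

```c
#include <stdint.h>

/* "Read memory from here and store it there, read a value and increment it." */
int copy_and_increment(int here) {
    int there = here;  /* read from here, store it there */
    there++;           /* read the value and increment it */
    return there;
}

/* "Promote this 8-bit value to a 32-bit value." */
uint32_t promote(uint8_t small) {
    uint32_t wide = small;  /* zero-extend 8 bits to 32 bits */
    return wide;
}
```

Each of these “sentences” typically translates to only one or two machine instructions, which is the point of the analogy.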

Think of high-level languages (like Java) as working in paragraphs. Larger concepts can be written without writing each individual sentence. Things like: Iterate over a collection of items performing an action on each one, run a block of code in a different thread of execution, slice a two-dimensional array to retrieve a particular column, blit a memory buffer to a display buffer, etc.

At some level, all languages are translated to the equivalent machine code. Paragraphs are broken up into individual sentences and sentences are broken down into words. The farther away from words you start, the more likely that the translation efficiency suffers. This leads to more words to say the same thing.

C is efficient because it’s almost as close to programming in words as possible (in this analogy). Translation of its sentences to words is straightforward and efficient.

It’s not an easy task to improve its performance, because it’s already about as close to writing in words as we can get while still maintaining an environment where programs can be written in a more user-friendly way.

Anonymous 0 Comments

I think you may have answered your own question without realizing it.

Assembly runs on bare metal, directly driving machine code. The obvious issue is that it requires knowing every instruction in the processor’s architecture, and it’s really not portable to other architectures.

There’s a lot of history that’s important to understanding the next bit here, and I’ll try to make it easy.

1970s. Ken Thompson and Dennis Ritchie were trying to build an OS for the 18-bit PDP-7, doing so in assembly. However, the 16-bit PDP-11 soon came out, and because of the speed increase it was worth mostly starting over (as there really wasn’t a way to port between the 18-bit and 16-bit machines). This would set the stage.

Ritchie switched to the PDP-11 and, partway through development, realized that coding directly in assembly would tie the OS to this one processor. Recognizing that this would mean writing a new OS for every processor, and that hardware was speeding up, he pivoted to creating a programming language whose compiler would translate programs into PDP-11 assembly. The OS, named UNIX, was then built in this language, named C.

This worked because C doesn’t do much. It mostly condenses common machine-code instruction sequences into simpler statements (imagine if asking to display a single character in 28-pixel Arial meant manually driving the display pixel by pixel, with a specified subpixel brightness value for each one, rather than just telling the language to refer to a library for the pixel mapping and font vectors).

But then there were other processors. And he wanted UNIX to work on them. So the answer was to make compilers for different processors that were compatible with the programs coded in C.

This way you could make source code in C, and as long as you had a compiler for C (which was light to make as it was built on a very simple PDP-11), your source code would run.

Now here’s what matters.

The PDP-11 was CHEAP. Only about $20,000 at the time, which was roughly $50,000 less than the PDP-7. While it wasn’t preferred for mainframes or similar, it was cheap enough to get into the hands of researchers, academics, and smaller companies entering the computational space. Hundreds of thousands of units sold, and the instruction set became so well understood in the field that it influenced the architectures of companies like Intel and Motorola. The 16-bit Intel 8086, installed in the original IBM PC (which made personal computing mainstream), and the Motorola 68000 (Sega Genesis, Mac 128K, Commodore Amiga) both drew on ideas popularized by the PDP-11. It also meant compiling C was nearly plug-and-play: even though those newer processors had many more instructions available, C programs still worked, because C’s simple machine model mapped cleanly onto them.

This led to more programming in C, because those programs were inherently portable. And if new instructions on new processors completed a function more efficiently than the old ones, it was easy enough to just remap the C compiler for that processor to use those instructions.

68000 processors carried forward, and the 8086 gave us the endlessly undying x86 architecture. A C compiler continues to work on both.

The important bit is the x86 architecture. The IBM PC was a godsend. It standardized home computing as something reasonable for any small business. Operating systems sprung up. UNIX spawned a BUNCH of children, most importantly Linux and its million derivatives, plus later versions of Mac OS; Windows came along too, and all of them were built in C.

And that’s sort of where the story gets stuck. There’s no drive to walk away from C. It’s so well-adopted that it’s driven processor development for decades. The processors are built based on how they can best interface with C. **It’s impossible to do better than that.**

ARM and other reduced-instruction-set platforms can’t get away from it either, because portability matters. They have to support C compilers, so you can stuff Java on a RISC chip. As such, RISC architectures are going to stay compatible with the most basic C implementation; they’re essentially just a modified PDP-11 machine model stuffed onto faster hardware at this point.

So while COBOL and ALGOL and other languages are similarly efficient, the architectures they run best on aren’t popular enough to make the languages popular.

Anonymous 0 Comments

I would argue the premise of your question is fundamentally flawed.

Assembly isn’t inherently fast. It’s lower level. A skilled engineer can write something fast in it, but most engineers will probably write something that performs worse than if they had written it in another language. Their code will probably contain bugs, too.

Compilers are extremely smart, and one of their jobs is to optimize code. The reality is that the vast majority of people can’t outsmart the compiler; trying to optimize by hand is not going to be time well spent.

In the real world, programmers should worry about algorithms. For example, if you need to compute something based on an input of “n” items, writing code that does it in n^2 steps rather than n^3 is probably a better use of time than rewriting in a lower-level language.
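
A quick sketch of that point (a hypothetical example, not from the answer above): both functions below compute the sum of products of all distinct pairs, and picking the better algorithm buys far more than switching languages would.

```c
/* O(n^2): compare every pair directly. */
long pair_products_slow(const int *a, int n) {
    long total = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            total += (long)a[i] * a[j];
    return total;
}

/* O(n): one pass, using the identity
   sum_{i<j} a_i*a_j = ((sum a_i)^2 - sum a_i^2) / 2. */
long pair_products_fast(const int *a, int n) {
    long sum = 0, sumsq = 0;
    for (int i = 0; i < n; i++) {
        sum   += a[i];
        sumsq += (long)a[i] * a[i];
    }
    return (sum * sum - sumsq) / 2;
}
```

No compiler will turn the first version into the second for you; that rewrite is the programmer’s job, in any language.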

The choice of language these days ought, for most people, to be driven by purpose and need rather than by what’s the absolute fastest.

There are definitely applications where you need to write in C or assembly, but these days those are few and far between.

I say this as someone who has written compilers.

Anonymous 0 Comments

C isn’t necessarily the fastest anymore. There’s a bit of contention for that title right now with the much younger Rust language.

As to why C is so fast compared to most others: when a program is written in C, a bunch of computation is handled up front, once, in a step called “compilation” (the translation of human-readable code into computer-readable binary; not all languages do this, and it’s one of the major differences between slow languages and fast ones), and the compiler (the program that does the translating) for C is very smart, looking for ways to optimize your code while it’s compiling.

Anonymous 0 Comments

This is a lovely question and I created my account just to answer that.

C is a bare-bones language. Putting the 50 years of compiler optimisations aside, it lets you do everything you want, as you want. Do you want to subtract 69 from the letter “z”, multiply it by 5, and see which character you end up with? Be my fucking guest. It fully “trusts” the developer; it doesn’t check for anything. It doesn’t tell you that “you fucked up something”, it doesn’t say “oh, there is a type mismatch in line 479” or “you are trying to reach array element 498,371 in a 3-element array”. It basically converts your human-readable code into machine code, and then executes it. If you fuck up, you fuck up. It doesn’t care if you are going to fuck up, it doesn’t care if you are fucking up, it doesn’t care if you did fuck up. It has one and only one goal: “do as the programmer says”. Which could be the greatest memory leak in human history, but it does not care.

For other programming languages, you have tons of safety schemes, keywords, accessibility features, etc., which make the language “safer” and “easier to code in”, but which cost performance.

Think of it like this: you want to book three rooms in a hotel (memory allocation). In most modern languages, you get exactly three rooms. If you want extra rooms, you can get them; if you stop using rooms, they will be cleaned and emptied; if you try to check into rooms you did not book, they will stop you from doing so. But if you use C, you can do anything you’d like. Do you want to access room no. 5 without booking it? Be my guest. Do you want to change who owns room no. 19? Be my guest. Do you want to create more rooms by digging a hole into the ground? Be my guest. Do you want to seduce the hotel manager and make him transfer ownership of the hotel to you? Be my guest. Do you want to blow the hotel up? Be my fucking guest.

On top of that, most C compilers have been optimised for 50 years, meaning that even if your code is not optimised, they will optimise it. For example, if you are trying to sum some values in a loop (let’s say i = 1..N), the compiler will detect this and replace the loop with the N*(N+1)/2 formula, which reduces the complexity from N to 1.
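
That optimisation, written out by hand so you can see the two forms are equivalent (a sketch; real compilers recognise this pattern through induction-variable analysis):

```c
/* O(N): what you wrote -- add up 1..n one at a time. */
long sum_loop(long n) {
    long total = 0;
    for (long i = 1; i <= n; i++)
        total += i;
    return total;
}

/* O(1): what the optimiser can emit instead -- Gauss's closed form. */
long sum_formula(long n) {
    return n * (n + 1) / 2;
}
```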

Optimising your code doesn’t mean you can’t do what you want, how you want, though. You can turn off all optimisations while compiling.

Anonymous 0 Comments

There’s a lot of comments on here already, but I really think most of them have missed several key points… Most of these answers definitely are not written by C programmers or hardware engineers. I am both, thankfully, so let’s get started:

I saw one comment touch on this already, so I’ll be brief: Assembly is not *necessarily* fast. It is just a list of steps for the CPU to execute. These are called “instructions”, and modern CPUs have hundreds of instructions to choose from. They can do simple things like “add”, “divide”, “load”, etc. They can even do advanced things, like “encrypt”, or “multiply 8 numbers together at the same time, then add them all to one value”.

Not all instructions are created equal. Some instructions can be executed multiple times in a single “timestep”, called a *cycle* – as in, a processor may be able to execute 4 ADD instructions simultaneously. Whereas other instructions, like DIVIDE, may take several cycles to complete.

Thus, the speed of a program depends on the kinds of instructions it executes. 10,000 ADD instructions would complete a lot faster than 10,000 DIVIDE instructions.

What an instruction means in the context of surrounding instructions also has an impact too. If one instruction depends on the answer of a previous one, the processor cannot execute it simultaneously (*), as it has to wait for the answer to be ready before it can do the next one. So, adding 10,000 distinct number pairs for 10,000 answers is faster than summing every number from 1 to 10,000 for a single answer.
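
A sketch of that idea in C: both functions below add the same numbers, but the second keeps four independent accumulator chains, giving a CPU that can issue several ADDs per cycle something to do in parallel (whether it actually runs faster depends on the compiler and CPU):

```c
/* One long dependency chain: each add waits on the previous one. */
long sum_dependent(const long *a, int n) {
    long s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Four independent chains: the adds within one iteration don't wait
   on each other, so they can execute in the same cycle. */
long sum_independent(const long *a, int n) {
    long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    int i = 0;
    for (; i + 3 < n; i += 4) {
        s0 += a[i];     s1 += a[i + 1];
        s2 += a[i + 2]; s3 += a[i + 3];
    }
    for (; i < n; i++)   /* leftovers when n isn't a multiple of 4 */
        s0 += a[i];
    return s0 + s1 + s2 + s3;
}
```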

This is only scratching the surface of how you can write assembly that runs fast. A skilled assembly programmer has deep knowledge of the interior design of the CPU and its available instructions, how they correlate to each other, and how long they take to execute.

I hope this makes it clear that assembly is not inherently fast; it depends on how you write it. This should be immediately clear once you realize that everything you run *eventually* runs as assembly instructions. If assembly were always fast, it wouldn’t be possible to have a slow program written in Python.

Intro done, now let’s get to C. What do C and other higher level programming languages have to do with assembly?

Programming languages can broadly be separated into two categories – they either compile directly to “machine code” (assembly), or they don’t. Languages like C, C++, Fortran, Rust, and others are part of the first camp. Java, Python, Javascript, C#, and more are part of the second camp.

There is absolutely nothing that requires C to compile down to *good assembly*. But there are many things that encourage it:

1. There is no automatic safety checking for many things. Note that checking something takes assembly instructions, and not doing something is always faster than doing it.
2. There are no things “running in the background” when you write C. Many languages feature these systems built-in to make the programmer’s life easier. In C, you can still have those systems, but they don’t exist unless you write them. If you were to write those same systems, you would end up at a comparable speed to those other languages.
3. C is statically typed, so compilers know exactly what is going on at all times before the program ever runs. This helps the optimizer perform deductions that significantly improve the generated assembly.

The last point is particularly important. Languages in the C camp would be nothing without a **powerful optimizer** that analyzes the high-level, human-readable code and turns it into super fast assembly. Without it, languages like Java and Javascript would regularly beat C/C++/Rust thanks to their runtime optimizers.

In fact, optimizers in general are so powerful that Fortran/C++/Rust can very often be faster than C because of the concepts those languages let you express. These languages let you more-directly write things like a sorting function or a set operation, for example. The optimizer thus knows exactly what you’re doing. Without these higher level concepts, the optimizer has to guess what you’re doing in C based on common patterns.

This also applies to Java and Javascript. They have very powerful runtime optimizers that actually analyze what is happening as the code runs, and thus can make even better logical deductions than what could be attained statically. In rare cases, this can even result in code that is faster than an optimized but generic C equivalent. However, this is only evident on smaller scales. Whole programs in these languages are typically significantly slower due to a combination of the 3 points above.

**C is not fast. Optimizers make it fast. And optimizers exist for many languages.**

PS: C shares the same optimizer with other languages like C++, Rust, and a few others (it’s called LLVM). So equivalent programs written in these languages usually run at the exact same speed, give or take a few %.

(*) Processors can actually execute dependent instructions somewhat simultaneously. This is done by splitting an instruction into multiple distinct parts, and only executing the non-dependent sub-tasks simultaneously. This is called “pipelining”.

TLDR: C is not fast. Optimizers make it fast, and optimizers exist in multiple languages, so the question and many other answers start off with wrong premises.