What makes different programming languages “better” than others? Or more powerful? Why have different languages developed over time? Are they all based on the same thing?


78 Answers

Anonymous 0 Comments

It depends on what you mean by “powerful.”

If you mean “Can some languages do things others can’t?” then it may help to understand a bit about how a program works and what a programming language is.

For a computer to do anything, certain circuits have to have voltages set in exactly the right way, which causes other voltages to be set in other ways. Here’s a mechanical one you can see operating with levers:

In your computer, it’s done with electrons, so it’s much faster (and also a lot smaller).

In the early days of computing, you had to flip switches to get the initial voltages set up the right way; you can watch the first minute or so of this: https://www.youtube.com/watch?v=Sr9mmsLQmYs

Eventually there were punched cards, which worked because electricity flows where there’s a hole and doesn’t flow where there’s no hole, and again you’re setting voltages in hardware.

The problem is that a program that looks like a page of 10010101010010010100101001010100101 is very hard for a human to read, so “assembly language” was created. It looks like this: “add $t2, $t0, $t1”. You put values in registers ($t0 and $t1 here), and the instruction adds them and stores the result in another register ($t2). But it’s still not good for long or complicated programs.

To make programs more readable still, compilers were created. Now you can write “a = b + c” and hand it to the compiler, which will turn it into “add $t2, $t0, $t1”, give that to the assembler to turn into “10010101010010010100101001010100101”, and then it goes onto the hardware and runs.

So, whatever language you write in, in the end it’s a bunch of voltages in circuits that actually runs the program. (Some languages are not compiled, they are run in a program called an “interpreter” that executes them directly, but the interpreter is itself compiled.) All modern general-purpose programming languages are equally powerful in terms of “what they can do.”

However, different languages have different strengths and weaknesses. For example, in C, if you have a space to hold someone’s name, you have to give it a size, writing something such as “char lastname[10]”, so you have room for ten characters. BUT: (1) in C, a string of characters always ends with a special null character that marks its end, so really that’s only room for 9 letters, and (2) it won’t ever get bigger. So if someone named “Fitzpatrick” comes along, the name gets chopped off to “Fitzpatri”, which isn’t good. In a language like Python, you can just have a space called “lastname” and it’ll hold however many characters somebody puts in, automatically growing as needed. You might be thinking “Well, Python is clearly better!”, but in the actual hardware of the computer, a space can’t get bigger like that. Python has to do a lot of work behind the scenes: it makes a space for a name, and if a new name doesn’t fit, it has to make a new, bigger space and then move everything from the old space to the new one, and that takes time. So while Python is easier to write programs in, programs written in C often run faster.

And there are other things, too: in many languages, you have an “integer” type, which you can use to hold counting numbers. But you can make a programming mistake, such as “day = get_month”, and accidentally put the value for the month in the space for the day. In a language like Ada, you can say “day holds an integer of day_type, and get_month returns an integer of month_type”, and then if you make that mistake the compiler will stop and report an error: you just stored a month where a day was expected. That makes writing the program harder, because you have to declare different kinds of integers, but it also means that when you’re done there are likely to be far fewer bugs, because the compiler can do extra checking. If you’re in a situation where “if this program doesn’t work right, people will die”, then you’re likely to want a language like Ada, where the extra effort is justified.

If by “more powerful” you mean “I can write a program to do something in less time,” then a language like Python is obviously better than assembly language. But using a language like Python means you pay a speed penalty compared to C, which maybe isn’t a problem if you have to do 100 things, but what if you have to do a trillion things? And using a language like Ada means you write your program more slowly, but if people could die from your mistakes, you *should* be as careful and methodical as possible.

Anonymous 0 Comments

Each language is like a bag of tools. Different bags have different tools in them. Some bags are better for some problems than others.

Plumbing tools are good for plumbing, electrical tools are good for electrics. Some are good for lots of things, but not excellent at anything, others are excellent at one thing but useless for others. Some have tools that require expert training to use, some could be used by a toddler.

Anonymous 0 Comments

An example:

Some languages (none of the currently widely used ones) don’t have loops or the concept of repetition; instead they have go-to-step-*x* commands. Others don’t have “goto” commands, and others have both.

An exponentiation program without loops:

1. Ask user for $base.
2. Ask user for $exponent.
3. Set $result to 1.
4. If $exponent is 0 go to 8.
5. Subtract 1 from $exponent.
6. Multiply $result by $base.
7. Go to 4.
8. Display $result to user.

Equivalent program with *built-in* “language-level” loops:

– Ask user for $base.
– Ask user for $exponent.
– Set $result to 1.
– Do the following $exponent times:
  – Multiply $result by $base.
– Display $result to user.

Some language features that aren’t in *every* language include:

– the ability to detect programmer errors before running,
– the ability to handle errors *while* running (sometimes a language forbids you to do things that *could* be errors even if they aren’t; that has pros and cons),
– the ability to use emojis as variable names,
– the ability to trade development effort for fine-grained control,
– the ability to use real (“decimal”) numbers as opposed to just integer numbers (JavaScript kind of *only* has real numbers),
– the ability to use very big numbers,
– the ability to define your own data types or control structures,
– automatic vs. manual memory management.

I could go on.

> Are they all based on the same thing?

Not *really*. You can think about algorithms in different ways, and languages reflect that; these ways are called “programming language paradigms”. For example, “functional programming” is inspired by mathematics, where a function always associates the same result with the same arguments and you can’t reassign variables. “Procedural languages”, on the other hand, just provide some convenient abstraction layers over giving the processor a series of commands: “store this here, then store that there…”.

Similar languages can sometimes be translated into each other. Sometimes information is lost that way, and sometimes the program gets slower because of the translation.

In another sense they are all equal as long as they are “Turing complete”, which means that, in theory, any of them can compute anything that is computable at all; you can write the same programs in any of them. In practice, programs in some languages will run slower than in others, because it’s very difficult for the translation program to find an optimal machine-language equivalent.

You can invent your own programming language! You just have to describe very precisely what you want the processor to do, and then create a translation program (a “compiler”) that translates your description into machine language, or into another programming language that already has a compiler.

Anonymous 0 Comments

They are literally languages.

Same way that we have different human languages, dialects, even ways of speaking for different purposes (formal, informal, technical, simple, etc.), we have different programming languages.

We have languages for children (e.g. LOGO, BASIC).

We have languages more suitable for specific areas of mathematics (FORTRAN, Prolog, Haskell, R).

We have languages that are very efficient at conveying the most technical information exactly (e.g. C) but which are difficult to understand.

We have languages that are easy to understand and learn, but in which it takes longer to say or do things.

We have languages for every kind of niche, purpose, design, intention, etc.

You wouldn’t use the same language in a court of law as when explaining to your kids, and you wouldn’t use the same language when talking to your petrol-head friends or rocket-scientist colleagues as when you’re explaining the exact same concepts to your grandma. For an English speaker, it’s easy to learn a Latin-based language but comparatively very tricky to learn Chinese. It’s more difficult to learn a second language that’s different from the one you already know, but easy to learn a similar one. And so on.

It’s the same with programming languages – and they are called languages for a reason. You have to learn to think and talk in them, to explain the same concept but using a different grammar, and often to translate between them and also translate back to your native language (e.g. English).

You’re trying to explain to a foreigner (the computer) how to do the thing you want it to do (execute your instructions) and you need to do it in a language which you both understand well enough for you to explain and for the computer to understand. And such languages will have trade-offs in terms of how quick it is to program (explain what you want it to do) as well as how fast the computer can actually execute the task you want it to (how fast it can “understand” what you want it to do), plus how fluent you are in the language you choose to use.

Additionally, ALL computer languages are further translated AGAIN into actual machine code, by the compiler/interpreter. So you’re actually talking through many layers of translation, which can slow things down, complicate matters, or in some circumstances mean that it’s easier for you to “learn to speak” machine code than to explain what you want the computer to do in an intermediary language that will slow everything down.

Hell, even the “language” of CPUs is different to the language of GPUs, and even between different CPUs!

It’s generally believed that learning a programming language is as difficult, and affects many of the same areas of the brain, as learning a new foreign spoken/written language. They are all just about trying to establish effective communication in a stranger’s language.
