What makes different programming languages “better” than others? Or more powerful? Why have different languages developed over time? Are they all based on the same thing?

Anonymous 0 Comments

"Better" is relative to the task, "powerful" is relative to the task, different languages were developed for different tasks, and yes, they are all essentially based on the same thing: machine code.
To provide more context:
Java was made to make it simpler to develop and maintain software for network devices, mostly in the form of embedded systems.
C# was made by Microsoft as its answer to Java.
C++ was made to add higher-level features to C without giving up tight control over resources.
PHP was made because writing web pages in C alone was complicated, and PHP streamlined the process of making web pages.
SQL (the language used by database systems like MySQL) was designed for querying and managing large databases.
And sure, you can manage massive databases in C#, C++, PHP and Java, but SQL does it better. The only time you don't use a hammer when you need a hammer is when you don't have one and have to make do with what you have.

Anonymous 0 Comments

Some languages are better suited to particular tasks than others, basically (any decent programmer ought to be able to pick up just about any language with appropriate training).

As examples: there are times when you want to work directly with actual locations in computer memory. Equally, there are times when that's the last thing you want – you want to be able to install your code on a completely different design of computer, so you DON'T want to be making any assumptions about how your computer works.

There are times when you want to do very mathematical things, so you want a language that understands and supports that sort of thing; but equally there are times when what you want to do is mostly business logic, which tends to be more about shuffling information from (say) one file or database to another, with mostly fairly basic arithmetic.

There are times when you really want to be in charge of how fast the code runs, and even precisely when it does particular things, which may mean doing "clever", complicated things in a language designed for that sort of task – but equally, the biggest programming cost for most businesses is maintaining and changing old code, so you also want it to be as easy as possible to understand and modify.

And ideally you don't want to spend a fortune training every new programmer up in something obscure they've never met before, but nor do you want the (significant) costs that come with using lots of different languages within the same business.

It’s horses for courses, basically. You pick a set of languages that will broadly meet your needs.

Anonymous 0 Comments

Some languages are better suited to particular tasks than others, basically (any decent programmer ought to be able to pick up just about any language with appropriate training). As examples, there are times when you want to work directly with actual locations in computer memory. Equally, there are times when that’s the last thing you want – you want to be able to install your code on a completely different design of computer, so you DON’T want to be making any assumptions about how your computer works. Or there are times when you want to do very mathematical things, so you want to use a language that understands and supports that sort of thing; but equally there are times when what you want to do is mostly business logic, which tends to be more about shuffling information about from (say) one file/database to another, only doing mostly fairly basic arithmetic. Or there are times when you really want to be heavily in charge of how fast the code is working, and even precisely when the code does particular things, which may mean doing “clever”, complicated things in a language designed for that sort of task – but equally the biggest programming cost for most businesses is maintaining and changing old code, so you also want it to be as easy to understand and modify well as possible. Oh, and you don’t want to spend a fortune training every new programmer up in something obscure they’ve never met before, ideally, but nor do you want the (significant) costs that come with using lots of different languages within the same business.

It’s horses for courses, basically. You pick a set of languages that will broadly meet your needs.

Anonymous 0 Comments

Each programming language has different properties, making it better at some things, worse at others – and in the end, it also boils down to the programmer’s preference.

An awesome language that few people know may be useless, because you won't be able to find people to work on your program.

There are already existing pieces of code (called “libraries”) that help you do certain common tasks – e.g. downloading files from the Internet, or talking to a specific device. If there are libraries for doing what you want in one language but not another, you will likely choose the first one because otherwise you’d have to re-create those.

Some of these libraries come with the standard package that you get when you download that language, others you have to find yourself – that can make some languages with bigger “standard libraries” more convenient.
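
As a rough sketch of what that looks like (in Python, with a placeholder URL), the standard library already covers a task like "download a file from the Internet", so you don't have to write the networking code yourself:

from urllib.request import urlopen

# Download a file over HTTP(S) using only the standard library.
# The URL below is just a placeholder, not a real resource.
with urlopen("https://example.com/data.txt") as response:
    data = response.read()

# Save the downloaded bytes to a local file.
with open("data.txt", "wb") as f:
    f.write(data)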

A big part is static vs. dynamic typing and compiled vs. interpreted.

Compiled means you write your code, then run a compiler to turn it into an executable (an EXE file on Windows), then run that executable. Interpreted languages often let you run each line by itself, which makes experimentation easier, but they also tend to be much slower. For a small tool that doesn't do anything complex, though, that doesn't make a difference, and interpreted languages can be easier to write.
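
For instance, here is what "running each line by itself" looks like in an interpreted language such as Python (the `>>>` is its interactive prompt; the numbers are just made-up test values):

>>> hours = 5
>>> rate = 8
>>> hours * rate
40
>>> hours * rate * 52
2080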

Interpreted languages tend to be dynamically typed, while compiled languages tend to be statically typed. When you program, you have variables – boxes that you can put data into. In a statically typed language, you have to write on the box what kind of data goes inside, and you can only put that type inside. In a dynamically typed language, you just do whatever.

That means you may have to write

string a = 'abcd'
int b = 123

in one language (string means text, int means number), and in another, it’d just be

a = 'abcd'
b = 123

Much more convenient! But what if you do `a + b`? A statically typed language might decide "that doesn't work, you can't add text and a number, silly!" and if it's a compiled language, it will tell you when you try to compile (build) your program. A dynamically typed language may start running the program and only crash when it hits the `a + b` part.

As a result, statically typed tends to be better for large complicated programs, because such mistakes are easier to catch. For example, if you do

a = "1234"
b = 2

and try to add them, Python will crash and say it can't do that. JavaScript will helpfully decide "ah, the second thing isn't text, let me convert it" and give you "12342" (it'll join the two pieces of text). Which is convenient if, for example, you want to do

alert("Your number is: " + b)

but very bad if you actually meant that "1234" to be a number rather than text.
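
To make that concrete, here is the same situation as runnable Python (a small sketch of the behaviour described above):

a = "1234"
b = 2
# a + b             # TypeError: can only concatenate str (not "int") to str
print(a + str(b))   # prints 12342 -- joining text, spelled out explicitly
print(int(a) + b)   # prints 1236  -- arithmetic, also spelled out explicitly

Being forced to write the conversion yourself is mildly annoying, but it also means the program says exactly which of the two meanings you intended.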

Go will tell you “you can’t do that” as you try to build your program (not wait until you run it) so you can fix it early.

C… C is special. It stores the text "1234" by keeping the address of the memory where the characters live. And an address is just a number, so you can add 2 to it… which means the result points at "34" (a good compiler should warn you about it, but you may miss the warning among the many other warnings programs tend to generate).

If the number you were adding was 5 instead, C would conveniently open a gate to hell for you. (The new address would point to a random piece of memory, and then random things might happen – and if someone else gets to choose both values, they may be able to use that to trick your program into downloading and running a virus – when you read “buffer overflow vulnerability”, that’s basically what happened!)

In C, you have to be a lot more specific about what you want done – it’ll take you a lot of time to write something that would be two lines of code in Python – but it will run much faster and what the computer does internally will be much more under your control.
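
To give a sense of the difference in effort (a toy sketch, assuming a file named words.txt exists in the current folder), "read a file and print its lines in sorted order" really is about two lines of Python, while the C version would involve opening the file, managing memory for the lines, and sorting them yourself:

lines = open("words.txt").readlines()   # assumes words.txt exists
print("".join(sorted(lines)))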

And regarding the adding-numbers-to-text thing… PHP will try to guess: `"12" + 3` is `15`, because the string gets converted to a number (PHP uses a separate `.` operator for joining text, so `+` always means arithmetic). But the guessing also sucks: in older PHP versions, the comparison `1 == "1abcd"` came out as true, because the string gets converted to a number first, and a 1 followed by garbage becomes a 1…

Anonymous 0 Comments

Lex Fridman had a great conversation with Guido van Rossum (the creator and longtime BDFL of Python) recently, where they talked about a programming language's impact on its usage and its users.

It is a long interview but for folks interested in languages, it is a great conversation.

[They get into it here](https://youtu.be/-DVyjdw4t9I?t=361)

Anonymous 0 Comments

Lex Fridman had a great conversation with Guido van Rossum recently (BDFL: Python) where they talked about the concept of a programming language’s impact on usage and users.

It is a long interview but for folks interested in languages, it is a great conversation.

[They get into it here](https://youtu.be/-DVyjdw4t9I?t=361)

Anonymous 0 Comments

There's a sliding scale from "easy to code, more intensive to run" to "harder to code, easier to run."

Certain programming languages make coding easier, but some processing power has to be spent turning the code into the machine language the processor actually uses, so they aren't suited to certain tasks.

Others are more tedious to code in, but run faster due to being "closer" to what a processor uses.

That’s also why you’ll see things like video games using multiple programming languages at the same time; certain languages are faster, others are easier to code in.

Anonymous 0 Comments

There’s a sliding scale of “Easy to code, more intensive to run,” to “Harder to code, easier to run.”

Certain programming languages makes coding in them easier, but some of the processing power needs to be used to turn the code into machine language the processor can use, so they aren’t suited for certain tasks.

Others are more tedious to code in, but runs faster due to being “closer” to what a processor uses.

That’s also why you’ll see things like video games using multiple programming languages at the same time; certain languages are faster, others are easier to code in.