A bit of over-simplified history:
The earliest computers were built from thousands of relays, and programming them literally meant rewiring the circuits for each “algorithm”. These machines would malfunction when actual insects got into the circuits, which is the popular origin of the term “bug” for a software glitch.
By the time we got to vacuum-tube computers, we had punched-card input, where a hole represented a closed switch (a “1”) and no hole represented an open switch (a “0”).
These 1s and 0s formed the earliest computer language, which came to be known as “machine language”.
Obviously, humans can’t easily read or understand machine language. So as computers advanced, we invented short mnemonics for the common commands, e.g. ADD, SUB (subtract), MOV (move), etc. This is known as assembly language, and it requires an assembler to translate it into the machine language the computer can actually run.
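To make that concrete, here’s a toy sketch of what an assembler does. The mnemonics and opcode numbers below are made up purely for illustration and don’t match any real CPU; a real assembler does the same kind of lookup-and-translate job, just against a real instruction set.

```python
# Toy "assembler": maps made-up mnemonics to made-up opcodes.
# These opcodes are invented for illustration only, not a real CPU's.

OPCODES = {"ADD": 0b0001, "SUB": 0b0010, "MOV": 0b0011}

def assemble(program):
    """Translate lines like 'ADD 5' into (opcode, operand) number pairs."""
    machine_code = []
    for line in program:
        mnemonic, operand = line.split()
        machine_code.append((OPCODES[mnemonic], int(operand)))
    return machine_code

print(assemble(["MOV 7", "ADD 5", "SUB 2"]))
# [(3, 7), (1, 5), (2, 2)]  <- the numbers (1s and 0s) the machine would run
```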
Once we got to this point, it became obvious that computers needed to understand humans, rather than humans learning to “write” machine code.
This led to a whole host of high-level languages that humans could easily read and write. Different languages were developed to cater to different professional needs: Fortran became the language of choice for physicists and mathematicians, COBOL for banking, C for software developers, and so on. There are more specialized languages you will barely hear about, like ANSYS APDL for stress-analysis professionals or the MATLAB language for people who work with matrices.
Each of these languages has commands and tools that make it easy for professionals in that field to do their job.
But in the end, all of these languages get translated into machine language; each one is simply designed to be most efficient at a particular kind of task.
It’s like a hammer drill vs. an impact wrench vs. a drill driver: they all look similar and can sometimes be used interchangeably, but each one is designed for a different professional’s job.