Why do we use Greek letters, like alpha (α), beta (β), and gamma (γ), in mathematical notation?


6 Answers

Anonymous 0 Comments

Greek letters are used in math because they help distinguish between different types of variables and constants. They offer a wider range of symbols than the Roman alphabet, which is already heavily used for other purposes (like standard variables). Plus, it’s a tradition that dates back to ancient times, when Greek mathematicians made significant contributions to the field.

Anonymous 0 Comments

Because sometimes you want a variable to stand out more, or you’ve used up the Latin alphabet. For example, in electromagnetics, electric permittivity is written as a lowercase epsilon (ε) because e is almost always taken to be Euler’s number.

It doesn’t matter; variable symbols can be anything you want, as long as you can get the point across. I once had a professor use Wingdings as variables just to prove that point.

Anonymous 0 Comments

When we use variables, there are only so many letters we can use. As such, when you have a very specific or special variable, you may want to give it a symbol that makes it stand out more than an arbitrary one.

Anonymous 0 Comments

Tradition mostly. 26 letters in the Roman alphabet may seem like a lot – most specific mathematical applications don’t really need 26 distinct variables, especially if you allow for subscripts, superscripts, or other adornments like hat, bar, and tilde – but each bit of math exists in a far broader context. For example, you should always think twice before using ‘f’ for anything other than a function, or before using ‘e’ in any context where it could be confused with Euler’s number. These constraints get even tighter in specific fields. For example, the following will likely make any statisticians in the audience cry blood (you have been warned):
X = sY + a

and just to cleanse the palate:

Y = βX + ε

Greek letters give a little more flexibility, so that “reserved” notation doesn’t eat up quite so much of the space of possibilities. In fact, it’s quite nice for that reserved notation to be mostly Greek letters, because it frees up Roman letters for easy mnemonic connections to application-specific parameters. (Physicists, look away.) If you want a variable for ‘cost,’ it’s nice to have ‘c’ available.
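
As an aside, the convention even carries into code: Python 3 allows Greek letters in identifiers, so you can mirror the statistics notation directly. A minimal sketch – the data and names here are purely illustrative, not from any particular library:

import numpy as np

# The conventional regression notation Y = βX + ε, written as code.
rng = np.random.default_rng(0)
X = rng.normal(size=100)               # predictor
ε = rng.normal(scale=0.1, size=100)    # noise term
β = 2.5                                # true slope
Y = β * X + ε

# Recover an estimate of the slope by ordinary least squares (no intercept):
β_hat = (X @ Y) / (X @ X)
print(β_hat)   # prints a value close to 2.5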

Anonymous 0 Comments

There’s a long tradition of reserving certain letters to mean certain things. For example, outside of math we reserve F and C to mean Fahrenheit and Celsius. In math, ‘ε’ is usually reserved for an arbitrarily small positive quantity, and ‘ζ’ for the Riemann zeta function (god I hated writing that weird-ass Z).
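
To make those conventions concrete, here are the usual textbook meanings written out in LaTeX (these are the standard definitions, not anything specific to this thread): ε as the arbitrarily small tolerance in the definition of a limit, and ζ(s) as the Riemann zeta function:

\lim_{x \to a} f(x) = L
  \iff
\forall \varepsilon > 0 \; \exists \delta > 0 :\;
0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon

\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}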

They don’t have to. At the start of your assignment, exam, or paper, you can say “actually I will use G to mean what ζ usually means” but it’s just extra work to break a standard for no real gain.

It’s like someone saying, “I’m going to use D to represent Celsius temperature instead.” Why bother?

Anonymous 0 Comments

We use different alphabets, and characters written in different styles, to avoid confusion about what a letter means in a given context.

Since the whole tradition got started when Latin and Greek were still a standard part of a proper education in Europe, these were characters everyone knew and recognized, and that could be printed with existing type.

Note that in math and physics we rarely use Greek letters that look too much like Roman letters; omicron (ο), for example, is almost never used because it is indistinguishable from a Latin ‘o’.

We also use things like blackboard bold letters – ℚ, ℝ, and ℤ – to get more sets of characters beyond just Roman and Greek in upper and lower case. In rare cases we borrow from other alphabets and abjads entirely, like ℵ, the Hebrew letter aleph, which is used to distinguish different sizes of infinity.
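
For anyone typesetting these: in LaTeX, the blackboard-bold sets come from \mathbb (which needs the amssymb package), while \aleph is built in. A quick sketch:

% requires \usepackage{amssymb} for \mathbb
$\mathbb{Q}$, $\mathbb{R}$, $\mathbb{Z}$   % rationals, reals, integers
$\aleph_0$                                 % the smallest infinite cardinal (countable infinity)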