What underlying assumptions does math make?


I’m learning algebra and trigonometry and I’m finding it pretty interesting. Something that I’ve noticed is that math is basically logical arguments added to other logical arguments. I’ll read an explanatory text and I’ll be like, “Oh shit, yeah that HAS to be true”.

But if you go back far enough, what are things that math assumes to be true but can’t necessarily prove?

In: Mathematics

8 Answers

Anonymous 0 Comments

That numbers have equal value, and this doesn’t always map onto the real world. You have a pig; I have a pig. We’re equal in pigs, since 1 = 1, neither greater nor less. But one pig can be worth less: weight matters when selling livestock, and so does breeding age. Yet one pig does equal one pig, even though real pigs aren’t equal.

Anonymous 0 Comments

https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory

this is the link you want, but unless you have done a ton of set theory good luck 🙂

I’ll give it a go though. The basic idea is:
1) 1 = 1, and not any other number or anything else
2) 1 + 1 is always a number, as opposed to, say, a hat
3) 1, 2, 3… keeps going and never repeats, e.g. you don’t get 1 again

does that make sense?
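To make those three rules a little more concrete, here is a toy Python sketch (my own illustration; the real ZF axioms are formal set-theoretic statements, not Python checks):

```python
# Spot-checking the three informal rules above on ordinary integers.
# This is only an illustration; the actual axioms are far more subtle.

one = 1
assert one == 1 and one != 2                # (1) 1 is 1 and nothing else

assert isinstance(one + one, int)           # (2) adding numbers yields a number,
                                            #     never a hat

naturals = list(range(1, 100))              # (3) counting 1, 2, 3, ... never
assert len(naturals) == len(set(naturals))  #     revisits an earlier number

print("all three rules hold for the first 99 naturals")
```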

Anonymous 0 Comments

The term you are looking for is *axioms*. Every form of math is built on certain principles, like discrete numbers, or addition being commutative, and so on. The statements or propositions on which an abstractly defined mathematical structure is based are called axioms.

Anonymous 0 Comments

There are no assumptions that every field of math universally accepts as true. You can always formulate some formal system that excludes any rule you want.

Nearly all useful math follows the axioms of [first-order logic](https://en.wikipedia.org/wiki/First-order_logic), sometimes without the [law of excluded middle](https://en.wikipedia.org/wiki/Law_of_excluded_middle). These rules let you make and analyze true/false statements about objects. Rejecting these axioms is possible, but hasn’t yielded anything interesting in and of itself; the results tend to be interesting only because they are consistent while departing from the common axioms.
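As a tiny illustration (mine, not from the answer above), the law of excluded middle in ordinary two-valued logic can be checked exhaustively, because a proposition has only two possible truth values:

```python
# In classical two-valued logic, "p or not p" holds for every proposition,
# since there are only two truth values to check.
for p in (True, False):
    assert p or (not p)
print("excluded middle holds in two-valued logic")
```

Logics that reject excluded middle (like intuitionistic logic) allow more truth statuses than just these two, which is why the check above doesn’t settle the question for them.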

Anonymous 0 Comments

The [standard axioms](https://en.m.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory) are too complicated for any ELI5 answer. Not all mathematicians start from this point, but many do. You may also like to read about the [necessity](https://en.m.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems) of unprovable statements in mathematics.

Anonymous 0 Comments

I like to say that math up to a good portion of Algebra is nothing but shorthand for really long addition problems.

Addition: 10 + 20 = 30. Example: you have a pile on your desk with two checks, one for $10 and one for $20. You have $30.

Subtraction: 30 − 20 = 10; same as 30 + (−20) = 10. Example: while looking at your pile of checks, someone places a bill on top that says you owe $20. You add that “negative” amount and realize you have $10.

Multiplication: 10 × 5 = 50; same as 10 + 10 + 10 + 10 + 10, but a lot easier to write or figure out in your head.

Division: 10 ÷ 5 = 2. This is just multiplication backwards, and the first algebraic operation a student typically learns. 10 ÷ 5 = x is the same as 5x = 10, which is the same as asking “how many times do I add 5 to get 10?”

Most of algebra involves breaking a complex and difficult equation into something as simple as what I’ve listed up above.
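The “multiplication is repeated addition, division is backwards multiplication” idea above can be sketched in Python (function names are my own, for illustration):

```python
def multiply(a, times):
    """a × times, written out as repeated addition."""
    total = 0
    for _ in range(times):
        total += a
    return total

def divide(total, addend):
    """How many times must we add `addend` to reach `total`?
    This is the 'backwards multiplication' view of division.
    Assumes addend divides total exactly."""
    count = 0
    running = 0
    while running < total:
        running += addend
        count += 1
    return count

print(multiply(10, 5))  # 10 + 10 + 10 + 10 + 10 = 50
print(divide(10, 5))    # 5x = 10, so x = 2
```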

Anonymous 0 Comments

In the early 1900s, a mathematician named David Hilbert was worrying about exactly your question. He proposed a set of proofs that people should try to work out, which would establish the logical foundations of math, and then we could connect all our other math knowledge to those foundations and know we were on solid ground.

They tried, and had trouble, and started to think maybe they couldn’t do it. Then a logician named Gödel actually *proved* that they couldn’t do it. (Roughly: a consistent system powerful enough to do arithmetic can’t prove its own consistency. You’d need a stronger system, but that stronger system can’t prove its own consistency either, and so on up the ladder…)

If you want to read more about this, the proposed program was called the Hilbert program, and the demonstration that it was doomed comes from Gödel’s incompleteness theorems.

Edit: Before the Hilbert program, Giuseppe Peano made a very respectable attempt at the same thing, called Peano arithmetic. It’s neat to see how deliberately and thoroughly he builds arithmetic up, but even this has limitations. (I don’t know what they are, but the pros agree they exist.) https://en.wikipedia.org/wiki/Peano_axioms
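To get a flavour of how Peano builds numbers up from almost nothing, here is a toy Python sketch (my own encoding, not Peano’s notation): a zero, a successor operation, and addition defined by just two rules.

```python
# A toy Peano-style construction: every natural number is either ZERO
# or the successor of another natural number.
ZERO = ()

def succ(n):
    """The successor of n (i.e. n + 1), encoded by wrapping n in a tuple."""
    return (n,)

def add(a, b):
    """Addition from Peano's two rules: a + 0 = a, and a + succ(b) = succ(a + b)."""
    if b == ZERO:
        return a
    return succ(add(a, b[0]))

def to_int(n):
    """Unwrap a Peano numeral back into an ordinary Python int."""
    count = 0
    while n != ZERO:
        n = n[0]
        count += 1
    return count

two = succ(succ(ZERO))
three = succ(two)
print(to_int(add(two, three)))  # 2 + 3 = 5
```

Everything here follows mechanically from the two rules in `add`; that deliberate, bottom-up build is the spirit of Peano’s axioms.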

Anonymous 0 Comments

A lot of people have asked questions about this over the years. What you can prove, and how you can prove it, is a large field of debate in philosophy. It ends up a little paradoxical: when you come up with a notion of how people can prove things, how do you prove that notion is true without people already accepting it?

So usually what ends up happening is we try to identify the assumptions that we’ve already made by breaking up our ideas into the smallest, most general and most elegant ideas possible. Finding a simpler set of assumptions that still lets us use our logic and science is a good way of getting fame and fortune in the world of philosophy.

These individual assumptions, which we can’t prove without assuming they’re true, are called axioms. There’s no logical way to prove them, but the ones we have at the moment seem to work pretty well: maths and science keep designing bridges that don’t collapse, sending rockets to the moon, and doing other useful things that make it worth taking a few minor assumptions on faith.

Whenever you read a mathematical proof, you just have to assume that there’s a hidden piece of text at the beginning that says “assuming all our axioms are true…” because all of the logic and conclusions that are drawn throughout the proof will be based on those underlying assumptions.

The actual axioms themselves are hard to describe in plain English, because they’ve been refined down to ideas so general that most of them can only really be written in symbolic logic, with whole chapters explaining what each symbol means. You can give rough approximations, like “1 is always 1, wherever it’s used” or “numbers are a sequential method of measurement with a consistent internal relationship”, but those aren’t very accurate, and they’re not exactly clear English either.

The good thing is that even though it’s the foundation of maths, you don’t actually have to understand it in order to do maths. It’ll help answer really strange questions like “but why can’t I divide by zero” and “do irrational numbers really never end” but if you’re willing to just accept those rules and leave the arguments to other people you won’t usually need to understand any of them.