Something that you're not really taught in school is that modern mathematics is predicated on accepting some margin of error in your calculations. For most everyday applications, that margin of error is small enough to be irrelevant to you.
So, for example, the formula 1+1 has a margin of error so close to 0 that you can say, for all practical purposes, that the answer is 2 (even though there is a non-zero chance that it *might* not be). The same is true of 1/3. It's easy for you to take a pizza and divide it up into 3 "equal" parts, even though it isn't actually possible to divide something into 3 *perfectly* equal parts.
The reason is that there is a margin of error when you divide something into 3 parts. That margin of error is small enough that, for all practical purposes, the pizza has been divided into 3 equal parts, even though we know that one of the parts is microscopically larger than the others.
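If you want to see the same thing in numbers rather than pizza, here's a rough sketch (in Python, with the digit counts picked just for illustration): however many decimal digits of 1/3 you keep, three of those "equal" parts fall slightly short of a whole, and the shortfall shrinks the more digits you keep.

```python
from decimal import Decimal, getcontext

# Cut "1" into three equal-looking parts using more and more decimal
# digits, then check how far three such parts fall short of the whole.
for digits in (3, 10, 30):
    getcontext().prec = digits
    one_third = Decimal(1) / Decimal(3)      # e.g. 0.333 when keeping 3 digits
    shortfall = Decimal(1) - 3 * one_third   # the microscopic "missing" sliver
    print(digits, one_third, shortfall)
```

The shortfall never quite disappears, but by the time you keep a few dozen digits it is far too small to matter.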
The same is true of calculations that incorporate the concept of infinity. Your calculation has a margin of error. If that margin of error is large enough that any arbitrarily large number produces the same outcome, then infinity can function as a stand-in for any such number.
In other words, infinity is basically a concept that allows calculations to be made when you don't actually care what exact number you're working with.
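To make that concrete, here's a small sketch (Python, with the big numbers chosen arbitrarily) of a calculation, 1/x, where the answer stops depending on which large number you plug in. That's exactly the job "infinity" does in a statement like "1/x goes to 0 as x goes to infinity."

```python
# 1/x for a few wildly different "arbitrarily large" choices of x.
# Past some point the exact choice stops mattering: every answer is 0
# to within any practical tolerance, so "infinity" can stand in for
# any of these numbers without changing the outcome.
tolerance = 1e-5
for x in (10**6, 10**12, 10**100):
    print(x, 1 / x, abs(1 / x) < tolerance)
```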
This is also why infinity is smaller than infinity+1. That's basically just a fancy way of saying: "I don't care what the first or second numbers are, so long as the second number is 1 higher than the first."
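As a quick sanity check of that reading (again just a sketch, with arbitrary numbers): whichever number you plug in, the one that's 1 higher is bigger, and that ordering is the only thing the statement is keeping track of.

```python
# No matter which huge number stands in for "infinity", the number
# that is 1 higher is bigger; the exact values never matter.
for n in (10**6, 10**50, 10**1000):
    assert n < n + 1
print("n < n + 1 held for every n we tried")
```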