Many here have given explanations of how you can prove that, but stepping back a bit, you’ll want to understand that the decimal expansion is just an arbitrary convention we chose for giving names to real numbers. There’s the pure abstract concept of a real number (defined by [the axioms](https://en.wikipedia.org/wiki/Real_number#Axiomatic_approach)), and then there’s the notation we use to represent them using strings of symbols.
And an unavoidable property of decimal encoding is that the same real number can have multiple decimal representations.
For example, `0.999…`, `1.0`, `1.00`, `1.000`, etc. are all decimal representations of the same mathematical object: the real number more commonly known as `1`.
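One of the standard arguments makes the equality easy to see (a sketch, assuming that multiplying an infinite decimal by 10 just shifts its digits):

```latex
\begin{align*}
x       &= 0.999\ldots \\
10x     &= 9.999\ldots \\
10x - x &= 9.999\ldots - 0.999\ldots = 9 \\
9x      &= 9 \quad\Longrightarrow\quad x = 1
\end{align*}
```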
The meaning of 0.999… depends on our assumptions about how numbers behave. A common assumption is that numbers cannot be “infinitely close” together. Under that rule, 0.999… = 1, since we don’t have a way to represent the difference. If we do allow “infinitely close” numbers, then 0.999… can be taken to be less than 1, with the difference being an infinitesimal.
Infinitesimals are quantities that are closer to zero than any standard real number but are not zero. They do not exist in the standard real number system but can exist in other number systems such as the surreal number system and the hyperreal number system. Infinitesimals were introduced in the development of calculus, where the derivative was first conceived as a ratio of two infinitesimal quantities. However, as calculus developed further, infinitesimals were replaced by limits, which can be calculated using standard real numbers.
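As a concrete illustration of that shift from infinitesimals to limits, here is a minimal Python sketch (the function `f` and the point `x0` are just hypothetical examples) computing a derivative as a limit of ordinary difference quotients, using only finite real numbers:

```python
# Sketch: the derivative as a limit of ordinary real-number ratios,
# with no infinitesimals involved. f and x0 are illustrative choices.

def f(x: float) -> float:
    return x * x  # f(x) = x^2, so f'(3) should be 6

x0 = 3.0
for n in range(1, 8):
    h = 10.0 ** -n  # a small but finite step, never "infinitely small"
    quotient = (f(x0 + h) - f(x0)) / h
    print(f"h = {h:.0e} -> difference quotient = {quotient:.8f}")
# The quotients approach 6.0; the limit does the job that the
# "ratio of two infinitesimals" was originally meant to do.
```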
tldr: 0.999… both does and does not equal 1, depending on how you evaluate the expression. It’s a neat thought experiment, but in almost any real-world application you would place reasonable limits to avoid the complexities of infinity.
Aside from the various mathematical reasons, what’s important to understand is that a decimal representation is just that: a “representation” of the number, NOT the “true” number itself. For example, the same number 1 is also 0.FFFFFFF… in hexadecimal. In fact, there are infinitely many possible representations of every real number, with the arguable exception of 0.
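To make the hexadecimal claim concrete, here is a short sketch using Python’s exact `fractions.Fraction` (so no floating-point rounding gets in the way): each hex digit F after the point contributes 15/16^i, and the partial sums close in on 1:

```python
from fractions import Fraction

# Partial sums of 0.FFF... in base 16: digit i contributes 15/16^i.
partial = Fraction(0)
for i in range(1, 11):
    partial += Fraction(15, 16 ** i)
    print(f"{i:2d} hex digits: 1 - sum = {1 - partial}")
# The remaining gap is exactly 1/16^i, shrinking toward zero,
# so the full infinite expansion 0.FFF... (hex) equals 1.
```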
Decimal is a human invention, and like ~~all~~ most human inventions it isn’t perfect, because it doesn’t have an exact 1-to-1 relationship with the real numbers. Some real numbers have one representation in decimal; others (those that are an integer multiple of a power of 10) have two, although by convention the terminating one (without the infinite tail of 9s) is considered the “correct” one.
So what is the “true” real number itself, the unique essence of the number as opposed to its representation in decimal, binary, hexadecimal or any other base? That’s part of the beauty of mathematical ideas like numbers: we can imagine the pure concept of a number, but to write it down or say it aloud you have to choose one of the infinitely many ways of representing it.
Not sure how appropriate this is for a 5-year-old, since I’ve seen many adults who struggle with the concept; then again, that might be because no one explained it to them when they were 5 and they’ve been stuck with it ever since… so here goes.
It is important to understand that a number is _different_ from the way you write it down. 1.5, 1.5000, 1 1/2 and 3/2 are different ways of writing the same number – the same point on the number line. Once that is established, you can say: see, fundamentally, 1 and 0.(9) are just two different ways of representing the same number, and after that you use one of the many proofs available.
To clear up some misunderstanding, it is important to know that with such infinite notations we are really looking at limits; 0.999… is really the limit of the sequence 0.9, 0.99, 0.999, …,
that is: $0.999\ldots = \lim_{n \to \infty} \sum_{i=1}^{n} \frac{9}{10^i}$ ([notation](https://www.wolframalpha.com/input?i=lim_%7Bn+%5Cto+%5Cinfty%7D+%5Csum_%7Bi%3D1%7D%5En+%289%2F%2810%5Ei%29%29))
the sequence itself contains no entry that equals 1, but the limit doesn’t have to be in the sequence
with every added decimal digit, the difference from 1 shrinks by a factor of 10; this is convergence, so the limit, which is what 0.999… denotes, can only be exactly 1
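A minimal numeric sketch of that shrinking difference, using exact rational arithmetic so nothing is rounded:

```python
from fractions import Fraction

# The sequence 0.9, 0.99, 0.999, ... as exact fractions.
s = Fraction(0)
for n in range(1, 9):
    s += Fraction(9, 10 ** n)
    print(f"n={n}: s_n = {s}, 1 - s_n = {1 - s}")
# 1 - s_n is exactly 1/10^n: it shrinks by a factor of 10 per digit,
# so the limit of the sequence (what 0.999... denotes) is exactly 1.
```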
This is far, far, far simpler than it sounds.
The easy and unsatisfying answer is: “because we’ve decided that’s what infinity means.” Which sounds dumb, but it’s actually kinda deep.
Infinity doesn’t exist in the real world; it’s not an actual number. It’s just an idea. It’s the answer to a question. Or rather, infinity is the question itself.
The question is: “what happens if you never stop?” That’s infinity. Infinity is the question asking what happens when you don’t ever stop.
So, if you say `0.999…`, you’re not saying the same thing as `1`, because 1 is a number while 0.999… is an infinite series. In other words: 1 is an answer, while 0.999… is a question.
The question is: “what happens when you keep adding 9’s?” And the answer is: “you get closer and closer to 1.”
Or in more formal terms: “the infinite series 0.999… approaches 1.” And because math people like simple answers, you can write that statement simply as “0.999… = 1”. Since we know that 0.999… deals with infinity, we know that one side is the question and the other side is the answer.
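For the record, the “answer” side can also be computed in closed form with the standard geometric series formula (valid for any ratio r with |r| < 1):

```latex
0.999\ldots \;=\; \sum_{i=1}^{\infty} \frac{9}{10^i}
            \;=\; 9 \cdot \frac{1/10}{1 - 1/10}
            \;=\; 9 \cdot \frac{1}{9}
            \;=\; 1
```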