Many here have given explanations of how you can prove that, but stepping back a bit, it helps to understand that the decimal expansion is just a convention we chose for naming real numbers. There’s the pure abstract concept of a real number (defined by [the axioms](https://en.wikipedia.org/wiki/Real_number#Axiomatic_approach)), and then there’s the notation we use to represent them as strings of symbols.
And an unavoidable property of decimal notation is that some real numbers have more than one decimal representation.
For example, `0.999…`, `1.0`, `1.00`, `1.000`, etc. are all decimal representations of the same mathematical object: the real number more commonly written as `1`.
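For completeness, here’s one of the standard proofs other answers have hinted at, written out as a geometric series (just a sketch of the usual argument, not the only way to see it):

```latex
0.999\ldots \;=\; \sum_{n=1}^{\infty} \frac{9}{10^{n}}
            \;=\; 9 \cdot \frac{1/10}{1 - 1/10}
            \;=\; 9 \cdot \frac{1}{9}
            \;=\; 1
```

The middle step is just the formula for summing a geometric series with ratio `1/10`, so the two notations `0.999…` and `1` pick out the exact same real number.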