I understood it to be true but struggled with it for a while. Why is it so easy to accept that the decimal .333… equals 1/3, yet so hard to rationalize that .999… equals *exactly* 3/3, or 1.000? Turns out I was focusing on precision and not truly understanding the application of infinity, like many of the comments here. Here’s what finally clicked for me:
Let’s begin with a pattern.
1 – .9 = .1
1 – .99 = .01
1 – .999 = .001
1 – .9999 = .0001
1 – .99999 = .00001
However far you take this pattern, the difference between 1 and a string of 9s is a string of 0s ending in a 1. So as we add 9s thousands of times, billions of times, infinitely many times, the difference keeps getting smaller but never reaches 0, right? You can always look with greater precision and find a difference?
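Put in symbols (just a small formalization of the pattern above, where n is the number of 9s):

$$1 - \underbrace{0.99\ldots9}_{n \text{ nines}} = 10^{-n}$$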
Wrong.
The leap with infinity (the 9s repeating *forever*) is that the 9s *never* stop, which means the 0s *never* stop and, most importantly, the final 1 *never* exists.
So 1 – .999… = .000…, which is, hopefully, more digestible. That is what needs to click. Balance the equation, and maybe it will become easy to trust that .999… = 1.
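If symbols help, the same conclusion is one standard way of taking the pattern to its limit:

$$1 - 0.999\ldots = \lim_{n\to\infty} 10^{-n} = 0 \quad\Longrightarrow\quad 0.999\ldots = 1$$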