For me, it’s like actual values are over there —>
<— and how we write them is over here
They’re two separate things, and we keep running into situations where “how we write them” doesn’t quite match the actual value. 0.999… is the same value as 1; the fact that the two look different is just an artifact of decimal notation.
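If you want to see why, here’s the usual quick algebra trick (just a sketch; the fully rigorous version treats 0.999… as the limit of a geometric series):

\begin{aligned}
x &= 0.\overline{9} && \text{call the repeating decimal } x\\
10x &= 9.\overline{9} && \text{multiplying by 10 shifts the digits one place}\\
10x - x &= 9.\overline{9} - 0.\overline{9} = 9 && \text{the repeating tails cancel}\\
9x &= 9 \;\Rightarrow\; x = 1
\end{aligned}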
While we’re at it, it’s kind of broken that the vast, vast majority of values can’t be written out exactly as a decimal. Or a fraction. That sort of notation covers only a minuscule fraction (ha) of all the numbers there are. Like, effectively 0% of all numbers. That’s why we use symbols like e and pi, because we literally can’t write those numbers out.
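The standard math behind that “0%” intuition, roughly (a sketch, assuming we mean the real numbers):

\begin{aligned}
|\mathbb{Q}| &= \aleph_0 && \text{fractions are countable: you can list them all}\\
|\mathbb{R}| &= 2^{\aleph_0} > \aleph_0 && \text{the reals are uncountable (Cantor's diagonal argument)}\\
\mu\big(\mathbb{Q}\cap[0,1]\big) &= 0 && \text{so the fraction-writable numbers take up 0\% of the number line}
\end{aligned}

Same story for finite decimals, since those are just fractions with a power of ten on the bottom.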