It actually happens with all odd numbers that don't happen to divide 10 (which is where the base-10 thing comes up). But I think that it's a matter of the origin of the 0.9999... I don't think that 3/3 is ever actually 0.9999..., but rather that it's just a "graphical glitch" of base-10 math. It doesn't happen in base 12 with 1/3, but 1/7 still does.
I do accept that we can just presume 0.999... to be 1 due to how common 3*(1/3) is. But I do think it throws a wrench into other parts of math if we assume it's universally true. Just like in programming languages, primarily in float math, where these types of issues crop up a lot: we don't just assume that the 3.999999... is accurate, but rather that it intended 4 from the get-go, primarily because of the limits of the space we put the number in. I have no reason to believe that this isn't the case for our base-10 numbering system either.
Limits don't disprove this at all. In order to prove 0.999... = 1 you first need to define what 0.999... even means. You typically define it as an infinite geometric series with terms 9/10, 9/100, 9/1000 and so on (that is, as the infinite sum 9/10 + 9/100 + 9/1000 + ...). By definition this is the limit of a sequence of partial sums; each partial sum is a finite geometric sum, which you would typically rewrite in a convenient closed form using properties of geometric sums and then take the limit (see the link).
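To make the partial sums concrete, here is a quick check in exact rational arithmetic (a sketch; `partial_sum` is just an illustrative name, not standard notation):

```python
from fractions import Fraction

# Partial sums of the series 9/10 + 9/100 + 9/1000 + ...
# Fraction keeps the arithmetic exact, so no floating-point rounding intrudes.
def partial_sum(n):
    """Sum of the first n terms 9/10**k, k = 1..n."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in (1, 2, 3, 10):
    s = partial_sum(n)
    print(n, s, 1 - s)  # the gap to 1 is exactly 1/10**n, shrinking toward 0
```

Every partial sum falls short of 1 by exactly 1/10^n, so the limit of the sequence, which is the number the notation 0.999... names, is 1.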
The thing is that it follows from our definitions that 0.999... IS 1 (try and take the limit I mentioned), they are the same numbers. Not just really close, they are the same number.
What you're saying here isn't actually true because -0.999... and -1 are the same number. -0.9, -0.99, -0.999 and so on are not holes, but -0.999... is a hole, because it is the number -1.
You see the distinction here? The notations -0.9, -0.99, -0.999 and so on are all defined in terms of finite sums. For example, -0.999 is defined in terms of the finite decimal expansion -(9/10 + 9/100 + 9/1000). But -0.999... is defined in terms of an infinite series.
The same sort of reasoning applies to your other decimal examples.
You take limits of functions. The first limit is the limit of a function f that, according to the diagram of the problem, approaches 1 as x goes to 1. But the second limit is the limit of a constant function that always maps elements of its domain to the value 2 (which is f(1)). You can show using the epsilon delta definition of the limit that such a limit will be equal to 2.
The notation here might be a little misleading, but the intuition for it is not so bad. Imagine the graph of your constant function 2: it's a horizontal line at y = 2.
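For a constant function the epsilon-delta argument is about as short as these proofs get; a sketch using the standard definition:

```latex
\text{Let } f(x) = 2 \text{ for all } x, \text{ and fix any } \varepsilon > 0.
\text{ Choose } \delta = 1 \text{ (any } \delta > 0 \text{ works).} \\
\text{If } 0 < |x - 1| < \delta, \text{ then } |f(x) - 2| = |2 - 2| = 0 < \varepsilon. \\
\text{Hence } \lim_{x \to 1} f(x) = 2.
```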
But I think that it's a matter of the origin of the 0.9999...
This is correct. It follows directly from the definition of the notation 0.999... that 0.999...=1.
I don't think that 3/3 is ever actually 0.9999..., but rather that it's just a "graphical glitch" of base-10 math. It doesn't happen in base 12 with 1/3, but 1/7 still does.
Then you are wrong. 3/3 is 1, 0.999... is 1; they are all the same number. Just because the notation can be confusing doesn't make it untrue. Once you learn the actual definitions of these notations and some basic facts about sums/series and limits, you can prove for yourself that what I'm saying is the case.
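The base-12 point is easy to check mechanically. A sketch (the `digits` helper is mine, not a standard function): it peels off digits of a fraction in any base using exact arithmetic.

```python
from fractions import Fraction

def digits(frac, base, n):
    """First n digits after the point of frac (with 0 <= frac < 1) in the given base."""
    out = []
    for _ in range(n):
        frac *= base
        d = int(frac)   # integer part is the next digit
        out.append(d)
        frac -= d       # keep the fractional remainder
    return out

print(digits(Fraction(1, 3), 10, 6))  # [3, 3, 3, 3, 3, 3] -- repeats forever in base 10
print(digits(Fraction(1, 3), 12, 6))  # [4, 0, 0, 0, 0, 0] -- terminates: 0.4 in base 12
print(digits(Fraction(1, 7), 12, 6))  # repeats in base 12 too (digit 10 would be written A)
```

Whether an expansion terminates depends on whether the denominator's prime factors divide the base: 3 divides 12, but 7 divides neither 10 nor 12. None of this changes which number the expansion names.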
I do accept that we can just presume 0.999... to be 1 due to how common 3*(1/3) is.
It's not an assumption or presumption. It is typically proved in calculus or real analysis.
But I do think it throws a wrench into other parts of math if we assume it's universally true. Just like in programming languages, primarily in float math, where these types of issues crop up a lot: we don't just assume that the 3.999999... is accurate, but rather that it intended 4 from the get-go, primarily because of the limits of the space we put the number in.
It definitely doesn't throw a wrench into things in other parts of math (at least not in the sense of there being weird murky contradictions hiding in math due to something like this). IEEE floats just aren't comparable. With IEEE floats you always have some finite collection of bits representing some number. The arrangement is similar to how we do scientific notation, but with a few weird quirks (like the offset in the exponent, for example) that make it kinda different. But there are only finitely many different numbers these kinds of standards can represent, because there are only finitely many bit patterns for your finite number of bits. The base-10 representation of a number has no such restriction on the number of digits you can use. When you write 0.999..., there aren't just a lot (but finitely many) of 9's after the decimal point; there are infinitely many 9's after the decimal point.
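The finiteness is easy to see directly; a sketch (assuming CPython, where a `float` is an IEEE 754 double, and `math.nextafter`, which needs Python 3.9+):

```python
import math
import struct

# A double is exactly 64 bits, so only finitely many values are representable.
bits = struct.unpack('<Q', struct.pack('<d', 0.1))[0]
print(f'{bits:064b}')  # the exact 64-bit pattern stored for "0.1"

# The decimal 0.1 is not among those values; the stored double is a nearby
# binary fraction, visible once you print enough digits:
print(f'{0.1:.20f}')

# nextafter returns the adjacent representable double: there is nothing between.
print(math.nextafter(1.0, 2.0))  # the very next double above 1.0
```

A decimal expansion has no such bit budget: 0.999... has one 9 for every positive integer position.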
In a programming context, once you start using floating-point math you should avoid direct equality entirely and instead work within some error bound specified by the accuracy your problem needs. You might be able to get away with equating 4.000001 and 4 in some contexts, but in other contexts that extra 0.000001 might be significant. Ignoring these kinds of distinctions has historically been the cause of many weird and subtle bugs.
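That advice maps directly onto the standard library; a small sketch with `math.isclose` (the tolerances here are arbitrary examples, not recommendations):

```python
import math

a = 0.1 + 0.2  # stored as a double slightly above 0.3
b = 0.3

print(a == b)                            # False: direct equality on floats is fragile
print(math.isclose(a, b, rel_tol=1e-9))  # True: compare within a tolerance instead

# The tolerance is a modeling decision driven by the accuracy your problem needs:
print(math.isclose(4.000001, 4.0, abs_tol=1e-5))  # True under a loose bound
print(math.isclose(4.000001, 4.0, abs_tol=1e-7))  # False under a tight one
```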
I have no reason to believe that this isn't the case for our base10 numbering systems either.
The issue here is that you don't understand functions, limits, base expansions of numbers or what the definition of notation like 0.999... actually is.
The thing is that it follows from our definitions that 0.999… IS 1 (try and take the limit I mentioned), they are the same numbers. Not just really close, they are the same number.
You cannot use the outcome of the proof you're validating as evidence in that same proof. Prove that the limits work without presuming that 0.999... = 1. Evaluate a limit where there's a hole in the function at 1... then prove that 0.999... also meets that hole, without the initial claim that 0.999... = 1, since that's the claim we're testing.
The issue here is that you don’t understand functions, limits, base expansions of numbers or what the definition of notation like 0.999… actually is.
So you tell me I don't understand things... when you've not provided proof of anything other than just espousing that 0.999... = 1.
And I know how to work with floats in a programming context. It's the programming context that tells me that there could be a case where the BASE10 notation we use simply doesn't "fit" the proper evaluation of what 1/3 is. Since, you know... base 12 does. These are things I've actually already discussed... and have covered. But you're cherry-picking, trying to make me look dumb, when instead you've just added nothing to the conversation.
Genuinely curious about this one, what function are you assuming when using the limit approach to evaluate? I presume it is f(x) = x, but then it would not have a discontinuity at 1. Or is the point that whether 0.999... = 1 or not depends on the implicit function in the context (in which case, limits wouldn't disprove the argument but rather add nuance to it)?