"See for example Euler, who made great strides in the development of calculus without any really defined concepts of convergence, divergence or limits, but who doesn’t appear here at all."
Can anyone say more about this? I always suspected this was so, because the end of high school / start of uni is roughly that sort of time period, and I always thought there was something about convergence/limits that was missing, it seemed very ad hoc.
As I understand it our modern notion of what rigorous mathematics is didn’t exist back then. The justification for analysis was basically physical intuition and simply that it worked.
Euler and company just worked directly with the intuition of infinitesimal quantities. To be fair though, infinitesimals are the essential intuition for how calculus works anyway.
Personally I don’t really care about constructions of the real numbers or the technical details of calculus. For rigour it’s enough to know that models of the relevant theories exist: complete ordered fields, and even plain old ordered fields with infinitesimals.
In high school mathematics, you're not really given the definition of a limit. Consider the definition of the derivative
limit as h -> 0 of (f(x+h) - f(x)) / h.
As a function of h, that ratio is well-defined on (0, inf) but not on [0, inf): at h = 0 it reads 0/0. So you can't just evaluate at h = 0 and be done with it.
The intuition is 'as h gets smaller and smaller, the ratio gets closer and closer to a new function of x'. But many high-school students aren't given a clear definition of what it means for one function to be 'close' to another, or what it means for h to 'get smaller and smaller'.
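To make that intuition concrete, here's a rough numerical sketch (f(x) = x^2 and x = 3 are just arbitrary choices, not from the parent comment): the difference quotient gets closer and closer to the derivative 2x = 6 as h shrinks, even though h = 0 itself would give 0/0.

```python
# Difference quotients of f(x) = x^2 at x = 3 for shrinking h.
# Each quotient is defined only for h != 0; plugging in h = 0 would be 0/0.
def diff_quotient(f, x, h):
    return (f(x + h) - f(x)) / h

f = lambda x: x * x
for k in range(1, 6):
    h = 10.0 ** -k
    print(h, diff_quotient(f, 3.0, h))  # quotients approach 6 as h shrinks
```

Of course this only illustrates the behaviour; pinning down what "approach" means is exactly the epsilon-delta definition the comment says is missing.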
To see the confusion more clearly, try having the debate about whether 0.999... = 1 with someone who doesn't understand what a limit is.
Let n = 0.999... Then multiply both sides of the equality by 10, so that we have 10n = 9.999... Then subtract n from both sides of the resulting equality to get 9n = 9.000... Finally, divide both sides by 9 and voila, we have n = 1, which is what we wanted to show.
My gut feeling is that that proof isn't quite correct, since you haven't used the notion of a limit anywhere. There's a fundamental fact about convergence of geometric series that you need to use.
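For what it's worth, here's the geometric-series fact in concrete form (a sketch, not part of the parent's argument): the partial sums of 9/10 + 9/100 + ... are exactly 1 - 10^-k, so the gap to 1 shrinks to zero, and that's the limit statement the proof is quietly relying on.

```python
from fractions import Fraction

# Partial sums of the geometric series 9/10 + 9/100 + 9/1000 + ...
# Each partial sum s_k equals 1 - 10^-k exactly, so the gap 1 - s_k
# is 10^-k and tends to 0 as k grows.
def partial_sum(k):
    return sum(Fraction(9, 10 ** i) for i in range(1, k + 1))

for k in range(1, 5):
    print(k, partial_sum(k), 1 - partial_sum(k))
```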
I think your proof goes wrong because you haven't justified how arithmetic operations work with infinite decimals. AFAIK the only way to add non-terminating decimals is to convert them to fractions (or to sequences of fractions, as with pi, e, etc.), add the fractions, and convert them back. So if you convert 0.999... and 9.999... to fractions, you've assumed the conclusion.
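To illustrate (a toy example of my own, not from the proof above): fraction arithmetic sidesteps infinite decimals entirely, because the addition happens on exact fractions and only the result is read back as a decimal.

```python
from fractions import Fraction

# 0.333... and 0.666... represented as exact fractions. The sum is
# computed on the fractions, so no infinite-decimal arithmetic is needed.
third = Fraction(1, 3)       # 0.333...
two_thirds = Fraction(2, 3)  # 0.666...
total = third + two_thirds
print(total)  # 1
```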
To play devil's advocate, I can try to rephrase your proof without infinite decimal arithmetic as follows.
Assume
n = 0.999... = 1 - epsilon, where epsilon is 'infinitesimal' (an ill-defined version of not-quite-zero). We'd like to show that epsilon is zero.
10n = 9.999... = 10 - 10epsilon
9n = 9.999... - (1 - epsilon) = 9 - 9epsilon
9n = 8.999... + epsilon = 9 - 9epsilon
The only way to get the epsilons to cancel is to assume epsilon = 0, which is to assume the conclusion.
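One way to finish without assuming the conclusion (a sketch, assuming 0.999... denotes the limit of its partial sums): since 0.999... is at least every truncation with k nines,

```latex
0 \le \varepsilon = 1 - 0.999\ldots \le 1 - \underbrace{0.9\ldots9}_{k \text{ nines}} = 10^{-k} \quad \text{for every } k,
```

and by the Archimedean property the only nonnegative real number below every 10^-k is 0. So epsilon = 0, which is just another way of saying the reals contain no nonzero infinitesimals.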
This one is better than the other child comment, but you still need to show that the limit of a sum is the sum of the limits. Not as easy as you might think, and not usually done in high school!