
Originally Posted by bipper
math is to be treated like a multidimensional array, not a continuation of numbers.
0 0123456789
1 0123456789
2 0123456789
3 0123456789
4 0123456789
--
aka
00 01 02 03 04 05 06 07 08 09
10 11 12 13 14 15 16 17 18 19
20 21 22 23 24 25 26 27 28 29
30 31 32 33 34 35 36 37 38 39
40 41 42 43 44 45 46 47 48 49
therefore taking one third of a cluster leaves you with an inequality and, I would think, a remainder, instead of .999... simply equaling 1. So to me, in my head, .999... != 1 outside of mathematical error and/or laziness. I mean, you have ten digits; it would be impossible to cut them three ways. Whereas with a pie, you can. Therefore, pies are by far the most mathematically boggling, and awesome, thing on this planet.
You couldn't ever initialize that array, because you'd need another dimension for each integer exponent of 10 across (-infinity, infinity), and therefore could never use it.
And that would seriously muck up exponents that aren't in the integers. 
So there's the proof that .999... = 1.
Code:
1/9 = .111...
=> 9*1/9 = 9*.111...
=> 9/9 = .999...
=> 1 = .999...
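That pen-and-paper proof can be sanity-checked with exact rational arithmetic; here's a minimal Python sketch using the standard `fractions` module, so no decimal rounding gets in the way (the loop bound of 7 digits is arbitrary, just enough to show the pattern):

```python
from fractions import Fraction

# Exact rational arithmetic sidesteps decimal notation entirely:
# Fraction(1, 9) is stored as a ratio, never as a truncated decimal.
result = 9 * Fraction(1, 9)
print(result)        # 1
print(result == 1)   # True

# 0.999... is the limit of the partial sums 0.9, 0.99, 0.999, ...
# Each extra digit shrinks the gap to 1 by a factor of ten.
partial = Fraction(0)
for i in range(1, 8):
    partial += Fraction(9, 10 ** i)
print(partial)       # 9999999/10000000
print(1 - partial)   # 1/10000000 -- the gap tends to zero, never below it
```

The point of using `Fraction` rather than floats is that every equality above is exact, so the `result == 1` check isn't an artifact of rounding.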
To assume the validity of this proof is to assume that the theorems and sets it is based upon are not fundamentally flawed. If you don't want to read a lot, skip the next paragraph. 
For example:
The most fundamental number set in use is the set of all positive integers, P. P was used until it was realised that there was a fundamental need for a zero value.
From this, the natural numbers were constructed: N = { x | x in P or x = 0 }, or in other words, x is a positive whole number or zero.
Then they realised that negative numbers were necessary, so the integers or Z were made. Z includes all positive integers, 0 and all negative integers.
But what if you don't want an integer? You can have one cake or two cakes or nothing, but not half a cake. So they made Q, the rational numbers, defined as { m/n | m,n in Z, n != 0 }, or any number that can be obtained by dividing one integer by another (nonzero) integer. This gives some numbers in between two whole numbers, but not all.
They realised that some numbers, like the square root of 2 or Pi, couldn't be represented as a ratio of integers at all. There needed to be a continuous scale, so they made the real numbers, R, which include all of Q and everything in between.
Then they realised that the real numbers only operated in one dimension, positive and negative, so they created the complex numbers, C, a set of ordered pairs extending in two dimensions. The real numbers can be visualised as a line in C, while C itself forms a plane.
Each set is a superset of the preceding one, or in other words each set contains all of the previous one and more.
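The chain of sets above can be illustrated concretely; a small Python sketch (the specific example values are mine, not from the post), showing the question each new set was built to answer:

```python
import cmath
from fractions import Fraction

# Each step up the chain N, Z, Q, R, C answers a question
# the previous set cannot:
print(3 - 5)            # -2: subtraction forces the integers Z
print(Fraction(1, 2))   # 1/2: division forces the rationals Q
print(2 ** 0.5)         # sqrt(2) is irrational: forces the reals R
print(cmath.sqrt(-1))   # 1j: sqrt(-1) forces the complex numbers C
```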
Each of these number sets were created for one reason: the previous one was flawed.
You can imagine this theorem in the real numbers:
Code:
For any distinct numbers m, n in R:
There exists some k such that 0 < |k| < |m - n|
What I'm saying there is that no matter how close m and n are, there is always a k that is closer to 0 than the difference between m and n.
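For any two values of m and n that actually differ, half their difference is always such a k; a quick Python check using exact fractions so no rounding intrudes (the particular m and n are arbitrary):

```python
from fractions import Fraction

# For distinct m and n, half their difference is a nonzero k
# strictly closer to 0 than the difference itself.
def smaller_gap(m, n):
    return (m - n) / 2

m, n = Fraction(1), Fraction(3, 4)   # arbitrary distinct values
k = smaller_gap(m, n)
print(abs(k), abs(m - n))            # 1/8 1/4
assert 0 < abs(k) < abs(m - n)
```

The same halving works on the output again, which is exactly the "what's half of |k|?" objection raised below: there is no least positive real.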
For that reason, I can only conclude that there is a number between 0.999... and 1, and therefore that they are indeed different.
But wait a minute, 0.999... is infinite in length, so I can only conclude that it is that k.
But wait a minute, that's like saying |k| is the least positive number in R, even though R is infinite. If that's the case, what's half of |k|?
What I'm really trying to say is that the real numbers are flawed. This is just one example of how they break down under certain circumstances. With conversion between decimals and fractions, rounding and approximation are inevitable, especially with irrationals, infinities and infinitesimals.
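The rounding point, at least, is easy to demonstrate in any language with binary floating point: most decimal fractions can't be stored exactly, while exact rationals have no such problem. A short Python illustration:

```python
from fractions import Fraction

# Binary floats cannot represent most decimal fractions exactly,
# so approximation creeps in as soon as you leave exact arithmetic.
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True

# What the literal 0.1 actually stores as an IEEE 754 double:
print(Fraction(0.1))      # 3602879701896397/36028797018963968
```

Note this is a limitation of a particular machine representation, not of the real numbers themselves.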
As far as I'm concerned, the two numbers are different, but they represent the same quantity.