>There doesn't seem to be a worthwhile way to deal with this unless every
>infinity is just plain old infinity, unbounded, and -- to try to keep it
>within our semantic framework -- equal.
I disagree... we must deal with this problem if we are to perform any
calculation that involves the concept of infinity.
Suppose [0, 1] and [0, 2] contain the same number of decimals. We
should then be able to subtract the number of decimals in [0, 1] from
the number of decimals in [0, 2]. If all infinities are equal, these
represent the same number, and the answer should be zero. But removing
every decimal of [0, 1] from [0, 2] leaves the decimals in (1, 2]
behind, so the result is instead ANOTHER infinite number. This alone
disproves the claim that all infinities are equal.
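To make the subtraction concrete, here is a minimal Python sketch. It
assumes that "subtracting the decimals in [0, 1] from those in [0, 2]"
means removing the points of the smaller interval from the larger; the
function leftover and the tuple representation are my own illustrative
inventions, and single endpoints are ignored since a lone point does
not change the picture.

    def leftover(outer, inner):
        """Remove the decimals of inner from outer, where inner is an
        initial segment of outer (as [0, 1] sits inside [0, 2]).
        Returns the interval of decimals left behind."""
        (olo, ohi), (ilo, ihi) = outer, inner
        assert olo == ilo and ihi <= ohi
        return (ihi, ohi)

    print(leftover((0, 2), (0, 1)))  # (1, 2): a whole interval remains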
An objector might make a special case for infinity, suggesting that no
addition to or subtraction from infinity changes its value... it is
always infinity. Yet sometimes we CAN subtract one infinite number
from another and get zero, as when we subtract the number of decimals
in [0, 2] from the number of decimals in [1, 3].
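One way to read this "subtracts to zero" claim is that the decimals of
the two intervals can be paired off exactly, with nothing left over in
either direction; the shift x -> x + 1 is such a pairing. The sketch
below only checks the pairing on a few sample points (it illustrates,
it does not prove), and the names shift/unshift are mine.

    def shift(x):
        """Pair each decimal x in [0, 2] with x + 1 in [1, 3]."""
        return x + 1

    def unshift(y):
        """The inverse pairing: y in [1, 3] goes back to y - 1."""
        return y - 1

    # Sample decimals chosen as exact binary fractions so the float
    # arithmetic below is exact.
    for x in [0.0, 0.25, 0.5, 1.0, 1.75, 2.0]:
        y = shift(x)
        assert 1.0 <= y <= 3.0 and unshift(y) == x  # nothing left over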
Since subtracting the number of decimals in [0, 1] from the number in
[0, 2] gives an infinite number; since subtracting the number of
decimals in [1, 3] from it likewise gives 0; and since any infinite
number is much larger than 0, we are forced to admit that the numbers
of decimals in [0, 2] and in [1, 3] are both greater than the number
of decimals in [0, 1].
Thus we need some way of determining when one infinity is larger than
another, and by how much.
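Here is one candidate, sketched in Python: measure the decimals in
[lo, hi] by the interval's length hi - lo. Length is a measure-style
notion of size rather than a literal count of decimals, so take it as
a proposal of mine, not a definition; still, it reproduces every
comparison made above.

    def size(interval):
        """A candidate 'size' for the decimals in [lo, hi]: the
        length hi - lo (an assumption of this sketch)."""
        lo, hi = interval
        return hi - lo

    print(size((0, 2)) - size((0, 1)))  # 1: [0, 2] exceeds [0, 1]
    print(size((1, 3)) - size((0, 2)))  # 0: they come out equal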