Peter Passaro wrote:
> You'll have to excuse my naivete, I have only a cursory understanding of the
> Doomsday argument. It seems to me though that this argument can not be
> applied to living systems.
Au contraire, it applies *only* (at least directly) to living systems, or rather: intelligent systems.
> Because of their special status as self
> organizing systems they would seem to fall into another category altogether.
> It seems that most of the points made in the doomsday argument can only be
> applied to objects which do not act on themselves.
Even if the Doomsday argument is right, we can improve our odds by taking action to minimize the risks. Indeed, the DA could make such actions seem even more justified.
> The argument (or a
> variation thereof) may actually suggest the opposite conclusion - that
> humanity and life itself may reach a point where the likelihood that they
> would ever be destroyed is next to nil.
This is actually the scary part. If there is such a point in the relatively near future, then the DA suggests a large probability that we will go extinct before we reach it.
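The probability shift the DA produces can be made concrete with a toy Bayesian calculation. This is only an illustrative sketch of the standard self-sampling reasoning behind the DA, not anything from the post itself; the hypothesis sizes, priors, and the function name are made up for the example.

```python
# Toy Bayesian sketch of the Doomsday argument's core update.
# Self-Sampling Assumption: P(my birth rank = r | total humans ever = N)
# is 1/N for r <= N, and 0 otherwise.

def posterior_doom_soon(prior_soon, n_soon, n_late, rank):
    """Posterior probability of the 'doom soon' hypothesis given our birth rank."""
    like_soon = 1.0 / n_soon if rank <= n_soon else 0.0
    like_late = 1.0 / n_late if rank <= n_late else 0.0
    prior_late = 1.0 - prior_soon
    numerator = prior_soon * like_soon
    return numerator / (numerator + prior_late * like_late)

# Illustrative numbers: roughly 100 billion humans born so far, and two
# hypotheses with equal priors: 'doom soon' (200e9 humans total) versus
# 'doom late' (200e12 humans total).
p = posterior_doom_soon(0.5, 200e9, 200e12, 100e9)
print(round(p, 4))  # the observation shifts belief strongly toward 'doom soon'
```

The likelihood ratio is just the ratio of the two totals (here 1000:1), which is why finding ourselves this "early" counts so heavily against hypotheses on which vastly many observers come after us.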
> The only way I can see it actually applying to the number of humans alive as
> a finite number is if humanity is superseded by another organism of its own
That *might* be a possibility that is not ruled out by the DA; it depends on the reference class problem, which has not been solved yet.
http://www.hedweb.com/nickb
firstname.lastname@example.org
Department of Philosophy, Logic and Scientific Method
London School of Economics