> > Assuming that the other immortals allow it (there could be a codex
> > against proliferation of transhuman tech, much like the current one
> > against the proliferation of nuclear weapons).
>
> We might not allow selling nuclear weapons to Saddam, but free spread
> of medical technology is encouraged.
Yes, but that's because medical technology is relatively harmless. If you allow powerful transhuman tech such as intelligence enhancement techniques and mature nanotech to proliferate, you can hardly control what it will be used for, and you can bet your life that it will be used for "evil" sooner or later. Rather sooner than later, btw.
> And of course, the scientific
> community is displaying its discoveries for all to see.
But posthumans could much more easily keep their advances to themselves.
> > Information is power, and by allowing millions of people to become
> > god-like too, you multiply the risk of something going wrong by
> > (approximately) the same amount. To the already fully autonomous
> > posthumans this might not seem like a very good idea; there's more
> > to lose than to gain.

Well, there's one crucial difference between humans and posthumans: the former *must* cooperate in order to survive and to get ahead, while the latter are fully autonomous, self-contained systems. A human can't damage society too much without ultimately harming himself, whereas a posthuman doesn't depend on society at all.
>
> I disagree, I would say you gain much more. An industrialized third
> world would increase utility production worldwide tremendously - both
> as direct producer and as trading partners to everybody else. Compare
> to the Marshall plan: everybody was better off afterwards despite the
> increased destructive abilities of Europe (remember that de Gaulle
> proudly boasted that his missiles could be turned in any direction,
> including towards his former benefactors).
> For your scenario to hold, the risk posed by each posthuman to each
> other posthuman must be bigger than the utility of each posthuman to
> each other posthuman.
And that's exactly what would be the case; other entities are useful because of their added computing/physical power, but if you can add practically unlimited numbers of "slave" modules to your brain/body, why bother with unpredictable, potentially dangerous "peers"?

Of course, it is unlikely that there would be just one supreme posthuman, so they'd have to compromise and declare a "pax posthumana" based on MAD, very much like the cold war, I presume. Posthumans can easily be compared to countries (or worlds), after all. New members would almost certainly have a (very) destabilizing effect, as is indeed the case with the proliferation of nuclear weapons, and they would further reduce the amount of available resources, so the top dogs would surely think twice before allowing anyone to reach their level of development.
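To put that in back-of-the-envelope terms, here's a toy calculation (every number is made up, just to show the shape of the argument): if whatever a peer could contribute can instead be produced by adding internal "slave" modules, the marginal utility of a peer drops toward zero while the risk it poses stays put, so each newcomer becomes a net loss for the incumbents.

# Toy model of the "why bother with peers?" argument (all numbers invented).
# An incumbent posthuman weighs the marginal utility of admitting one more
# peer against the marginal risk that peer poses.

def marginal_peer_utility(substitutability):
    """Utility gained from one extra peer.

    substitutability = 0.0: peers are irreplaceable (the human case).
    substitutability = 1.0: anything a peer could offer can be produced
    internally by adding "slave" modules (the posthuman case).
    """
    base_utility = 1.0                 # arbitrary units
    return base_utility * (1.0 - substitutability)

def marginal_peer_risk():
    """Expected damage from one extra, unpredictable peer (assumed constant)."""
    return 0.3                         # arbitrary units

for s in (0.0, 0.5, 0.9, 1.0):
    net = marginal_peer_utility(s) - marginal_peer_risk()
    print(f"substitutability={s:.1f}  net value of an extra peer: {net:+.2f}")

At substitutability 0.0 (the human situation) the extra peer is clearly worth having; at 1.0 (a fully self-contained posthuman) it is pure downside.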
> But network economics seems to suggest that the
> utilities increase with the number of participants, which means that
> they would become bigger than the risks as the number of posthumans
> grow.
If a posthuman is so smart, and already (practically) immortal, surely it could develop all its utilities by itself in due time? Economies are typically a construct of highly limited creatures that must specialize and cooperate to survive.
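Here's the same sort of toy sketch for the network-economics point (again, every coefficient is invented purely for illustration): under a Metcalfe-style assumption the utility keeps growing with the number of participants, but if the utility obtainable from peers saturates, as I'd expect for self-sufficient posthumans, the pairwise risk term eventually dominates.

# Two toy models of total (utility - risk) as the posthuman population grows.
# All coefficients are invented for illustration.

RISK_PER_PAIR = 0.05        # assumed risk each posthuman poses to each other one
UTILITY_PER_PAIR = 0.1      # value of each pairwise interaction, network view
UTILITY_CAP_PER_HEAD = 2.0  # ceiling on what peers can offer a self-sufficient posthuman

def network_utility(n):
    """Network-economics view: utility grows with the number of possible pairs."""
    return UTILITY_PER_PAIR * n * (n - 1)

def saturating_utility(n):
    """My view: a self-contained posthuman soon gains nothing more from extra peers."""
    return min(UTILITY_PER_PAIR * n * (n - 1), UTILITY_CAP_PER_HEAD * n)

def total_risk(n):
    return RISK_PER_PAIR * n * (n - 1)

for n in (2, 5, 10, 50):
    print(f"n={n:3d}  net, network view: {network_utility(n) - total_risk(n):7.1f}"
          f"   net, saturating view: {saturating_utility(n) - total_risk(n):7.1f}")

Under the network assumption the net only improves as participants are added; once the utility a peer can offer is capped, the risk term wins out at some population size, which is why I doubt the economies-of-cooperation argument carries over to posthumans.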