"D.den Otter" <firstname.lastname@example.org> wrote:
> Better ask some "neutral" third party (he's rather biased, you
> know, and of course so am I).
As a relatively neutral (if somewhat opinionated) third party, I'd have to say that this debate misses the point.
If Eli's seed AI scenario is possible at all, no choice we make can possibly prevent it from happening. The first SI will emerge decades before human uploading is possible, and the only real question is what it will evolve from. You only need around a TeraFLOPS for human-equivalent hardware, and we've built that already (the first PetaFLOPS machine is expected to be operational around 2006-2008).
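For what it's worth, that 2006-2008 date is consistent with a simple extrapolation. As a rough sketch (the 1996 start date for TeraFLOPS-class hardware and the roughly one-year doubling time for supercomputer peak performance are my assumptions, not figures from the post):

```python
# Back-of-envelope: when does peak supercomputer performance reach a PetaFLOPS?
# Assumptions (mine, not from the post): ~1 TeraFLOPS reached in 1996,
# peak performance doubling roughly every year.
import math

tera, peta = 1e12, 1e15
start_year = 1996
doubling_time_years = 1.0

doublings = math.log2(peta / tera)          # ~10 doublings needed
eta = start_year + doublings * doubling_time_years
print(round(eta))                           # ≈ 2006
```

A slower doubling time pushes the date out a couple of years, which is why the post's 2006-2008 window is plausible either way.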
If, OTOH, exponential self-improvement is not possible unless you are already well beyond human intelligence levels, neither the seed AI nor an upload is going to be an instant demigod. Instead we'll be stuck with several decades of gradual, society-wide augmentation and (probably) the eventual ascension of a significant fraction of the population. The major risk here is the instability of a society with immature nanotech, and the best insurance policy is to look for ways to reduce the risk of genocidal nanowars.
Billy Brown, MCSE+I