Re: IA vs. AI was: longevity vs singularity

paul@i2.to
26 Jul 1999 12:36:09 -0700

On Sun, 25 July 1999, "Eliezer S. Yudkowsky" wrote:

> See, what *you* want is unrealistic
> because you want yourself to be the first one to upload, which excludes
> you from cooperation with more than a small group and limits your
> ability to rely on things like open-source projects and charitable
> foundations.

Since you seem to be advocating open source for AI, why not an open-source movement for nanotech development as well? The advantages seem to heavily outweigh the alternative of leaving it solely to competitive yet highly secretive governments or corporations. This would seem to level the playing field and minimize the risks of nanotech preceding SI. What do you think?

> A-priori chance that you, personally, can be in the first 6 people to
> upload: 1e-9.

Agree.

> Extremely optimistic chance: 1%

Agree.

> Extremely pessimistic chance that AIs are benevolent: 10%

What do you consider a likely probability? And more importantly, could you elaborate on why you think super-AIs are likely to be benevolent?

> Therefore it's 10 times better to concentrate on AI.

I suppose that until you answer the above, I remain unconvinced.

Paul Hughes