Re: Why would AI want to be friendly?

From: J. R. Molloy (jr@shasta.com)
Date: Mon Sep 25 2000 - 10:18:26 MDT


Eugene Leitl writes,

> You know, this also makes a pattern, and I don't like it. Yakking
> about the Big Cool Thing(tm) instead of doing it is sure more fun
> (been guilty of it myself), but this doesn't get the BCT done. It
> kinda makes one wonder whether there is anything behind that BCT thing
at all, instead of Potemkin villages all the way down.

The question posed by this thread is whether the BCT *should* be done at all.
If the BCT (AI/SI) could become a menace and/or destroy all human life, the
implied supposition is that it might be better to devise methods of containing
it (or forestalling it altogether) before it is actually constructed or evolved
through genetic programming.

--J. R.
http://www.wkap.nl/sample.pdf?253707

"Big ideas are so hard to recognize, so fragile, so easy to kill.
Don't forget that, all of you who don't have them."
       --John Elliot, Jr.
[Amara Graps Collection]


