Keeping AI at bay (was: How to help create a singularity)

From: Mitchell, Jerry (3337) (Jerry.Mitchell@esavio.com)
Date: Mon Apr 30 2001 - 09:57:32 MDT


Eugene Leitl wrote:

<snip>
Rather, don't. We would all die. A real AI
could clean ruin your day, by eating the world, with you on it. So don't.
It's that simple.
<snip>

I think a bootstrapping AI could very well eat the world if you just hand it
nanotech that it can control, although I don't think that's certain. The
trick, I think, is to convince (or demand) it to upgrade us to
"super-Jupiter-brained" intelligence so we too can participate without
getting eaten ourselves. This should be pretty easy: if the AI wants us to do
things for it (give it power, memory, upgrades, etc.), it had better be
churning out the upgrade diagrams and procedures (cures for aging and cancer,
biomind-to-silicon-mind downloading, etc.) for us. Then, and only then, when
all the humans (?) are at the same level as the AI, can we talk about
nanotech and macro-engineering the galaxy.

I think an AI would even want to take this approach. I personally think that
morality is based on reason and logic, so if "we" can start deriving a
science of morality, an AI certainly should come to the conclusion that
killing intelligent beings should be avoided if possible. Besides, what's the
point of being an omnipotent superintelligence discovering the secrets of the
universe if there's no one to share it with?

Jerry Mitchell


