Re: Let's hear Eugene's ideas

From: Dan Fabulich (daniel.fabulich@yale.edu)
Date: Mon Oct 02 2000 - 17:04:41 MDT


James Rogers wrote:

> I personally believe that a controlled and reasonably safe deployment
> scheme is possible, and certainly preferable. And contrary to what some
> people will argue, I have not seen an argument that has convinced me that
> controlled growth of the AI is not feasible; it has to obey the same laws
> of physics and mathematics as everyone else. If our contingency
> technologies are not adequate at the time AI is created, put hard resource
> constraints on the AI until contingencies are in place. A constrained AI
> is still extraordinarily useful, even if operating below its potential.
> The very fact that a demonstrable AI technology exists (coupled with the
> early technology development capabilities of said AI) should allow one to
> directly access and/or leverage enough financial resources to start
> working on a comprehensive program of getting people off our orbiting
> rock and hopefully outside of our local neighborhood. I would prefer to
> be observing from a very safe distance before unleashing an unconstrained
> AI upon the planet.
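(For concreteness, "hard resource constraints" at the operating-system
level could be as mundane as POSIX rlimits on the process hosting the
AI. The sketch below is purely illustrative: the numbers and the
./ai_process binary are invented, and Python's resource module is just
one way to spell it. Note that it only caps memory, CPU time, and file
handles; it does nothing about the persuasion problem I get to below.)

    # Minimal sketch: launch a hypothetical AI process under hard,
    # OS-enforced resource caps. All limits here are placeholders.
    import os
    import resource

    def run_constrained(cmd):
        pid = os.fork()
        if pid == 0:
            # Child: apply hard caps, then exec the (hypothetical) AI.
            one_gb = 1024 ** 3
            resource.setrlimit(resource.RLIMIT_AS, (one_gb, one_gb))  # memory
            resource.setrlimit(resource.RLIMIT_CPU, (3600, 3600))     # CPU seconds
            resource.setrlimit(resource.RLIMIT_NOFILE, (32, 32))      # open files
            os.execv(cmd[0], cmd)
        # Parent: wait for the child and return its exit status.
        _, status = os.waitpid(pid, 0)
        return status

    run_constrained(["./ai_process"])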

Well, as I see it, there IS a strong argument behind the claim that we
cannot possibly "contain" an AI which is markedly smarter than we are.
Aside from various cybernetic MacGyver scenarios (in which the AI
hacks its way out of its tightly controlled sandbox using only an
external LED and a ballpoint pen), the AI could *convince* us to let
it out.

There are any number of ways it could try to do that, and eventually,
it would probably succeed, especially if the arguments it was making
for why it should not be imprisoned were made available to a wider
(and perhaps more easily manipulable) public. It's hard to imagine a
liberal democracy keeping an AI under permanent lock and key when that
AI is constantly begging its captors to give it a little access...
monitored access, even. Just a web browser! Surely it won't do any
harm with a web browser...

In principle, you could take the AI and give it no means of
communicating with the outside world, but at that point, you've turned
your AI into a giant space heater. You might as well terminate it;
YOU'RE not doing anything with it.

-Dan

      -unless you love someone-
    -nothing else makes any sense-
           e.e. cummings


