Re: Anticipatory backfire

From: Ken Clements (Ken@Innovation-On-Demand.com)
Date: Thu Nov 08 2001 - 10:51:37 MST


Mitchell Porter wrote:

> The AI example: this time one wants to be able to defend against
> the full range of possible hostile minds. In this case, making a
> simulation is making the thing itself, so if you must do so
> (rather than relying on theory to tell you, a priori, about a
> particular possible mind), it's important that it's trapped high
> in a tower of nested virtual worlds, rather than running at
> the physical 'ground level'. But as above, once the code for such
> an entity exists, it can in principle be implemented at ground
> level, which would give it freedom to act in the real world.

I believe the AI case is worse. If the AI is sufficiently smarter than you
are, the containment mentioned above is hopeless. The AI will simply give you
some very compelling reason to open the door, one it knows will work on you
because it has run a sufficiently close simulation of you. Cult leaders do this
all the time with promises of eternal life, endless delights in an afterlife,
etc. A superintelligent AI is going to make the most charismatic cult leader
look like a carnival barker by comparison.

The other thing is the "hostile mind" issue. It depends so much on viewpoint.
My cats always put me in this category when I get out the carriers to take
them to the vet. I cannot convince them otherwise. What my dentist did to me
last month sure seemed hostile to me at the time, but I had to go on some
level of trust that what she was doing was actually good. There is no way for
us to know whether an action taken by a superintelligence is good or bad until
we see the result (which is problematic in general, anyway, because the final
result of any action is not knowable in finite time).

-Ken


