Andrew Lias has written:
> I've been following the debates regarding the possibilities of friendly vs.
> unfriendly AI and I have a question. It seems that we are presuming that a
> friendly AI would be friendly towards us in a manner that we would recognize
> as friendly. Indeed, what, precisely, do we mean by friendly?
Good question. I think everyone has their own idea of what friendly means. Some
theorists may speculate that the friendliest thing AI could do for us would be
to round up all the anti-AI fascists and homicidal fundies and upload them into
a video game (where they belong, after all).
> Let us (to invoke a favored cliche) suppose that an AI is evolved such that
> its understanding of being friendly towards humans is that it should try to
> ensure the survival of humanity and that it should attempt to maximize our
> happiness. What is to prevent it from deciding that the best way to
> accomplish those goals is to short circuit our manifestly self-destructive
> sense of intelligence and to re-wire our brains so that we are incapable of
> being anything but deliriously happy at all times? [1]
More likely it would explain the benefits of meditation, and leave any obstinate
egoists in the dust while installing the enlightened in paradisiacal
immortality.
> Now, I'm not suggesting that *this* is a plausible example (as noted, it's
> very much a science-fiction cliche), but I am concerned that any definition
> or set of parameters we develop for the evolution of friendly AI may include
> unforeseen consequences in the definition that we simply can't anticipate at
> our level of intelligence -- and that's supposing that the SI will still
> want to be friendly.
>
> What am I missing?
Unforeseen consequences of AI will not be unforeseen for AI cyborgs, but only
for organically unintelligent power brokers. To get to Type I, II, and III
civilizations, we'll definitely need AI. The most important job for AI (a seventh
generation expert system) will be to bypass the efforts of control freaks who
want to outlaw AI.
--J. R.
"One trend that bothers me is the glorification of stupidity, that it's
all right not to know anything."
--Carl Sagan
This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:39:29 MDT