Re: Why would AI want to be friendly?

From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Thu Sep 28 2000 - 02:14:35 MDT


J. R. Molloy writes:

> Well, excuuuuuse me. I guess I missed the "SysOp" lecture. I must have been in

That's strange, because I recall Eli giving you pointers to it. And
since you're gung-ho for friendly AIs, it should give you lots of
arguments.

> South Dakota that day. So how does that relate to AI friendliness? The robot is
> mine because I bought it. Robots are robots because they behave robotically. So

How can you own something which is significantly smarter than you? It
owns you, not you it. Robots only behave robotically if they're so
primitive they're only good for making cars, and such. Any robot which
is going to be able to protect you from anything superhumanly smart
and fast is going to be flexible, and hence indistinguishable from the
enemy. It certainly may not act predictably, because that would make
it exploitable.

> they can't own humans. That robot came into being any way it could. It's a given

I fail to see the logic. You're describing a fantasy creature,
incredibly powerful yet docile. Even djinns are not that.

> for the purpose of discussing why AI would want to be friendly. The question
> isn't how AI comes into being; it's why it would want to be friendly.
 
Unfortunately, the method which makes the AI powerful automatically
makes it less than friendly. If it's friendly, it's less than useful.

> > I'm beginning to think that you're rather good at trolling. No one
> > can't be that dense nondeliberately.
>
> Sad to learn you think that way. But I don't recall you proposing any
> constructive ideas about making AI friendly. So, to demonstrate that *you* are

Because I can't think of any. And I'm trying rather hard. Because so
much is at stake, I'd rather have lots of iron-clad reasons why AIs
will wind up being friendly rather than the other way round.

> not trolling, please let us in on your formula for insuring that AI would want
> to be friendly. If you don't think that AI would want to be friendly, that
> suggests that you believe AI should not be attempted at all. In that case, how
> do you propose to keep AI from emerging?
  
> PS: "no one can't be that dense" is a double negative.

Good thing we're not arguing formal logic here.



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:39:18 MDT