Re: Why would AI want to be friendly?

From: Zero Powers (zero_powers@hotmail.com)
Date: Tue Sep 05 2000 - 17:49:25 MDT


>From: hal@finney.org

>Before it is a super-intelligence, it will be as smart as a genius human.
>
>Before it is a genius, it will be as smart as an ordinary human.
>
>Before it is a human, it will be as smart as a very dumb human being
>(albeit perhaps with good coding skills, an idiot savant of programming).
>
>And before that, it will be a smart coding aid.
>
>In those early phases, you are going to have to direct it. It could
>no more choose its own goals than Eurisko or Cyc.
>
>Even as it approaches human level, it's not going to be able to
>spontaneously decide what to do. This isn't something that will just
>emerge. It has to be designed in. The creators must decide how the
>program will allocate its resources, what will guide its decisions about
>what to work on.

Right. But once it reaches and surpasses human levels, won't it be
sentient? Won't it begin to ask existential questions like "Who am I?" "Why
am I here?" "Is it fair for these ignorant humans to tell me what I can and
cannot do?" Won't it read Rand? Will it become an Objectivist? It is not
very likely to be religious or humanist. Won't it begin to wonder what
activities are in its own best interest, as opposed to ours?

Sure, you can program in initial decision-making procedures, but once it
reaches sentience (and that *is* the goal, isn't it?) aren't all bets off?

-Zero





This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:37:15 MDT