Re: Why would AI want to be friendly?

From: Franklin Wayne Poley (culturex@vcn.bc.ca)
Date: Tue Sep 05 2000 - 19:15:55 MDT


On Tue, 5 Sep 2000, Zero Powers wrote:

> >From: Barbara Lamar <shabrika@juno.com>
>
> >I myself can't see any reason for the human species to continue in
> >anything like its present form when (I should say, IF, recognizing the
> >uncertain and precarious nature of time travel [24 hours into the future
> >each day]) SI becomes reality. Is this sad? I'm a little surprised to
> >note that I don't find it particularly sad. It's more exciting than sad.
> > I'd be interested to know how others feel about the prospect of being
> >among the last members of the human species.
>
> Personally, humanity does not matter to me. I'm concerned about experience
> and intelligence: being able to ask meaningful questions and going out to
> find the answers to those questions. Call me a borg, an upload, a robot, or
> an earthworm; if I'm sentient and can ask questions and answer them, I'm a
> happy camper. Personally, I'd rather do without any physicality at all
> (it solves the speed-of-light problem).

I'd call you practical. I don't care HOW my calculator solves arithmetic
problems, only that it does so quickly, without error, and at low cost.
So the question is how many OTHER problems of real human intelligence can
be simulated by artificial humanoid intelligence well enough to meet
"human equivalency" criteria, just as the calculator does for arithmetic.
FWP

-------------------------------------------------------------------------------
Machine Psychology:
               <http://users.uniserve.com/~culturex/Machine-Psychology.htm>
-------------------------------------------------------------------------------
