Re: Why would AI want to be friendly?

From: Jason Joel Thompson (jasonjthompson@home.com)
Date: Sun Sep 24 2000 - 02:50:54 MDT


----- Original Message -----
From: "Darin Sunley" <rsunley@escape.ca>
To: <extropians@extropy.org>

> You cannot win a negotiation against something that quite rightly views
> you as a deterministic process.

Whoa, whoa, whoa, hang on a second here! That's a big word to be throwing
around with regard to the human computer! A deterministic process? We're
far enough up the sentience ladder that I think the answers on that are
still up for grabs.

In fact, you are talking about the most complex, emergent, self-organizing
adaptive system of which we are currently cognizant-- human intelligence.

>
> I believe Eliezer is alluding to the difficulty of identifying with a
> mind that is that much closer to a force_of_nature than any mere
> unaugmented human ever can or ever will be.

-We- are a force of nature. We are nature emergent. We are the result of
the self-organization of matter in the universe. We are nature looking at
itself.

This is the truth.

...And I have little doubt that higher intelligences -can- exist and they
will be mighty, and they may look upon us as bugs-- but I simply don't buy
that intelligence is a boring linear progression-- I simply don't buy that
AI will fail to look back at us and find us fascinating. We're -not- as
unto bugs-- we represent a tremendously significant turning point in the
pattern of existence. Frankly, failing to be interested in us is not a sign
of intelligence.

Hang on a second, I'm drifting dangerously. But I'm passionate about this,
damnit! :)

The point is simply that right now we are the best example of intelligence
around. In fact, we define the term. And we know for certain that true AI
will share one thing with us: intelligence. Otherwise we'd call them
something else.

We'll also share this: reality built us.

Hmm... it seems I may be coming around to Eliezer's model of thinking on a
particular level, while still strangely in disagreement: I think it is
likely that while we may not understand particular actions or thought
processes that AIs will have, we -will- be able to appreciate the reasons
for them. The supergoals. We can grok "I take these particular actions to
achieve a goal." We can grok: "I will act to maximize my survivability."
We can grok: "Take in information, make a decision based on that
information," all processes that are integral to intelligence. (Or again:
otherwise we should use another word to describe it.)

-These- commonalities are the interface by which we will connect with AI.
They will interface with us via our intelligence. I don't think they will
be totally alien to us-- on the contrary, I think we'll find them eerily
familiar, though continua of intelligence divide us. Like an old friend.
Like something that was always there, under the surface. And they will
understand us better than we understand ourselves. And, who knows, this
might make them care about us deeply. Best case scenario! (Frail, clever
little humans!)

--

::jason.joel.thompson:: ::founder::

www.wildghost.com


