From: Zero Powers (zero_powers@hotmail.com)
Date: Mon Feb 18 2002 - 22:59:57 MST
>From: "Damien R. Sullivan" <phoenix@ugcs.caltech.edu>
>On Sun, Feb 17, 2002 at 08:38:22AM -0800, Zero Powers wrote:
>
> > If it is to help us solve our problems, then by definition it will be a
> > problem solving entity. Its problem solving algorithms will no doubt
> > involve a robust optimization routine. I'm no computer scientist, but it
> > seems to me that such routines would necessarily involve something to the
> > effect of "see this picture, how can it be improved?"
>
>Define improvement. Improving the 'picture' of a rocket engine isn't
>the same as improving the 'picture' of its own place in the grand scheme
>of things.
<snip>
>Here's a vague path to AI. Take a command shell, one of those DOS or
>Unix programs which literally and obviously wait for typed instructions.
>Give it larger vocabulary, and concepts of things, and a language looping
>between perception and conception so it can parse ambiguous English
>sentences. Give it intelligence and powers, so it can fulfill more
>complex orders. Make it aware of what it's doing, so when exploring a
>complex or vague or creative task it can detect ruts (trying the same
>bad attempted solution over and over) and avoid them. Make it aware
>enough to talk about what it's doing.
>
>None of this, in my mind, recently enriched by reading PhD theses of
>Hofstadter's students, suggests any pressure toward independence and
>autonomy or having 'own ends'. All the complexity is geared toward
>fulfilling whatever command it received at the prompt.
That all sounds good to me. In fact it sounds a lot like the "dumb" AI I'm
in favor of. But my impression is that many here (notably Eli) think that
won't be good enough. The opinion seems to be that in order to bring about
the Tech Singularity, you need an AI which is every bit as sentient,
intelligent and self-aware as we are (in fact, even more so).
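For what it's worth, the "rut detection" part of Damien's shell is easy
enough to picture in code. Here's a minimal sketch in Python, assuming a toy
generate-and-test loop; every name in it (solve, goal_test, candidates) is
made up for illustration, not a claim about how a real AI would be built:

    import random

    def solve(goal_test, candidates, max_tries=1000):
        """Try candidate solutions, refusing to repeat failed attempts."""
        seen = set()  # memory of attempts already made
        for _ in range(max_tries):
            attempt = random.choice(candidates)
            if attempt in seen:
                continue  # rut detected: same bad attempt as before, skip it
            seen.add(attempt)
            if goal_test(attempt):
                return attempt
        return None  # budget exhausted without a solution

    # Hypothetical usage: find a number whose square is 49.
    print(solve(lambda x: x * x == 49, list(range(100))))

The whole trick is the "seen" set: without it the loop can hammer the same
dead end forever; with it the program at least notices "I've tried this
already" and moves on. And note that this self-monitoring stays entirely in
service of the command it was given -- nothing about it pushes toward goals
of its own.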
My point (which seems to have gotten lost in the shuffle) is that once you
have a super-human AI that learns and processes information *way* faster
than we do (particularly one that is self-enhancing and hence learns at an
exponentially accelerating rate), it will be impossible, whether by
friendly "supergoals" or otherwise, to keep the AI from transcending any
limits we might hope to impose on it. Which will leave us completely at its
mercy.
My point in a nutshell: friendliness cannot be imposed on one's superior.
Genes tried it, and made a good run of it for quite a while. Increasing our
intelligence made our genes ever more successful than those of competing
species. But, as our genes found out, too much of a good thing is a bad
thing. We now pursue gene-imposed subgoals (sex, for instance) while
completely bypassing the supergoal (i.e., kids) at our whim.
I've still not heard a sound argument for how we can prevent the same thing
from happening to us and our "supergoals" once the AI is our intellectual
superior.
-Zero
"I'm a seeker too. But my dreams aren't like yours. I can't help thinking
that somewhere in the universe there has to be something better than man.
Has to be." -- George Taylor _Planet of the Apes_ (1968)