Re: Why would AI want to be friendly?

From: Franklin Wayne Poley (culturex@vcn.bc.ca)
Date: Mon Sep 25 2000 - 19:52:01 MDT


On Mon, 25 Sep 2000, J. R. Molloy wrote:

> "Franklin Wayne Poley" requests,
>
> >So just spell
> > out in terms we can all understand how the machines will surpass humans
> > and how they will remain under control.
>
> We're discussing Artificial Intelligence, not Psychic Intelligence.
> So, we can't know what "terms we can all understand."

I'm not talking psychic science... or rocket science. I just mean that you
can tell a layman what a crane, a backhoe, or a nuclear power plant does.
So tell people what your "AI machines" will do. You can tell people what a
calculator will achieve as a substitute for human intelligence, and you can
do the same with the rest of AI.

> Nonetheless, we can attempt a discourse in "Artificial Intelligence Simplified"
> which excludes such concepts as memetics, algorithms, cybernetics, control,
> network synthesis, genetic programming, etc.
>
> Try to see it this way:
> Humans surpass microbes. Yet microbes continue to control humans via inherited
> characteristics embedded in their genes. Can't see that? Okay, then think of it
> as humans surpassing inanimate objects while inanimate objects (chemicals
> ingested, the forces of nature: gravity, heat, light, etc.) remain in control of
> human behavior.

I mean control in the same practical sense we mean it when we talk about
keeping a nuclear power plant or a train under control. That is the kind of
assurance the general public needs if it is going to back and fund these AI
projects.
FWP
This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:39:03 MDT