Re: Why would AI want to be friendly?

From: J. R. Molloy (jr@shasta.com)
Date: Wed Sep 27 2000 - 12:04:29 MDT


Eugene Leitl writes,

> It would work in principle, provided in the meantime no AI grown by
> evolutionary algorithms emerges. (This is unlikely, because in 20-30
> years we should have molecular circuitry, and hence enough computing
> performance to breed an AI from scratch, while augmentation will have
> made scarcely any headway in that time frame). Because of the explosive
> kinetics of the AI's self-enhancing positive-feedback process, the
> cyborg wannabes would be left in the dust just as surely as
> unaugmented people.

This seems to argue in favor of cyborg technology, if we don't want to trust AI
to be friendly.
Augmented cyborgs could monitor friendly space to detect runaway evolutionary
algorithms.
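
For anyone who wants a concrete picture of what "evolutionary algorithms" means
here, below is a toy generational loop in Python: a population of candidate
solutions is scored, the fittest survive, and mutated offspring refill the pool.
The bit-string genome, fitness function, and parameters are placeholders of my
own for illustration, not anything Eugene proposed, and certainly not a recipe
for breeding an AI.

import random

# Toy evolutionary loop: evolve bit-strings toward a target pattern.
# Genome, fitness function, and parameters are arbitrary placeholders.
TARGET = [1] * 32
POP_SIZE = 100
MUTATION_RATE = 0.02

def fitness(genome):
    # Count bits matching the target pattern.
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    point = random.randrange(len(a))
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Keep the fittest half, refill the rest with mutated offspring.
    parents = population[: POP_SIZE // 2]
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(POP_SIZE - len(parents))]
    population = parents + offspring

print(generation, fitness(population[0]))

The worry in the quoted passage is what happens when the same blind
select-and-mutate loop runs on vastly faster molecular hardware with an
open-ended fitness criterion rather than a fixed 32-bit target.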

--J. R.

Westworld remembered:
Two close friends decide to take a vacation together at Delos, a robot amusement
park. Arriving, they go to Westworld, one of the three interactive,
robot-populated environments available.
In the Old West environment, they "kill" a bad guy, get into a barroom brawl,
and even have sex with robo-hookers. Soon, however, the robots start to
malfunction.
When one of the friends is killed by a gunslinger robot, the other runs for his
life. The surviving friend ultimately destroys the gunslinger robot.
http://www.sciencefiction.com/film_t_dir/tn38.htm


