Re: Why would AI want to be friendly?

From: J. R. Molloy (jr@shasta.com)
Date: Wed Sep 27 2000 - 10:37:49 MDT


Samantha Atkins writes,

> Or you could start with humans and continuously augment them (voluntarily)
> with hardware and software that is at first external but increasingly
> integrated, and eventually internal. This seems to me the best way to keep
> humans in the loop and to end up with something human-compatible and
> reasonably likely to be friendly and caring about humanity.

Great idea! (and very friendly)

I think you've hit on the core concept of why AI would want to be friendly:
AIs are us.

--J. R.

robotic flies (Robofly):
http://robotics.eecs.berkeley.edu/~ronf/mfi.html
http://malaysia.cnet.com/computers/gadgets/robotdog/
http://slashdot.org/articles/99/11/03/0721233.shtml

See also:
Spy Fly
http://www.sfgate.com/cgi-bin/article.cgi?f=/chronicle/archive/1999/11/02/MN51881.DTL&type=printable
