From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Fri Apr 18 2003 - 09:46:51 MDT
On Fri, 18 Apr 2003, Rafal Smigrodzki wrote:
> ### Anthropomorphism. The intelligence of a thermostat does not need to
> correlate with its friendliness.
Oh my god -- now I've got to go upstairs and take a large hammer
and destroy my not-so-intelligent (15 year old) thermostat that
can only change the temperature twice a day and cannot detect
whether or not I am home (and therefore need heat), a clearly
unfriendly (or at least costly) behavior...
:-?
> > Admittedly these aren't very strong arguments, but then I fail to see
> > any better arguments that say we should fear an unfriendly AI
> > erupting from our computers.
>
> ### I do.
So do I. As recent work coming out of the Foresight Institute indicates
(perhaps Christine Peterson's testimony before the House Science Committee),
a *lot* of work will need to be done on how one gets the
greater intelligences to co-exist with the lesser intelligences.
This relates to the very long discussions we have had over the last
several years on whether or not it is OK to terminate or suspend
the "run time" of your copies once uploaded.
I suspect there isn't much distinction between an unfriendly AI
and an uploaded copy that knows it is going to be "terminated".
Robert
This archive was generated by hypermail 2.1.5 : Fri Apr 18 2003 - 09:54:15 MDT