Re: Sentience

Tony Belding
Tue, 05 May 1998 09:40:52 -0600

On 28-Apr-98, Anders Sandberg wrote:

>Yes and no. They would not be true aliens, but they wouldn't be yet
>another form of humans (unless designed that way). Even a small
>difference in basic motivations would make them quite alien, and if
>they also have other differences (such as new forms of perception or
>no bodies) I think they would diverge quite quickly - while likely
>enriching humanity with a different point of view.

I think you're missing something, though. The human race has a whole history
stretching back thousands of years: language, traditions, symbolism,
technology, art and literature, philosophy, myths and legends, great
achievements and adventures, etc. This whole tremendous, interwoven body of
knowledge defines our cultural identity. A "small change in basic
motivations" can't provide an alternative to all that.

>> But, it now seems unlikely that our galaxy is teeming with aliens
>> just waiting for us to come and meet them. We need to create our own
>> aliens.

>Or become them.

Not easily. See above.

>> I would like to find an Earth-like planet somewhere, or maybe terraform a
>> planet, and create a race of beings to inhabit it.

>An interesting and somewhat cruel experiment. [ARISTOI SPOILER ALERT!]
>This is what the villains in Walter Jon Williams's _Aristoi_ do; one
>of the more interesting questions in the book is the ethics of doing
>this - is it needless cruelty or giving life?

Gee, I guess now I'll have to read the book! I've had it sitting on my
bookshelf for ages.

Let's frame the question differently... Imagine you've *discovered* a
sentient race living in primitive conditions. Are you morally obligated to
give them advanced technology? Or should you be required to leave them alone
so they can develop their own identity? (RE: Prime Directive)

It's not an easy question. I've long felt the Prime Directive, as presented
in Star Trek, is a bunch of hogwash. On the other hand, it's easy to imagine
how a less advanced culture could be *obliterated* by wide open contact with a
highly advanced one. All the achievements and richness they could have
developed on their own are forfeit.

>> What happens when a highly sophisticated, non-sentient AI can pass
>> the Turing Test? How do you convince most people that a machine is
>> non-sentient when it can so convincingly /pretend/ to be?

>IMHO, if it quacks like a duck and walks like a duck, it is a duck or
>at least a good approximation. :-)

As I told someone else, the most obvious difference is that you can tell your
non-sentient AI to *stop* pretending to have human-like feelings, and it will
do so.

   Tony Belding