Re: future pets

Anders Sandberg
24 Apr 1998 17:35:02 +0200

Tony Belding writes:

> I had an idea at one time, based on what I said before about an evolutionary
> legacy. If a creature has evolved to have instincts and emotions for its own
> survival and reproduction, and if it becomes sufficiently sophisticated to
> demand rights, then it should have rights.

Sounds like a good model to me, although one likely has to interpret
"evolve" in a fairly wide sense.

> Also, if you created an artificial intelligence based on a neural net,
> which is "programmed" by a regimen of stimulus and feedback instead of
> cut-and-dried rules, then it *might* be sentient. This is one reason
> why I find neural-net research vaguely distasteful.

Interesting. I must admit I have also thought about this. I daily
create, train, use and erase neural networks in my research. Do they
have some kind of experience? Do they have a right to existence? As
for the nets I'm currently using (small fully connected
autoassociative networks with an incremental Bayesian learning rule),
I realize that their complexity is less than the chemical networks of
many bacteria, so I don't feel too bad about them. But this may become
a real problem in the future - the other graduate students here (me
included) are quite interested in creating a sentient system if
possible, and once we start to get close to that, then we are going to
need to think much more about ethics.
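For concreteness, here is a toy sketch (in modern Python/NumPy, a purely hypothetical illustration rather than my actual research code) of what such a small fully connected autoassociative network with an incremental Bayesian-style learning rule might look like: the net keeps running estimates of unit and pair activation probabilities, and the weights are the log-odds of joint versus independent activation. The class name, learning rate, and update scheme are all my invention for this example.

```python
import numpy as np

class AutoassociativeNet:
    """Toy fully connected autoassociative memory with an incremental
    Bayesian-style learning rule (illustrative sketch only)."""

    def __init__(self, n, alpha=0.05):
        self.n = n
        self.alpha = alpha                  # rate of the running estimates
        self.p_i = np.full(n, 0.5)          # estimate of P(unit i active)
        self.p_ij = np.full((n, n), 0.25)   # estimate of P(units i and j both active)

    def train(self, pattern):
        """Incrementally update the probability estimates from one binary pattern."""
        x = np.asarray(pattern, dtype=float)
        self.p_i += self.alpha * (x - self.p_i)
        self.p_ij += self.alpha * (np.outer(x, x) - self.p_ij)

    def weights(self):
        """Bayesian weights: log-odds of joint vs. independent activation."""
        w = np.log(self.p_ij / np.outer(self.p_i, self.p_i))
        np.fill_diagonal(w, 0.0)            # no self-connections
        return w

    def recall(self, cue, steps=10):
        """Iteratively complete a (possibly noisy or partial) cue pattern."""
        x = np.asarray(cue, dtype=float)
        w = self.weights()
        b = np.log(self.p_i)                # bias from the unit priors
        for _ in range(steps):
            x = (w @ x + b > 0).astype(float)
        return x
```

Trained on a couple of patterns, a net like this will complete a corrupted cue back to the nearest stored pattern - which is exactly the kind of behaviour that makes one wonder, at much larger scales, where experience might begin.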

> It could lead to
> some form of sentience, and I feel this is not a high-priority goal.
> We already know how to create sentient beings, we've got almost six
> billion of them. Do we really need more?

No, but we might need *different* kinds of entities. I think it would
be healthy if we humans weren't the only kind of intelligent entity in
society; the existence of other kinds of thinking and experiencing
beings might have profound and healthy effects on us.

> Far more useful to create
> intelligent but non-sentient servants, IMHO.

Sure, for practical work. But sentient beings are ends in themselves
in some sense.

> That was my idea. However, I've continued thinking about it, and I see that
> there could still be some problems. Depending on how radical things get, we
> could end up with entities splitting and recombining right and left in various
> ways. Under such circumstances, it could be hard to keep track of such an
> arbitrary definition of sentience, and the system could be subject to abuse.

This might not be a very strong objection; the basic idea might be
sound even if it is hard to implement in any society we can
imagine. It might be a good starting point for further examination.

> There's also the political obstacle -- I'm talking about a *legal* definition
> of sentience. A legal definition isn't useful unless you can explain it to
> enough people and convince them to accept it. I'm afraid my ideas don't
> exactly have mass appeal in their current form. :-)

Just you wait until the first AI is interviewed on CNN and starts to
quote Martin Luther King... :-)

Anders Sandberg                                      Towards Ascension!                  
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y