RE: future pets

Tony Belding
Fri, 24 Apr 1998 07:55:50 -0600

Hal Finney <> wrote:

HF> But won't this same technology also blur the distinction between
HF> people and the other categories?

Indeed. It's a problem.

HF> What happens when someone alters his DNA to make himself more
HF> like an animal?

I don't have any problem with that. DNA doesn't interest me much -- if all
goes as I hope, DNA will soon be mostly obsolete. (The transition from
DNA-based life to meme-based life is a basic element of transhumanist theory,
I think.)

HF> Or what happens when he incorporates computer parts into his
HF> brain, or uploads and adds parts of his mentality to AI systems?

I don't see how this matters. The human personality is an information
pattern. The question is: what distinguishes an information pattern that we
morally /care/ about from one that we don't consider to be a person?

HF> I don't see how you are going to be able to draw the line you want
HF> to draw.

I don't either. It seems like we need to find some way, though.

I had an idea at one time, based on what I said before about an evolutionary
legacy. If a creature has evolved to have instincts and emotions for its own
survival and reproduction, and if it becomes sufficiently sophisticated to
demand rights, then it should have rights.

So, most animals would not qualify, since they aren't intelligent enough to
understand "rights" and to ask for them. At the other extreme, you could have a
bush robot 100,000 times as intelligent as a human, but it's only programmed
from the ground up to obey its owner -- it has no "natural instincts" or
emotions for serving its own ends. That wouldn't be sentient either, though
it could be mightily useful to those of us who are.

If you took an animal and boosted its intelligence, then it could be a
sentient being, and thus accrue all the rights we normally recognize for
people. From a legal standpoint, this might simply be an exotic way of having
children.
Also, if you created an artificial intelligence based on a neural net, which
is "programmed" by a regimen of stimulus and feedback instead of cut-and-dried
rules, then it *might* be sentient. This is one reason why I find neural-net
research vaguely distasteful. It could lead to some form of sentience, and I
feel this is not a high-priority goal. We already know how to create sentient
beings; we've got almost six billion of them. Do we really need more? Far
more useful to create intelligent but non-sentient servants, IMHO.

That was my idea. However, I've continued thinking about it, and I see that
there could still be some problems. Depending on how radical things get, we
could end up with entities splitting and recombining right and left in various
ways. Under such circumstances, it could be hard to keep track of such an
arbitrary definition of sentience, and the system could be subject to abuse.
There's no obvious /test/ you could perform, short of digging into
their minds and seeing what makes them tick.

There's also the political obstacle -- I'm talking about a *legal* definition
of sentience. A legal definition isn't useful unless you can explain it to
enough people and convince them to accept it. I'm afraid my ideas don't
exactly have mass appeal in their current form. :-)

   Tony Belding