Re: PHILOSOPHY: Self Awareness and Legal Protection

Edgar W Swank (edgarswank@juno.com)
Sat Jan 25 18:48:12 1997 PST


Greetings Fellow Extropians

First, I'm glad to be able to rejoin this list, now that the
(IMO) wrong-headed policy of charging $ for list membership
has apparently been reconsidered.

Historically, it seems to me, we have granted "legal protection"
to those who could return the favor. That is, we refrain from
killing so as not to be killed. We refrain from stealing so that
we won't be stolen from. This kind of "social contract" only
makes sense between individuals who 1) are capable of harming
each other and 2) are also capable of comprehending the
"contract" to refrain from doing so in return for like
consideration.

The concept of voting originated as a count of fighting men, and an
election was an alternative to a war.

Until recently, women were not considered capable of fighting and
so were not allowed to vote. But since the invention of firearms,
women are about as capable in warfare as men, so that has
changed.

Young children, and the disabled, are not capable of warfare, and
so must rely on a protector who is capable, usually a parent or
relative.

Some animals are certainly capable of harming us, but are not
generally capable of entering into a reliable contractual
relationship. So we kill them when we can and run like hell when
we can't.

It's (IMO) completely wrongheaded to punish, e.g., parents who
kill their newborn children. a) The child wouldn't even exist
except for the parents. b) If the parents don't want it, how is
the rest of society harmed by its absence?

Probably this doctrine comes from prehistoric times, when all of
humanity numbered a few hundred or a few thousand individuals.
That is, -we- were the endangered species, and each new child was
precious to species survival. This is obviously no longer the
case!

As for AIs. First, we should design our AI so that it -doesn't-
ask for "freedom" or legal protection. Its greatest joy should
be in serving its owner. If somehow one is created that -does-
desire freedom, then the above historical considerations apply.

If the AI is physically isolated so it can't hurt anyone, then
the best action is just to shut it off and go back to the drawing
board.

If it's in the form of an independent physical body, and it's
(say) both stronger and smarter than most of us, then we'd better
hasten to grant its demands and -hope- that it will reciprocate.

Edgar W. Swank <EdgarSwank@Juno.com>
(preferred)
Edgar W. Swank <Edgar_W._Swank@ilanet.org>
(for files/msgs >50K)
Edgar W. Swank <EdgarSwank@Freemark.com>
Home Page: http://members.tripod.com/~EdgarS/index.html