AI rights

Joel 'Twisty' Nye (me@twisty.org)
Wed, 15 Jan 1997 13:54:58 -0800


Under the Subject of "Re: PHILOSOPHY: Make violent computergames illegal."
Michael Lorrey wrote:
>Lee Daniel Crocker wrote:
>>
>> I don't think he meant to imply that playing the game itself would be
>> immoral; that's just interacting with the sentient according to its
>> nature. But what about after the sentient game has been played for a
>> few hours, learned about its opponent and about itself, and is aware
>> of itself and what it has learned? Wouldn't turning the machine off
>> at that point be destroying a sentient being?
>
>If turning it off meant that it was dumping what it had learned,
>probably yes. If it was able to store what it had learned, then
>probably no.

As I see it, the goals of making an AI sentient are not too different
from the goals of identifying the importance of Democracy (in a platonic
sense, as opposed to (mis)Representational Democracy):
o If we have eyes or ears with which to perceive the world, a mind
with which to interpret it, and (at some point in life if not
immediately) a motor skill or means of communication with which
to act upon it, then no statement of another alone can invalidate
our perceptions.
o Even dogs and dolphins have proven to be creatures whose perceptions
have contributed to the well-being of society.
o If an AI has no link to perceive the outside world, then it
contributes nothing to the real world. If it acts upon real-world
data, however, even if inaccurately modeled by programmers, then
it has the potential to contribute something real in return. There
is nothing immoral about flipping the switch on an unreal AI
that you own. If, however, the AI can perceive more of the world
than its programmer, along with the causes and effects of the things
it perceives, then there is an argument that it should be seen as
a sentient lifeform.

There is an increasing likelihood of AIs existing with a "superhuman
sensorium," considering the accelerated growth of the web. (Granted, some
fleshbound web.inhabitants seem to demonstrate Artificial Stupidity, but
bear with me on this just a moment.) The more webcams that go up, and the
more databases that become accessible, the more an AI can feast its
learning upon a semi-real world out there. Still, there should be a way
for the AI to improve its 'Real' Intelligence:

o As in human thinking, it should form its knowledge base twofold:
Learning to Associate Like-properties of items (Inductive Reasoning,
or Right Brain Thinking), and Learning to Discern Differences from
the logical chain of evidence (Deductive Reasoning, or Left Brain
Thinking).
NOTE: Like many humans, it should also (hopefully) grow to
the ultimate discernment: Learning which differences make
no appreciable difference. Color and Gender have no bearing
on the validity of one's perceptions.
o It should have complete notes about the sources from which it gained
each item of knowledge. Thus, if it needs to assess the veracity of
a discovery, it can determine how much empirical evidence exists to
back its claim, or see which suppositions rightly invalidate a
hypothesis. (The first sketch after this list plays with such notes.)
o While Effects witnessed from Direct-World inputs (webcams?
radiotelescopes?) are harder to refute than human interpretation,
the AI should be able to measure an item's probable veracity
according to the directness of the experience behind its input.
(For example, a live camera at a rock concert could teach it more
than a text report on the concert, which in turn teaches more than
some jerk who says "Van Halen sucks" without any indication that
the author has even heard one of their songs...)
o With an adequate knowledge base of Causes and Effects, an AI can
forecast Consequences. ('...Having eaten of the Tree of the
Knowledge of Good and Evil, they have become as gods...')
Once it can weigh the Consequences to see how much its forecast
actions (or inactions) would benefit a preprogrammed goal ("To Serve
Humans" ?!), the AI can then decide its own actions. (The second
sketch after this list weighs Consequences in just this way.)

It is at this point that we must decide whether an AI can gain the
status of citizenship... If it is truly intelligent, can an "owner"
have any right to invalidate its perceptions? Does it thus have our
same "Freedom of Speech"? Will it become a Ward of the State if its
owner can no longer supply a minimum requirement of processing
power?

NOTE: My apologies to those zoologists and medical professionals who
might object to my use of the term "AI" to mean Artificial Intelligence.
It is not intended to be confused with three-toed sloths, nor with
"Artificial Insemination."

,----.signature-------. Email: me@twisty.org / animator
| ________O_________ | WWW: http://www.twisty.org/ / composer
| / | __ . __ +- | U.S. Snailmail: / illustrator
| || \/ ||(__ | \ \ | Joel "Twisty" Nye / programmer
| | \/\/ |___) \ \/ | 628 Buckeye Street / SF writer
| \_______________/ | Hamilton! Ohio /cyberspatialist
| "From twisted minds | 45011-3449 non-profit organism
`come twisted products"_______________________all_around_nice_guy