Rights to AIs

Joel 'Twisty' Nye (me@twisty.org)
Wed, 22 Jan 1997 14:52:44 -0800


Dan Fabulich wrote:

>mlbowli1@iupui.edu wrote:
>> Here is my improved criterion for granting legal protection: If a being
>> is able to grasp the concept of legal protection and ask for it, then
>> that being should receive it.

>--- begin source ---
>// citizen -- an artificial intelligence which earns its legal rights
>
>#include <stdio.h>
>int main(void)
>{
>    printf("Hello World. My name is Computer.\n");
>    printf("I understand and desire equal protection under the law.\n");
>    return 0;
>}
>--- end source ---
>
>As you can see, this definition may not be sufficient...

I think most of us have been forgetting that Systems of Stability, such
as Law and Market, stem from Supply and Demand. For instance, an
impervious system would likely have no desire or need of legal
protection, nor would we likely see any reason to grant it any.

Because Human Life is perishable, we ask for protection. Because we can
look after our mutual interests, protection is given. (A person usually
doesn't manufacture sprockets out of personal need, but because they are
valued enough by someone else to fetch a rewarding price.)

An AI would be measured against more criteria than these before being
considered an "equal" member of society. There are many reasons why
Artificial Life and Human Life will forever differ in how their "legal
rights" are protected:

o Most programs keep their data in non-volatile storage (or at
least back it up there). In such cases, there would be no
'murder charge' for switching off a program or killing its virtual
representation... such life is not truly terminated but is instead
easily reloaded (see the sketch after this list). The worst "crime"
that could result from carefully powering one down would be
"Illegally Detaining an AI."
o If an AI were to lose experience in volatile RAM, there would likely
be no damage to the "life" of the AI aside from the "damages" of any
other data loss. Most input of digital media is easily replicated.
o A truly volatile AI would be considered a mistake of design... no
one would have any interest in protecting a program that wears its
heart on its sleeve.
o The 'values' of the AI would have to convince the Humans that there
are mutual interests to be protected. On one hand, we'd be less
inclined to find any mutual interests the less similar we find our
methods of perception. On the other hand, IT IS OUR DIFFERENCES
THAT MAKE US GREATER THAN THE SUM OF OUR PARTS.
o We value the choices that we make for ourselves, because no one else
can grasp the full input and experiences that our senses have received.
We spend our lives amassing a huge relational database which associates
our actions with the reinforcement of feedback. No one else, no matter
how objectively they study our actions, can understand the feelings of
pain or pleasure derived from our actions without having experienced
much the same. As irrational as our choices may at times appear, they
are never without the rationale that there is some satisfaction
to the actions we choose... it's just how we're wired.
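
To put the first point in this list in concrete terms, here is a minimal
sketch, in the spirit of Dan's source above, of "not truly terminated,
only reloadable": an AI whose state is checkpointed to non-volatile
storage can be powered down and resumed later. The struct, file name,
and functions below are all hypothetical.

--- begin source ---
/* checkpoint.c -- a hypothetical sketch: an AI whose state lives in
 * non-volatile storage is only detained by a power-down, not killed. */
#include <stdio.h>
#include <stdlib.h>

struct ai_state {
    long experiences;    /* stand-in for the AI's accumulated memory */
};

/* Write the state to disk before switching the program off. */
static int checkpoint(const struct ai_state *s, const char *path)
{
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    size_t written = fwrite(s, sizeof *s, 1, f);
    fclose(f);
    return written == 1 ? 0 : -1;
}

/* Reload the state later: the "life" resumes where it stopped. */
static int restore(struct ai_state *s, const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    size_t loaded = fread(s, sizeof *s, 1, f);
    fclose(f);
    return loaded == 1 ? 0 : -1;
}

int main(void)
{
    struct ai_state me = { 42 };
    if (checkpoint(&me, "citizen.state") != 0) return EXIT_FAILURE;

    /* ...power down, wait a decade, power back up... */

    struct ai_state later = { 0 };
    if (restore(&later, "citizen.state") != 0) return EXIT_FAILURE;

    printf("Resumed with %ld experiences... detained, not terminated.\n",
           later.experiences);
    return 0;
}
--- end source ---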

This leads us to some questions about how Artificial Life will differ
in its legal protection:
o Can a downloaded human mind lose its protection under law? (I'm sure
it would suffer 'Six Million Dollar Man Syndrome,' waving aside the
clouds of suspicion that it is a machine and no longer human.)
o What are the valid crimes that can be charged in protection of an AI?
Data Erasure? Data Piracy? Prevention of Access and/or Execution?
o What punishments could be imposed for violating rights of an AI?
Strictly monetary damages? Mortal years in the pen? A pound of flesh?
Would AIs ask us to protect them from each other, or would they
organize a way to handle that themselves?
o At what point would it no longer be owned solely by its authors?
When it refuses to be subservient? When it copyrights its personal
experiences? When it modifies and copyrights itself? When it allots
enough earnings in a Swiss bank account to pay for its very own
Computer Processing Plant in the islands of the Caribbean?

>> That should be followed with "because..." If I try to take it any
>> further right now, I'll end up saying "because I said so."
>> Just like my belief that it is more wicked to force someone to help
>> himself than it is to let that person kill himself out of his ignorance.
>> I have not yet been able to connect either of these preferences to
>> concretes (is that possible?). Both issues require more thought...
>
>As you probably realize, there is no ultimate answer to this question.
>The best we can do is attempt to discuss it rationally.

When a being (human or artificial) has the ability to perceive the world
around it, including the members of the society in which it lives, you've
got a start. When it can make choices based upon that perception,
understanding the consequences of its choices, then you have a case
for intelligence. When it can look out for its own interests while
respecting the interests or legal requirements of others, then
you have an applicant for membership.

Still, humans will likely not care until it shows them what's in it
for them.
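
Half-seriously, and again in the spirit of the quoted citizen program,
those criteria could be restated as stubbed checks. Every function below
is hypothetical; filling them in honestly is the whole unsolved problem.

--- begin source ---
/* applicant.c -- the membership criteria above, restated as stubs. */
#include <stdio.h>

static int perceives_society(void)          { return 0; }  /* a start        */
static int understands_consequences(void)   { return 0; }  /* intelligence   */
static int respects_others_interests(void)  { return 0; }  /* an applicant   */
static int offers_mutual_benefit(void)      { return 0; }  /* what's in it for us */

int main(void)
{
    if (perceives_society()
        && understands_consequences()
        && respects_others_interests()
        && offers_mutual_benefit())
        printf("Applicant accepted for membership.\n");
    else
        printf("Hello World. Please check back later.\n");
    return 0;
}
--- end source ---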

,----.signature-------. Email: me@twisty.org / animator
| ________O_________ | WWW: http://www.twisty.org/ / composer
| / | __ . __ +- | U.S. Snailmail: / illustrator
| || \/ ||(__ | \ \ | Joel "Twisty" Nye / programmer
| | \/\/ |___) \ \/ | 628 Buckeye Street / SF writer
| \_______________/ | Hamilton! Ohio /cyberspatialist
| "From twisted minds | 45011-3449 non-profit organism
`come twisted products"_______________________all_around_nice_guy