On Fri, 10 Jan 1997, Max M wrote:
> Artificial life might be created by running a simulation based on DNA (like
> the recent WIRED article). Maybe some kind of life will evolve by itself.
> All common stuff.
> If we upload, surely we will want to be regarded as legal entities in that
> form, with the right to make money, buy stuff, take someone to court, etc.
> It will then be natural that all artificial lifeforms be regarded as such.
> I mean, if life can exist in a computer system it can take any kind of shape.
>
> In a game like Quake from ID-Software you can play against "bots":
> computer-based adversaries built with neural nets, fuzzy logic, etc. At
> some point, computer-game characters will be as intelligent as other
> artificial lifeforms.
>
> Won't that make computer games against bots/AIs illegal?
How about instead of killing them, they just lie down, cover their eyes,
and count to 50? I don't think it's necessary to kill game bots to have a
violent game. For instance, Hollywood has become very adept at making
violent movies without actually killing anyone.
Playing "life or death" games with humans would probably be a pretty cool
job for a bot or AI, as long as both parties could walk away from the
carnage carnival (great name for a game, eh?).
To address the meat of your question, I say self-awareness in any being
presupposes and necessitates legal rights and protection. But how can
self-awareness be measured in an AI? (Maybe they can be equipped with
genitals: if such a being plays with its virtual self when looking in a
mirror, then we'll know! :) )
I expect that when the time comes to protect artificial beings, the issue
will be quite controversial. Many people will be unwilling or unable to
accept that something without a face is alive. I predict members of the
pro-life camp will be especially prone to this troubling opinion.
Exovivo!
Michael Bowling
mlbowli1@cord.iupui.edu