Re: >H ART: The Truman Show

Michael Nielsen (mnielsen@tangelo.phys.unm.edu)
Mon, 22 Jun 1998 17:27:54 -0600 (MDT)


On Mon, 22 Jun 1998, den Otter wrote:

> Michael Nielsen wrote:
>
> > Is it ethical to contain an AI in a limited world? This is an especially
> > interesting question if one takes the point of view that the most likely
> > path to Artificial Intelligence is an approach based on evolutionary
> > programming.
> >
> > Is it ethical to broadcast details of an AI's "life" to other
> > researchers or interested parties?
> >
> > Is it ethical to profit from the actions of an AI?
>
> Since AIs will presumably be made without emotions, or at least with
> a much more limited number of emotions than humans, you don't have
> to worry about their "feelings".

For the record, I may as well note that I think this is a highly
questionable assumption. On what do you base it?

One thing I don't doubt is that AIs will exhibit occasionally strange
behaviour. Even "rational" behaviour (whatever that means) is
surprisingly subjective, depending as it does upon what information is
available. Small variations in the available information can have a
large impact on behaviour, even if that behaviour is governed by a small
number of rigidly adhered-to rules.
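
As a toy illustration (a sketch only; the rule and the payoff numbers
are invented for the example, and nothing here is meant as a real AI
architecture), consider an agent that follows a single rigid rule,
"pick the option with the highest estimated payoff", under slightly
different information:

    # One rigidly adhered-to rule: take the option whose
    # estimated payoff is highest.
    def choose(estimates):
        return max(estimates, key=estimates.get)

    # Agent A's information:
    print(choose({"cooperate": 0.6, "defect": 0.5}))  # -> cooperate

    # Agent B follows the same rule, but one extra datum has
    # revised its estimate for "defect":
    print(choose({"cooperate": 0.6, "defect": 0.7}))  # -> defect

Same rule, a small difference in available information, opposite
behaviour.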

In turn, the available information varies quite a bit from intelligence
to intelligence, as do the available resources that can be devoted to
analysis.

One final question, before moving on to your next comment: Upon what do
we base our values, if not some form of emotional / irrational
attachment? It is certainly advantageous to have a reasonably strongly
held value system; apathy and inaction are the alternative. Emotions
seem to be a key factor in maintaining such value systems.

> Also, one of the first things you
> would ask an AI is to develop uploading & computer-neuron interfaces,
> so that you can make the AI's intelligence part of your own. This would
> pretty much solve the whole "rights problem" (which is largely
> artificial anyway),

What do you mean, the rights problem is "artificial"?

> since you don't grant rights to specific parts
> of your brain. A failure to integrate with the AIs asap would
> undoubtedly result in AI domination, and human extinction.

This seems to be an unjustified assumption. The other forms of life in the
world haven't all died off with the coming of human beings. Some of our near
relatives amongst the primates are still doing okay.

> P.s: I'm almost certain that our ethics will become obsolete with
> the rise of SI, they are simply too much shaped by our specific
> evolution etc.

This may be a hint as to why an SI may share our ethics: its
evolutionary path includes us. It depends upon how fast its own
evolution continues.

Michael Nielsen

http://wwwcas.phys.unm.edu/~mnielsen/index.html