Re: PHILOSOPHY: Self Awareness and Legal Protection

Hara Ra
Wed, 22 Jan 1997 08:56:25 -0800

Kathryn Aegis wrote:
> Michael Bowling:
> >Only systems that are *conscious* enough to grasp justice have a need for
> >it. If a beautiful or super-heavy system is smart enough to ask for
> >protection, then protect it.
> I sense a very large opening here, but I'll tiptoe around it and just
> ask the following: Does this criterion apply to 'babies'--systems
> that are too young to yet grasp abstract concepts of justice and yet
> can be predicted to develop into mature 'adult' systems?

Yah. At least three very difficult questions here:

1. Singularity Alert. If the putative AI is 'conscious', is it 'smarter'
than we are? Are its goals beyond our capacity for understanding, or is
it misleading us? The point of the Singularity is that we have no way to
know or judge.

2. If we consider "babies" per the above, then the environment in which
these systems "grow" must be considered as well. Onward to concerns about
dysfunctional 'families', AI abuse, and AI 'cultural' values.

3. Broken AIs, just like broken human beings, require care, often with
reduced or no responsibility (and reduced rights, which must be
handled by their caretakers). Sanatoria, mental hospitals, 'Alzheimer's
for AIs' all come to mind.

Obviously, all the difficult moral choices we face apply as well to
'conscious' AIs. There's an old rubric from science-fiction writing: all
aliens (or AIs) are human beings in disguise. This tendency (or
requirement) must be used with EXTREME caution.

| Hara Ra <> |
| Box 8334 Santa Cruz, CA 95061 |