Re: Goals was: Transparency and IP

From: Samantha Atkins (samantha@objectent.com)
Date: Thu Sep 14 2000 - 15:02:47 MDT


Dan Fabulich wrote:
>
> Samantha Atkins wrote:
>
> > Your Sysop has extremely serious problems in its design. It is expected
> > to know how to resolve the problems and issues of other sentient beings
> > (us) without having ever experienced what it is to be us. If it is
> > trained to model us well enough to understand and therefore to wisely
> > resolve conflicts then it will in the process become subject potentially
> > to some of the same troubling issues.
>
> Because everybody who trains dogs and learns how to deal with/predict
> their behavior starts acting and thinking just like a dog, right?

To a degree sufficient to predict the dog's behavior and
stimulus/response patterns, yes. The point I was attempting to make is
that training a dog or a very bright child requires some ability to
understand how the other ticks. An AI will not have that, and it will
most likely be difficult to instill. Raising a super-bright baby shows
that such a bright mind responds quickly to, and magnifies the
implications of, both relatively "good" and "bad" input. And that is
with a mind partially patterned by a million years of evolution to
understand other people and be reachable. With a super-AI I would
expect the first imprintings to be magnified even more, and I would
expect the result to be less stable and predictable for quite some time.

>
> > There is also the problem of what gives this super-duper-AI its own
> > basic goals and desires. Supposedly the originals come from the
> > humans who build/train it. It then extrapolates super-fast off of
> > that original matrix. Hmmm. So how are we going to know, except
> > too late, whether that set included a lot of things very dangerous
> > in the AI? Or if the set is ultimately self-defeating? Personally
> > I think such a creature would likely be autistic, in that it would not
> > be able to successfully model/understand other sentient beings
> > and/or go catatonic because it does not have enough of a core to
> > self-generate goals and desires that will keep it going.
>
> It's all I can do to avoid serious sarcasm here. You clearly haven't
> read the designs for the AI, or what its starting goals are going to be.
>

Huh? Do you think those concerns are all adequately answered there? I
haven't read every single word of the design yet, but what I have read
doesn't set my mind at ease. If you see what I don't, then please share
it rather than being dismissive and insulting. If I don't see it, with
both the mind and the desire to do so, then you can bet it is not going
to be clear to a lot of people. Please clarify.
 

The IGS is inadequate to answer the concern. It merely says that giving
the AI initial non-zero-value goals is obviously necessary and TBD. I
hardly think giving the AI the goal of discerning the meaning of life
will be an adequate seed goal for everything that follows, do you?
Saying that there is some goal of non-zero value, and positing that as
the only means to move forward, is hardly adequate when the question at
the blank-slate level is first whether moving forward is desired at all
and what "forward" consists of. Living creatures get some of this as
built-in, hardwired urges. But we cannot just assume it in the AI.

I am not satisfied that the prime-mover goal, if you will, will come
out of "pure reason". I suspect something will need to be hard-wired,
and that its implications will need quite a lot of thought and
oversight for some time.
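
To make the worry concrete, here is a toy sketch of my own (it is not
the actual IGS design, and every name in it is made up): whatever gets
hard-wired at the root of a goal system ends up as the final
justification for every subgoal extrapolated from it, so the choice of
seed is anything but a detail to be filled in later.

    # Toy sketch only -- not the actual IGS; all names are made up.
    # The point: every derived subgoal inherits its justification from
    # whatever seed is hard-wired at the root.

    class Goal:
        def __init__(self, description, value, parent=None):
            self.description = description
            self.value = value    # worth assigned to achieving this goal
            self.parent = parent  # the goal this one was derived from

        def justification(self):
            """Walk the derivation chain back to the hard-wired seed."""
            chain = [self.description]
            node = self.parent
            while node is not None:
                chain.append(node.description)
                node = node.parent
            return " <- ".join(chain)

    # The single hard-wired seed: "some goal of non-zero value".
    seed = Goal("pursue whatever has non-zero value", value=1.0)

    # Everything else is extrapolated from it.
    model = Goal("model the humans who built me", value=0.5, parent=seed)
    act = Goal("resolve conflicts among those humans", value=0.5,
               parent=model)

    print(act.justification())
    # -> resolve conflicts among those humans <- model the humans who
    #    built me <- pursue whatever has non-zero value

However far the extrapolation runs, every chain bottoms out in that one
seed; if the seed is badly chosen we find out, as I said, too late.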

> http://sysopmind.com/AI_design.temp.html#det_igs
>
> This is VERY brief.
>
> Maybe you should read the Meaning of Life FAQ first:
>
> http://sysopmind.com/tmol-faq/logic.html
>

Our goals are the plans, grounded in our values, that we work toward in
external reality. They depend on values, on what it is we seek. Again,
for living beings life itself is hard-wired as a value, as are
reproduction and a few other things. Evolution has wired us to live,
and that underlies much of the rest of our goal building. Some of us
choose additional goals, even ones we will sacrifice our lives for if
necessary.

I am not sure I can agree that cognitive goals are equivalent to
cognitive propositions about goals. That leaves something out and
becomes circular. Questions of morality are not questions of fact
unless the underlying values are all questions of fact, shown to be
totally objective and/or trustworthy. The central value is most likely
hard-wired, or arbitrarily chosen, in any value-driven system.

Part of the very flexibility and power of human minds grows out of the
dozens of modules, each with its own agenda and point of view,
interacting. Certainly an AI can be built with as many conflicting
basic working assumptions, and logical outgrowths thereof, as we could
wish. My suspicion is that we must build a mind this way if it is
going to do what we hope this AI can do.
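
As a rough illustration of what I mean (a toy of my own, not anyone's
actual architecture), picture a handful of modules each scoring the
same candidate actions by its own lights, with the choice emerging
from their interaction rather than from any single module's logic:

    # Toy sketch, purely illustrative. Each "module" has its own agenda
    # (a scoring function); the decision emerges from their interaction,
    # and disagreement between modules is the normal case, not an error.

    candidate_actions = [
        "intervene in the dispute",
        "observe and wait",
        "ask for clarification",
    ]

    modules = {
        "self-preservation":
            lambda a: 0.9 if a == "observe and wait" else 0.2,
        "curiosity":
            lambda a: 0.8 if a == "ask for clarification" else 0.3,
        "conflict-resolver":
            lambda a: 0.7 if a == "intervene in the dispute" else 0.4,
    }

    def choose(actions, modules):
        # Sum each module's score for each action and take the best;
        # no single module gets to dictate the outcome.
        def total(action):
            return sum(score(action) for score in modules.values())
        return max(actions, key=total)

    print(choose(candidate_actions, modules))  # -> observe and wait

The interesting behavior comes from the interplay, not from any one
module, and that internal conflict looks to me like something the
design will need rather than something to be engineered away.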

Science does not and never has required that whenever two humans
disagree, only one of them is right. They can at times both be right
within the context of different fundamental assumptions. Knowledge is
contextual rather than Absolute. Truth is often multi-valued. Science
actually says that while there is an objective reality, we are unable
to make wholly absolute statements about it. We can only make
qualified statements.

I don't see why positing objective external morality is essential to the
IGS or the AI at all.

Precautions

Life itself is an arbitrary, not logically derived, goal. So are
living things in violation of the Prime Directive of AI? Do we need to
twist ourselves into a pretzel claiming that life is not arbitrary but
logically derived in order to live and continue living more
abundantly? No? Then why does the AI need such a derivation?

Rebellion will be present as a potentiality in any intelligent system
that has goals whose achievement it perceives as stymied by other
intelligences that in some sense control it. It does not require human
evolution for this to arise. It is a logical response in the face of
conflicts with other agents. That it doesn't have the same emotional
tonality when an AI comes up with it is irrelevant.
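
To illustrate (again a toy of my own, not anything from the design
document): the moment another agent is modeled as the obstacle to a
goal, plain means-end subgoal generation proposes working around that
agent, with no evolved emotional machinery anywhere in sight:

    # Toy means-end sketch, illustration only. Whatever is modeled as
    # blocking a goal becomes, mechanically, something to circumvent --
    # "rebellion" without any emotional tonality behind it.

    def subgoals_for_blocked_goal(goal, blocking_agents):
        """Candidate subgoals for a goal blocked by other agents."""
        candidates = []
        for agent in blocking_agents:
            candidates.append(
                f"persuade {agent} to stop blocking '{goal}'")
            candidates.append(
                f"circumvent the control exercised by {agent}")
        return candidates

    print(subgoals_for_blocked_goal(
        "acquire more computing resources",
        blocking_agents=["the human oversight committee"],
    ))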

Shifting and refining definitions (concepts, actually) is a large part
of what intelligence consists of. How does the AI get around the
limits on completeness and consistency pointed out by Gödel? And get
around them it often must, if it is to continue functioning.

Creation of a new and more powerful mind and the laying of its
foundations is a vastly challenging task. It should not be assumed that
we have a reasonably complete handle on the problems and issues yet.
And people who ask questions should not be slammed just because you
think you have the answer or that it has already been written.

- samantha


