Goals (was: Transparency and IP)

From: Dan Fabulich (daniel.fabulich@yale.edu)
Date: Thu Sep 14 2000 - 11:34:44 MDT


Samantha Atkins wrote:

> Your Sysop has extremely serious problems in its design. It is expected
> to know how to resolve the problems and issues of other sentient beings
> (us) without having ever experienced what it is to be us. If it is
> trained to model us well enough to understand us, and therefore to
> wisely resolve conflicts, then in the process it will potentially
> become subject to some of the same troubling issues.

Because everybody who trains dogs and learns how to deal with/predict
their behavior starts acting and thinking just like a dog, right?

> There is also the problem of what gives this super-duper-AI its own
> basic goals and desires. Supposedly the originals come from the
> humans who build/train it. It then extrapolates super-fast off of
> that original matrix. Hmmm. So how are we going to know, except
> too late, whether that set included a lot of things very dangerous
> in the AI? Or if the set is ultimately self-defeating? Personally
> I think such a creature would likely be autistic, in that it would not
> be able to successfully model/understand other sentient beings
> and/or go catatonic because it does not have enough of a core to
> self-generate goals and desires that will keep it going.

It's all I can do to avoid serious sarcasm here. You clearly haven't
read the designs for the AI, or what its starting goals are going to be.

http://sysopmind.com/AI_design.temp.html#det_igs

This is VERY brief.

Maybe you should read the Meaning of Life FAQ first:

http://sysopmind.com/tmol-faq/logic.html

-Dan

      -unless you love someone-
    -nothing else makes any sense-
           e.e. cummings


