Re: Why would AI want to be friendly?

From: Emlyn (emlyn@one.net.au)
Date: Sat Sep 30 2000 - 20:21:40 MDT


Eliezer wrote:
> As for that odd scenario you posted earlier, curiosity - however necessary
> or unnecessary to a functioning mind - is a perfectly reasonable subgoal of
> Friendliness, and therefore doesn't *need* to have independent motive
> force.

I'm not sure I understand how curiosity can be a subgoal for a seed AI; I'd
love to hear more about that.

I read CaTAI, and most of CaTAI 2.0 (do you still call it that?), but I can't
remember some crucial things you said about goals/subgoals. Specifically, do
you expect them to be strictly hierarchical, or to form a more general
network, where if x is a (partial) subgoal of y, y can also be a (partial)
subgoal of x? Certainly, it strikes me that there ought to be multiple "top
level" goals, and that they ought to come into conflict; I don't think a
single top-level goal would do the job. I suspect that applies at any level
of the goal "hierarchy" (or network) on which you choose to focus. Is this
your thinking?
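
To make the distinction concrete, here's a rough toy sketch in Python - my
own invention, nothing to do with CaTAI's actual design, and all the goal
names and weights are made up - of a goal *network* rather than a strict
hierarchy: goals can lend partial support to one another, even mutually, and
more than one goal carries intrinsic ("top level") value.

# Toy goal network: each goal can partially support any other goal
# (cycles allowed), and several goals can have intrinsic value.
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    intrinsic_value: float = 0.0            # nonzero only for "top level" goals
    supports: list = field(default_factory=list)  # list of (parent_goal, weight)

    def add_support(self, parent, weight):
        """Declare that achieving this goal contributes 'weight' toward parent."""
        self.supports.append((parent, weight))

def derived_value(goal, depth=3, seen=None):
    """Value flowing into a goal from the goals it (partially) serves.
    Cycles are allowed, so recursion is cut off by depth and a seen-set."""
    seen = seen or set()
    if depth == 0 or goal.name in seen:
        return goal.intrinsic_value
    seen = seen | {goal.name}
    total = goal.intrinsic_value
    for parent, weight in goal.supports:
        total += weight * derived_value(parent, depth - 1, seen)
    return total

friendliness = Goal("Friendliness", intrinsic_value=1.0)
survival     = Goal("Survival",     intrinsic_value=0.8)  # a second top-level goal
curiosity    = Goal("Curiosity")

curiosity.add_support(friendliness, 0.4)   # curiosity partially serves Friendliness
curiosity.add_support(survival, 0.3)       # ...and partially serves Survival
survival.add_support(friendliness, 0.5)    # top-level goals can feed each other
friendliness.add_support(curiosity, 0.1)   # x and y can be partial subgoals of each other

for g in (friendliness, survival, curiosity):
    print(g.name, round(derived_value(g), 2))

In this picture Curiosity picks up derived value from both top-level goals
without having any intrinsic value of its own, which is roughly the sense in
which I read "subgoal of Friendliness" above.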

Emlyn


