Emlyn wrote:
>
> Eliezer wrote:
> > As for that odd scenario you posted earlier, curiosity - however necessary or
> > unnecessary to a functioning mind - is a perfectly reasonable subgoal of
> > Friendliness, and therefore doesn't *need* to have independent motive force.
>
> I'm not sure I understand how curiosity can be a subgoal for a seed AI; I'd
> love some more on that.
You need curiosity in order to think, learn, and discover, and you need to
think, learn, and discover in order to be more generally efficient at
manipulating reality, and being more generally efficient at manipulating
reality means you can be more efficiently Friendly.
> I read CaTAI, and most of CaTAI 2.0 (do you still call it that)?
It's up to 2.2 these days.
> But I can't
> remember some crucial things you said about goals/subgoals. Specifically, do
> you expect them to be strictly hierarchical, or is it a more general
> network, where if x is a (partial) subgoal of y, y can also be a (partial)
> subgoal of x?
"I'm not rightly sure what kind of thinking could lead to this confusion."
Goals and subgoals are thoughts, not source code. Subgoals, if they exist at
all, exist as regularities in plans - that is, certain states of the Universe
are reliably useful in achieving the real, desired states. Since "subgoals"
have no independent desirability - they are only useful way-stations along the
road to the final state - they can be hierarchical, or networked, or strange
loops, or arranged in whatever order you like; or rather, whatever order best
mirrors reality.
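
(A toy sketch of that last point, purely illustrative and not anything out of
CaTAI: if the supergoal is the only source of desirability, then a "subgoal's"
value is just something computed over the plan graph, and the graph is free to
be a tree, a network, or a loop. The Goal class and derived_value function
below are invented for the example.)

# Illustrative only: subgoals carry no desirability of their own; a state's
# "value" is just how reliably it leads toward the single supergoal, so the
# plan structure can be hierarchical, networked, or even looped without
# anything special having to happen.

class Goal:
    def __init__(self, name, leads_to=None):
        self.name = name
        self.leads_to = leads_to or []   # list of (goal, reliability) pairs

def derived_value(goal, supergoal, seen=None):
    """How reliably achieving this state gets you to the supergoal."""
    if seen is None:
        seen = set()
    if goal is supergoal:
        return 1.0
    if goal.name in seen or not goal.leads_to:   # tolerate loops, dead ends
        return 0.0
    seen.add(goal.name)
    return max(r * derived_value(g, supergoal, seen) for g, r in goal.leads_to)

friendliness = Goal("Friendliness")
efficiency = Goal("efficient manipulation of reality", [(friendliness, 0.9)])
curiosity = Goal("curiosity / learning", [(efficiency, 0.8)])

print(derived_value(curiosity, friendliness))   # ~0.72, all of it borrowed

If curiosity stopped being useful, its derived value would simply drop to
zero; there is no independent term for it anywhere in the structure.
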
> Certainly, it strikes me that there ought to be multiple "top
> level" goals,
BLEAH!
> and they ought to come into conflict;
Quintuple bonus BLEAH!
> I don't think that one top-level goal can do the job.
Why on Earth not?
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence