Re: Why would AI want to be friendly?

From: Samantha Atkins
Date: Sun Oct 01 2000 - 02:56:12 MDT

"Eliezer S. Yudkowsky" wrote:
> Emlyn wrote:
> >
> > But I can't
> > remember some crucial things you said about goals/subgoals. Specifically, do
> > you expect them to be strictly hierarchical, or is it a more general
> > network, where if x is a (partial) subgoal of y, y can also be a (partial)
> > subgoal of x?
> "I'm not rightly sure what kind of thinking could lead to this confusion."
> Goals and subgoals are thoughts, not source code. Subgoals, if they exist at
> all, exist as regularities in plans - that is, certain states of the Universe
> are reliably useful in achieving the real, desired states. Since "subgoals"
> have no independent desirability - they are only useful way-stations along the
> road to the final state - they can be hierarchical, or networked, or strange
> loops, or arranged in whatever order you like; or rather, whatever order best
> mirrors reality.

Goals and subgoals are logical entities: goal states. Calling them
thoughts seems to conflate mechanism, information flow, and processing
with purpose. Is this as it should be? Is a state a thought? A goal is
a state, a desired configuration of system/reality. Thus a goal is not
a thought.
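The structural point in the quoted passage, that subgoals have no independent desirability and may be arranged as a hierarchy, a network, or even a strange loop, can be sketched in code. This is purely my own illustration; all names below are hypothetical and nothing here comes from the thread:

```python
# Sketch: subgoals as edges in a plan graph. A state's "usefulness" is
# entirely derived from whether it leads to a real, desired top-level goal,
# so the graph may be a tree, a network, or even contain cycles.
from collections import defaultdict

class GoalGraph:
    def __init__(self):
        # subgoal -> set of goals it helps achieve
        self.serves = defaultdict(set)

    def add_subgoal(self, subgoal, goal):
        self.serves[subgoal].add(goal)

    def is_useful(self, state, top_goals):
        """A state matters only if some chain of 'serves' edges
        reaches a genuinely desired top-level goal."""
        seen, stack = set(), [state]
        while stack:
            s = stack.pop()
            if s in top_goals:
                return True
            if s in seen:
                continue  # handles cycles / strange loops safely
            seen.add(s)
            stack.extend(self.serves[s])
        return False

g = GoalGraph()
g.add_subgoal("earn_money", "survive")
g.add_subgoal("stay_healthy", "earn_money")
g.add_subgoal("earn_money", "stay_healthy")  # a strange loop is fine
print(g.is_useful("stay_healthy", {"survive"}))  # True
```

Note that nothing in the structure privileges hierarchy: the cycle between "earn_money" and "stay_healthy" causes no trouble, because desirability is checked against the top-level goals rather than stored in the subgoals themselves.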

> > Certainly, it strikes me that there ought to be multiple "top
> > level" goals,
> > and they ought to come into conflict;
> Quintuple bonus BLEAH!
> > I don't think that one top level goal will do the job.
> Why on Earth not?

A top-level goal of Friendliness would get most sentiences holding it
as their most important, central goal killed. For most living things,
Friendliness is usually a subgoal of things like survival. Living things
are not hand-crafted with a top-level goal. We aren't sure whether such
crafting will take and hold, and if it does, whether the result will be
viable or meet our notions of "friendliness" at all.

- samantha

This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:50:14 MDT