Subgoals don't necessarily make sense when applied to an AI. Subgoals
are fundamentally heuristics to help it achieve higher-level goals.
We work with subgoals, because we aren't smart enough to consider the high
level goals directly. We want to eat and have things, so we earn money.
We want to earn money, so we go to work. We want to go to work, so we
get in the car. We want to get in the car, so we walk to the front door.
Etc. Breaking problems down in this way allows beings of our limited
intelligence to make progress in the world.
Whether AIs use subgoals or not is not important. What matters is that
they are trying to maximize their top-level goal. We can idealize the
AI's behavior as asking: if I change the world state from A to B,
which of A or B ranks higher under my top-level goal? It must be able
to answer this question in order to choose what to do. Subgoals may be
a necessary heuristic for dealing with the multiplicity of possible
actions, or some other mechanism may turn out to be used.
But ideally, whatever the internal workings, the actual behavior of the
AI will be the same as if it did it the brute-force way, considering
all possible actions and choosing the one that maximizes its goal.
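The idealized brute-force procedure can be sketched as follows. This
is only an illustration: the utility function, the world-state
encoding, and the action set are invented stand-ins, not a proposal
for an actual top-level goal.

```python
# Sketch of the idealized decision procedure: rank every reachable
# world state under the top-level goal and pick the action that leads
# to the highest-ranked state. All names here are hypothetical.

def utility(state):
    # Stand-in top-level goal: reduce a world state to one number.
    return state["happiness"] - state["suffering"]

def choose_action(actions):
    # actions: dict mapping action name -> resulting world state.
    # Brute force: evaluate every outcome, take the argmax.
    return max(actions, key=lambda a: utility(actions[a]))

actions = {
    "do_nothing": {"happiness": 5, "suffering": 2},  # utility 3
    "help":       {"happiness": 8, "suffering": 1},  # utility 7
}
print(choose_action(actions))  # -> help
```

Real systems can't enumerate all actions, of course; the point is
only that whatever heuristics are used internally, the behavior
should match what this exhaustive ranking would produce.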
From this perspective we can see that "make people happy" is far too
vague to be suitable as a top-level goal. We need an algorithm we can
build into the machine which, given a potential state of the world,
returns a ranking of how desirable that state is. The AI's job is then
to do its best to change the world so as to maximize that ranking.
It's also clear from this that there needs to be only one top-level
goal. If you have more than one, the AI would have built-in
contradictions. The top-level goal can have a composite structure
(maximize A as long as B doesn't fall below a certain minimum), but
there needs to be just one.
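A composite goal of that form is still a single ranking function, and
one way to see that is a lexicographic sketch like the following. The
quantities A and B, the floor value, and the sample states are all
invented for illustration.

```python
# Sketch of a composite top-level goal: maximize A subject to B
# staying above a floor. It is still one ranking function, so the AI
# has one goal, not two potentially contradictory ones.

B_FLOOR = 0.5  # hypothetical minimum acceptable value of B

def composite_utility(state):
    # Python compares tuples lexicographically, so every state that
    # violates the constraint (first element 0) ranks below every
    # state that satisfies it (first element 1). Among satisfying
    # states, rank by A alone.
    if state["B"] < B_FLOOR:
        return (0, state["B"])  # violated: prefer less-bad B
    return (1, state["A"])      # satisfied: prefer higher A

states = [
    {"A": 10, "B": 0.2},  # highest A, but B is below the floor
    {"A": 3,  "B": 0.9},
    {"A": 7,  "B": 0.6},
]
best = max(states, key=composite_utility)
print(best)  # -> {'A': 7, 'B': 0.6}
```

The state with the highest A loses because it violates the
constraint, yet no second goal was needed: the constraint is folded
into the one ranking.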
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:50:14 MDT