Brent wrote:
> Won't it be great when we can make undesirable things like
> paper work (or anything else we'd like to do but now hate doing) be
> orgasmick or better? ;) Now that'll be true freedom! Imagine how
> much we'll all get done with such abilities to want what we want to
> want, and not what our creator hard wired us to want!
But our decisions to make these changes aren't "really" free. That is,
if freedom is the ability to change our wants and desires, we can't be
free by that definition, because only pre-existing wants and desires can
motivate us to make these changes.
So you want to make paperwork orgasmic? This isn't freedom; it is
merely one built-in goal being given higher priority.
It was pointed out earlier that one way to think about this goal-meddling
is in the context of Minsky's "society of mind" model. In this model,
our minds are composed of multiple agents, cooperating and competing.
Each has its own skills and abilities, but also its own desires and
agenda. We have conflicting goals and desires because of this
multiplicity of agents. In some circumstances one agent has the upper
hand, and in other situations another does.
Changing goals, then, would be a matter of one agent momentarily being
dominant and being able to "lock in" its dominance.
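To make the "lock in" idea concrete, here is a minimal toy sketch in
Python (my own construction, purely illustrative; the agent names,
numbers, and the lock_in method are assumptions for the example, not
anything from Minsky): each agent carries a priority weight, the
highest-priority agent gets to act, and one agent can permanently raise
its own weight so that it keeps winning future contests.

    from dataclasses import dataclass

    @dataclass
    class Agent:
        name: str
        priority: float  # how strongly this agent's desires weigh right now

    class Society:
        def __init__(self, agents):
            self.agents = agents

        def dominant(self):
            # The agent with the highest priority acts at this moment.
            return max(self.agents, key=lambda a: a.priority)

        def lock_in(self, name, boost):
            # "Goal-meddling": one agent permanently raises its own weight,
            # so it keeps the upper hand in all later contests.
            for a in self.agents:
                if a.name == name:
                    a.priority += boost

    mind = Society([
        Agent("do-paperwork", 0.2),
        Agent("seek-novelty", 0.7),
        Agent("rest", 0.5),
    ])

    print(mind.dominant().name)        # seek-novelty wins at first
    mind.lock_in("do-paperwork", 1.0)  # make paperwork "orgasmic"
    print(mind.dominant().name)        # now do-paperwork always wins

The point of the toy model is only that nothing outside the society
chose the change: one pre-existing agent simply rewrote the weights in
its own favor.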
Pragmatically, this may not be such a great idea, as presumably there
is some evolutionary reason for this degree of mental diversity, which
we experience as indecision and frustration. However, we can perhaps
argue that modern conditions are sufficiently different from those under
which we evolved to justify giving some agents more power.
Still, looking at Brent's example, making paperwork that much fun may
seem like a good idea at the time, but we can easily imagine situations
where such Sphex-like behavior (mechanically repeating a rewarding
routine no matter what else is at stake) would turn out to be a mistake.
Philosophically, it hardly seems an advancement of freedom for one of your
subparts to become dominant over the others (except from that part's point
of view, I suppose). Rather, it is a change of mental architecture which
will be beneficial in some circumstances and arguably harmful in others.
The interesting question is whether AIs will be designed with a similar
mental organization. Will they be beset by the inconsistencies and
contradictions of our human minds with all their parts? Apparently it
was the best evolution could do. Can we do better?
Hal