One source of confusion: Design vs. Use. Complex systems, whether
designed by human intention or evolution (not necessarily assuming
yet that they are distinct), are designed for a "purpose" in the
sense that there is a use or set of uses that caused them to come
into being (or remain alive, in the case of evolution). A human
designer created a wrench for the purpose of turning bolts. Humans
and other life forms alive today are alive because the effects our
genes had on the world caused those genes to be replicated more
successfully than others, so one can speak of the genes having
"designed" us to replicate them.
But once designed, a thing is what it is, not what it was "meant"
to be. The designer's job is done, and the user can now choose to
respect or ignore the design. A wrench is rigid and heavy because
that makes it better at its designed function of turning bolts, but
those features also make it well-suited to cracking walnuts.
Cognitive and computational systems are no different: assembly
language programmers know the trick of using a subroutine-return
instruction to implement a jump table by pushing an address from
the table onto the stack and executing the return. The instruction
was "designed" for returning from subroutines, but having been
designed, it now does what it does, not only what it was meant
to do, so there is no reason not to use it for making jump tables.
Studying evolutionary biology is useful in understanding what the
brain does by using its design to guide our research; but it would
be as foolish to constrain our actions by that design as it would
be to let walnuts rot when there's a perfectly good wrench around.
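The return-as-jump trick can be sketched without any particular
ISA. Below is a minimal toy stack machine in Python (the machine,
its instruction names, and the program are all hypothetical, made
up for illustration): "ret" simply pops an address off the stack
and transfers control there, so pushing an entry from a table of
handler addresses and then executing "ret" performs an indexed
jump, exactly the reuse described above.

```python
def run(program, start=0):
    """Execute a toy program: each instruction is an (op, arg) pair.

    "ret" was "designed" to return from subroutines (pop an address,
    jump to it), but nothing stops us from pushing an address we
    never called from -- which turns it into a computed jump.
    """
    stack = []
    trace = []               # record each address we visit
    pc = start
    while pc is not None:
        op, arg = program[pc]
        trace.append(pc)
        if op == "push":     # push an address (e.g. from a jump table)
            stack.append(arg)
            pc += 1
        elif op == "ret":    # pop an address and transfer control to it
            pc = stack.pop()
        elif op == "halt":
            pc = None
    return trace

# A three-entry "jump table" of handler addresses.
table = [3, 4, 5]
case = 1                     # dispatch on case 1 -> address 4

program = [
    ("push", table[case]),   # 0: push handler address from the table
    ("ret",  None),          # 1: "return" -- actually an indexed jump
    ("halt", None),          # 2: never reached
    ("halt", None),          # 3: handler A
    ("halt", None),          # 4: handler B
    ("halt", None),          # 5: handler C
]

print(run(program))          # -> [0, 1, 4]: control lands on handler B
```

The machine never "knows" whether a popped address was pushed by a
call or by the dispatch code; once designed, the instruction does
what it does, not only what it was meant to do.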
This causes confusion because the question "What's our purpose?"
means at least two completely different things: one is "For what
purpose or purposes were we designed?". Another is "Given the
results of that design (i.e., our abilities and the state of the
world as it exists today), what purpose or purposes would it be
most rational for me to pursue?"
I think evolution answers the first question adequately, until
some competing theory betters it. But I also think it is of small
use in answering the second. It may suggest what processes will
shape the future in which we act, and it helps us understand our
abilities, but it offers no guidance for our choice of goals.
I think, for example, that Eli is confusing "personal values"
with those things we are /designed/ to value, such as reproduction.
Given that definition, he is right to treat it as something that
is not necessarily a /useful/ value in the teleological sense.
What he calls "intrinsic" values are those values that can be
rationally shown to support his given goal (that goal itself may
or may not be rationally derivable).
I think what Eric means by the assertion that values are personal
is something else. Namely, that given a goal, actions and the
products of those actions can be rationally shown to support that
goal to varying degrees, depending on how well they use resources.
But each individual has different resources,
different capabilities. Even if the goal is agreed upon, different
people will have different values that support that goal: if the
goal, for example, is wealth, then I would quite rationally value
my legs less than, say, Jerry Rice, because I am a programmer. If
I came to work tomorrow in a wheelchair, my ability to earn wealth
would be affected little. If Jerry Rice did, his income would be
affected drastically. It is also the case that people differ in
their ability to reason and choose goals, but that is not necessary
to make values personal. It still might be that there is a single
rational goal for humans to pursue, and therefore a single rational
"intrinsic" value system for each individual to have in support of
that goal, but its intrinsic nature makes it no less personal.