Re: Risk vs. Payoff

From: Aleks Jakulin
Date: Thu May 10 2001 - 13:42:35 MDT

> More seriously, you have an implicit assumption here that survival of the
> species, or survival of intelligent life, or some otherwise defined class,
> is our "supergoal" (to borrow some jargon). Many here would argue that this

I was referring to the most fundamental class, namely life. It's perhaps the
only one which is not abstract.

> is fundamentally at odds with extropianism, which has a more individualistic
> outlook... we would save ourselves, at the cost of the long term good of the
> species perhaps. Although it is not at all clear that this is a required
> tradeoff.

Individualism and selfishness are just other evolved heuristics. Can you
transcend your "naturally selected" impulses and comprehend with the mind alone?

> I think the point that you are wanting to make is that natural selection is
> more efficient than the alternative, whatever that may be. I don't think
> that's necessarily so. The idea of becoming transhuman is that we can shape
> ourselves as we see fit, and as is best to meet the challenges of our

We are already not subject solely to natural selection. For example, if you like
green eyes, you will choose a partner with green eyes, and a certain proportion
of your offspring will have green eyes. You don't have to wait for a random
mutation. You just pick the trait off the shelf, whether by reason or by impulse,
and mix it into your own genomic stew. It's Lamarckism, except that each iteration
takes a generation, and the scalpel is a bit blunt. Surely we'll fix that.

But my point is not to exemplify natural selection, as I have stressed before. I
agree with most of your objections to it. I'm just pointing out that design and
selection, without exploration, are dangerous.


This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:04 MDT