Re: Eugene's nuclear threat

From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Mon Oct 02 2000 - 05:14:32 MDT


Samantha Atkins writes:

> Wrong. The entire attitude is based so much on nothing but fear rather
> than finding a positive approach to guide what is going to develop in
> any case that this very attitude is more poisonous than what you fear.

What can possibly be more poisonous than the end of the world as we
know it(tm) and the death of all or most human beings?

> You (and whoever else you can persuade to such a view) stand to one side
> and act as if you can do the impossible. You act as if you can grade
> utterly what is safe and non-safe and keep the world safe by destroying

Of course not, and I hope I don't make too many mistakes on the wrong
side.

> all of what you see as non-safe. Yet you yourself have argued this does
> not work.
 
Relinquishment doesn't work sustainably, true. However, I don't
propose 1) relinquishment, or 2) anything sustained: just a culling of
the most dangerous branches using muscle (the old legislative/executive
thing), for a limited period, while we pass through the bottleneck of
vulnerability.

> Those dead researchers were also a large part of the hope of humanity
> transcending this mudball. Thank you very much.

I want to achieve transcension by not dying, thankyouverymuch. A
sterilized mudball, or a mudball turned into supermachinery that strips
out people in the process, is not exactly my idea of progress.

> That is a little more immediate and more directly aimed at destruction.

Duh. Do you think I'm a monster?

> I would suggest though that growing a positive future scenario that
> gives bright babies something better to use their brains for than
> figuring out how to exercise their own particular fears is probably the
> most fruitful way to avoid or at least minimize such situations.
 
Sure, but no matter what you do, a few of the bright babies will wind
up pathological individuals (blame evolution) and will put their
brightness to work destroying themselves, and us in the process.

To show you all how truly evil I am, I propose an early screening
program, identifying such people (thankfully, brilliant sickos are
quite rare) and locking them up where they can't hurt themselves or
others.

> It is not too hard to think up a variety of ways to destroy and fuck up
> on a massive scale. It is much harder to create and especially hard to

There aren't that many self-constructed scenarios which would end us
permanently, at least with our current state of knowledge. A large
asteroid sneaking up on us, or a GRB in our neck of the woods, would
do, but those aren't man-made.

> create an overarching vision that our science and technology is used to
> bring into reality and that can get buy-in from more than just us nerds.
 
Sounds like a good plan. But people don't buy into delayed
gratification, so it has to work in small increments.

> The best way to be reasonably sure that we won't create our own
> destroyer is for humanity to become the Power instead of trying to
> create something separate from them. Create SI as part of our own being

Sure, I'm all for it. Notice that the positive self-enhancement
autofeedback loop is still present. But in this case we start with a
human, so we can assume conservation of compassion, at least for the
first few steps of the self-enhancement process, which will hopefully
also be somewhat less fulminant.

> and growing edge. Learn to care for and evolve the entire body instead
> of just certain parts of the head. Then we will have our best chance.
 
Since when did we wind up as disembodied brains floating in vats?

> I think that all of us together are as smart as we get and that we need
> to learn to work together really well if we are to make a difference.
 
Our firmware is not made to cooperate in large groups, nor to deal
with extreme threats. We're not smart and rational enough for that. If
there ever was a use for human germline engineering, it's to boost our
EQ and IQ.


