Re: Eugene's nuclear threat

From: Samantha Atkins (samantha@objectent.com)
Date: Thu Oct 05 2000 - 03:38:50 MDT


Eugene Leitl wrote:
>
> Samantha Atkins writes:
>
> > Wrong. The entire attitude is based so much on nothing but fear rather
> > than finding a positive approach to guide what is going to develop in
> > any case that this very attitude is more poisonous than what you fear.
>
> What can be possibly more poisonous than the end of the world as we
> know it(tm) and death of all or most human beings?

I have been positioning everything I said against the likelihood of
all or most humans dying. It is precisely this I want to avoid.
Perhaps there was some miscommunication. But the world as we know it
is in fact history. There is no way that world will stick around except
in a VR. We are in for much too much change, much too quickly, for
anything like the world today (in most respects) to be likely to stick
around for long. But that doesn't mean the ecology gets ripped up (it is
more likely to flourish long term) or that all humans die. They will
most likely become more and more augmented and/or upload, but they
needn't die.

The most interesting period for me is the one between now and the first
SI or nanotech or uploading, all the stuff we need to get right in order
to get there without such massive breakdowns and freakouts that we do
something really stupid. More positively, I believe we need some really
good memetic engineering to set up a positive vision of the future and a
driver for ethics that actually works to make the world a better, happier,
and vastly more productive (and yes, friendlier) place than it is
today.

>
> > You (and whoever else you can persuade to such a view) stand to one side
> > and act as if you can do the impossible. You act as if you can grade
> > utterly what is safe and non-safe and keep the world safe by destroying
>
> Of course not, and I hope I don't make too many mistakes on the wrong
> side.
>
> > all of what you see as non-safe. Yet you yourself have argued this does
> > not work.
>
> Relinquishment doesn't work sustainably, true. However, I don't
> propose 1) relinquishment 2) sustainably, just culling of the most
> dangerous branches using muscle (the old legislative/executive thing),
> for a limited period of time while we're passing the bottleneck of
> vulnerability.
>

Without a vision of where we're going that enrolls most researchers and
a lot of the rest of us I don't think you can get to where you want to
go. The business of culling without a clear understanding of where we
want to go, and without popular buy-in, is itself quite dangerous to our
future (and present).

I'm not sure you can define a delimited period of time that we are
vulnerable. It seems to me we would be very vulnerable up to the point
where we're all backed up fully in as failsafe a way as is imaginable.

 
> > Those dead researchers were also a large part of the hope of humanity
> > transcending this mudball. Thank you very much.
>
> I want to achieve transcension by not dying, thankyouverymuch. A
> sterilized mudball, or mudball turned into supermachinery, stripping
> people in the process, that's not exactly my idea of progress.
>

You have me confused with someone else. I don't plan on any of those
scenarios.
 
> > That is a little more immediate and more directly aimed at destruction.
>
> Duh. Do you think I'm a monster?
>

Nope. I think you, like most of us, can get frightened (rightfully so)
and can reach conclusions that aren't really likely to get you what you
want and have really nasty consequences of their own.
 
> > I would suggest though that growing a positive future scenario that
> > gives bright babies something better to use their brains for than
> > figuring out how to exercise their own particular fears is probably the
> > most fruitful way to avoid or at least minimize such situations.
>
> Sure, but no matter what you do, a few of the bright babies will wind
> up as pathological individuals (blame evolution) and would put their
> brightness to destroying themselves, and us in the process.
>

Yeah. But you can at least make it a lot less likely by providing more
positive channels. It also helps to have enough research openness for
the dangerous channels to get recognized early on (as well as the
near-psychotic researcher here and there).

I'm not saying we don't have any safeguards. But we should exhaust
positive means first and be very careful what kinds of negative
safeguards we propose and implement.
 
> To show you all how truly evil I am, I propose for an early screening
> program, identifying such people (thankfully, brilliant sickos are
> quite rare), and locking them up where they can't hurt themselves and
> others.
>

That simply will not work. Non-sick people sometimes make decisions
that are extremely dangerous and have very negative unforeseen
consequences. The occasional psychopath is not the main worry as I see
it.

 
> > It is not too hard to think up a variety of ways to destroy and fuck up
> > on a massive scale. It is much harder to create and especially hard to
>
> There aren't that many self-constructed scenarios which end us
> sustainably, at least with our current state of knowledge. A large
> asteroid sneaking up, or a GRB in our neck of the woods would do, but
> they're not man-made.
>
> > create an overarching vision that our science and technology is used to
> > bring into reality and that can get buy-in from more than just us nerds.
>
> Sounds like a good plan. But people don't buy into delayed
> gratification, so it has to work in small increments.
>

It depends on how it is sold. Memes that delay gratification for
generations have been sold before (rightly or wrongly). The former
Soviet Union is a case in point. It wasn't a very healthy meme, but it
shows that long-term goals can be sold well enough to counteract
short-range thinking. Fortunately, things that benefit us short-term can
in many cases be aligned with policies and long-term goals. One of the
hardest jobs of meme hacking is to get people to really see more
globally and more at a system level. It goes against a lot of
conditioning. In the West we also have a lot of anti-intellectualism
and anti-science/technology memes to counteract and replace with
something healthier.
 
> > The best way to be reasonably sure that we won't create our own
> > destroyer is for humanity to become the Power instead of trying to
> > create something separate from them. Create SI as part of our own being
>
> Sure, I'm all for it. Notice that the positive self-enhancement
> autofeedback loop is still present. But in this case we start with a
> human, so here we can assume conservation of compassion at least for
> the few steps of the self-enhancement process, which will hopefully be
> also somewhat less fulminant.
>
> > and growing edge. Learn to care for and evolve the entire body instead
> > of just certain parts of the head. Then we will have our best chance.
>
> Since when did we wind up as disembodied brains floating in vats?
>

We sometimes act like it. We get way out into the tech and into the
wondrous future Mind that will be built, and we forget our roots, forget
our dreams, forget the real world and the real people we are now, which
will somehow have to be what we build from. Too often we lay aside our
dreams of a better and happier life for ourselves and others and get a
bit lost in a hyper head trip. But we can't sell just the head thing.
If the masses ever grok the head thing at all, they are likely to be
marching in the streets with torches, because that story doesn't say how
they can take care of themselves and their kids and have a viable and
compelling future.
 
> > I think that all of us together are as smart as we get and that we need
> > to learn to work together really well if we are to make a difference.
>
> Our firmware is not made to cooperate in large groups, and deal with
> extreme threats. We're not smart and rational enough for that. If
> there ever was a use for human germline engineering, it's to boost our
> EQ and IQ.

Then we'd better learn to rewrite and work around the firmware, because
we haven't got a lot of choice if we wish to survive. We don't have
time to wait for germline engineering. It takes some powerful and
compelling memes to enable and sustain the kinds of efforts needed. I
don't know in detail what those memes look like, but I think coming up
with them and effectively planting them is crucial to our success as a
species.

- samantha



This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:50:15 MDT