From: Bryan Moss (bryan.moss@dsl.pipex.com)
Date: Thu Jul 17 2003 - 16:07:19 MDT
Harvey Newstrom wrote:
> This is the most important point that I think a few people have been
> complaining about. Some people have gotten so optimistic that they
> become inactive. Some people have literally argued against a Pluto
> mission, claiming that Pluto will be turned into computronium before we
> get there. Some people have literally argued against cryonics saying we
> will be immortal before we get old. Some people have literally argued
> against saving for retirement, saying that the singularity will upload
> us before we retire. The sad truth is that many people are so confident
> about the future that they won't lift a finger to do anything. It turns
> into a faith-based religion where real work is worthless and just
> sitting back and doing nothing is the wisest course of action.
Right. I agree completely. I just resubscribed here after some 18 months
(IIRC). The last time I was active here I became rather disillusioned and
over the last year or so I can honestly say I haven't so much as thought
about extropy or transhumanism. I came back for some nostalgia.
I think part of the problem is that we need to bracket a certain number of
"ultratechnologies," the singularities in our equations, these being
superintelligence, uploading, and drexlerian nanotechnology. These three
cast a vast, ominous shadow over any attempt at practical action. It's
fairly easy to call them into question: the status of something called
"intelligence" that can be Supersized like so many Happy Meals, the myth of
substrate independence, the understated complexities of system design.
Extropy is a child of the computer age, so it's little wonder that these
three embody it so profoundly, and not only through their crippling
optimism. "Intelligence," for example, the idea that computers are
"reasoning machines," better than human, more rational. Moore's Law, a
fundamental adage of our philosophy and of computer science, despite being a
marketing technique of a particular company that decided to reduce component
size and increase speeds, design be damned. Tape them together and you have
Supersized Intelligence: superintelligence. The substrate, another part of
computer age mythos: abstractionism. Of course, there's no physical theory
for uploading, except perhaps, if you might allow, this short, utterly
erroneous argument: (1) at the quantum level reality is discrete; (2)
therefore, a quantum computer can simulate any part of reality; (3) the
brain probably isn't capable of exploiting quantum mechanical effects; (4)
therefore, a classical computer can simulate a brain that is identical to
and identifiable with the original. This, at least, is what I can salvage
from my side of those copy arguments we used to have. (I concede.) And
finally, we shovel all the real problems under that carpet we call
"software." This is part of that larger Myth of the Computer Age:
universality. We can do anything in software, given enough speed. This is
not true in any practical or useful sense, however.
I say the following with complete confidence: there will be no Yudkowskian
Singularity, the copy is not the original, the creation of the first
assembler will not cause an immediate revolution in manufacturing. These
are science-fiction pipe dreams. They're not even very good ones. Further,
we need to "deconstruct" our relation to the computer revolution. We're on
the other side now; I mean this in complete seriousness: the computer
revolution is played out. All that is left is for computers to recede; not
in the hip, ubiquitous technology "computer in my doorknob" sense but in the
"everybody stopped caring" sense. This may mean they'll take different
shapes. But that's it, that's your revolution. Now it's time to look back
and ask ourselves what was real and what was hype. A lot of it was hype.
But that's our origin and we need to pick it apart to understand where we
came from. Artificial Intelligence, of the CS kind, of the kind that
assumes we can design Minds (not brains) through some sort of hokey
self-reflection, is the sort of hubris we must now only find humour in.
(Which is not to say computer simulation won't play a big role in the brain
sciences or any other science, but it's a tool now, nothing more.)
Even if you bracket the three "ultratechnologies" I mentioned only as a
thought exercise, it's interesting to see how the horizon changes. Without
superintelligence, without the technological Saviour-God, there is no wall
over which we cannot see. Without uploading, we're going to die unless we
fight for it. Curing aging is only a first (incredibly difficult) step; the
way we value our lives will have to change, the practice of medicine will have
to change. Nobody wants to live to 400 and slip in the bath, crack their
head open on the faucet. It's an entirely different attitude towards death
and we have to sell it to the world. Without drexlerian nanotechnology
(and I speak more of the supposed time frame than the technology itself)
there is no sudden "fix" for the poor, the starving. We need to engineer
crops, educate people, provide clean water. None of this is going to be
easy. We're not going to get off-world soon either, so, yes, we're stuck
here amidst the war, the famine, those evil fundamentalists.
> People who complain about our slow progress, question whether things
> will work, point out flaws in existing plans, etc., are the real heroes
> of tomorrow. They are the engineers of the future. People who don't
> know enough technology to see the flaws, or who are so optimistic that
> they don't see any need to address the flaws, are the people who are
> delaying progress. Dynamic Optimism was never intended to be a
> faith-based position. We were supposed to be optimistic that everything
> was possible so that we would continue working toward a solution while
> others had long since given up. Optimism should be an excuse to work
> harder for the future, not an excuse to sit back and do nothing.
Yes, and as well as realising that this stuff requires hard work, we need
a *critical* approach to technology. We need to take our heads out of the
sand, lose the ridiculous "luddite" talk, and realise that, yes, technology
does affect people's lives, and that, no, not all technology affects all
lives in a positive way. Technophilia doesn't cut it. Technology is
ideological in the strongest sense. The telephone has something to say
about personal space, personal time, about availability, about distance; it
embodies certain attitudes towards these things. Technology is not neutral.
It meets the world in the form of products or government programmes; if
science has a claim to neutrality, its realisation in technology has long
since lost it. And we must always keep in mind that science only makes a
*claim* to neutrality: universality is the target of science, not its
immediate achievement. We can be critical of science; we must be. We must
be able to be critical of some research, some applications of science, some
technologies in order to make a fair argument for others. To take a
specific example: with genetically modified foods our fight is to move the
field of battle from the general, from the sweeping accusation, to the
specific. We have to acknowledge that, yes, there are some negative uses
here. However, the mistake we don't want to make is focusing on the
catastrophic. That's the mistake Foresight made. Nobody references Drexler
because he's the grey goo guy. You can think on whatever timeframe you
choose, that's your prerogative, but you can only act on a human timeframe.
What's funny is, a lot of fears could be alleviated if we just admitted how
difficult this stuff is. Designer babies? Not likely!
BM