From: Bryan Moss (bryan.moss@dsl.pipex.com)
Date: Sat Jul 19 2003 - 19:42:58 MDT
Dan Fabulich wrote:
> Any suggestions?
I thought I'd already made one! My contention, far from being a wish to
immobilise anyone (which would be hypocritical, for I, like you, am an
armchair extropian), is that immobility is inherent in our philosophy, and
that we need first to purge our theory. I think Extropy is a
child of the computer age and has inherited certain features, one of which
is Inevitability. The computer revolution was couched in very teleological
terms; the great Telos of this revolution was Intelligence. A lot of good,
honest work in computing goes ignored because the next product iteration (or
the one after that) might be closer to this ultimate end. We talk about
this in terms of progress, but it isn't really progress at all. Computer
progress has been fairly deterministic, in both hardware and software. What
people are talking about when they say "I'm not working on Y because Z is
probably just around the corner" is progress-towards.
Whenever we talk about Artificial Intelligence we assume this Inevitability;
the debates about thinking machines have always focused on whether a
thinking machine can be really said to be thinking, rather than whether one
can be created. This has led to some strange conventions on both sides of
the debate. If I argue against AI with a sort of negative proof, where I
show that, should an AI be created, it wouldn't be capable of genuine
thought, I give no accurate picture of when or in what form the inability to
create an AI will manifest itself. Will we keep progressing towards the
goal of a thinking machine only to realise at the last minute that it isn't
thinking at all? I think most people who make this sort of argument
against AI, although they speak in very abstract terms, zombies and such,
expect the lack of genuine, conscious thought to manifest itself in some
way, and, if they considered it, they'd probably expect AI research not to
get off the ground at all. On the other side of the debate, there's this
uncritical approach to method; the researchers hold that a thinking machine
will be capable of genuine thought, and that seems to be enough for them.
If it's possible for a computer to be intelligent, then we'll just sit down
and program it. We're intelligent, right? Who better to do it!
The general argument I want to make is that our ultratechnologies are a
product of this ominous big-I Inevitability, rather than vice versa. We can
remove them and be free from this choice of extremes you describe.
The fact remains that there is no reason to think the (or a) Singularity
inevitable; none, nada, zero, zilch, zip. It's a prediction based on two
things: (1) Moore's Law; (2) some rough calculations of the processing power
of the human brain. The first is a product of smoking crack, the second the
result of speculation. There's no such obvious correlation between the
brain, that lump of electro-chemical gunk, and the microprocessor. The
numbers we use to predict the Rupture (Singularity) or decide how many
"potential lives" are lost by not colonising space or nuking North Korea
(for fuck's sake) are rough, back-of-the-envelope calculations by a
roboticist that make some massive, sweeping assumptions about the brain
(the arithmetic is sketched below).
When someone expresses the opinion that, perhaps, the brain isn't easily
simulated, they're usually met with, "I doubt anything quantum mechanical is
going on," which forgets that there's no simple correlation between
classical physics and classical computers. An actual simulation of the
brain, its biological and chemical processes, is most likely impractical. A
"simulation" of thought processes is pure pseudo-science, as is any claim to
a "general intelligence." You can argue that not everything that happens in
the brain (physically speaking) plays a functional role in thought, and
that's fair (depending on your criteria), but if you start arguing that you
know what does and does not play functional roles, in any strong sense,
you're most likely being disingenuous. My point here is not to say that
these things are impossible in any absolute sense, my point is to show that
they are not inevitable (or necessarily even plausible). The inevitability
enters elsewhere, in the sense of this general Telos, our futurism, which we
organise these ideas around. It is not genuine.
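
To make the objection concrete, here is a minimal sketch (in Python) of the
sort of arithmetic in question. Every figure in it is an assumed round
number of the kind these estimates trade in (neuron count, synapses per
neuron, firing rate, a present-day machine's speed, the doubling period),
not a measurement:

    import math

    # All of these figures are illustrative assumptions, not measurements.
    NEURONS = 1e11         # assumed neurons in a human brain
    SYNAPSES_PER = 1e3     # assumed synapses per neuron (estimates vary 10x)
    FIRING_HZ = 1e2        # assumed "operations" per synapse per second

    brain_ops = NEURONS * SYNAPSES_PER * FIRING_HZ  # ~1e16 ops/second

    MACHINE_OPS = 1e10     # assumed ops/second of a present-day machine
    DOUBLING_YEARS = 1.5   # Moore's Law doubling period, taken on faith

    doublings = math.log2(brain_ops / MACHINE_OPS)
    print("brain estimate: %.0e ops/s" % brain_ops)
    print("'human equivalence' in ~%.0f years, if every assumption holds"
          % (doublings * DOUBLING_YEARS))

Shift any one of those inputs by a factor of ten and the date moves by
about five years; that is the sweep of the assumptions.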
The ultratechnologies are the biggest, most obviously damaging problems in
our philosophy. The smaller problems are best demonstrated through
language. This word "immortal," for example. I think it's with a certain
glee that we use this word that, to the majority of people, has religious
connotations. We seem to enjoy co-opting ideas from the religious and
reconstituting them as something very secular. Personally I think we'd do
better to leave the immortality to the spirits. When you say "I'm an
immortalist" the rest of the world hears "I'm an arrogant prick." I am not
an immortalist, I am not overcoming death, I am not angry at a world that
would let millions of people die every year due to old age. I recognise a
certain event in history, a certain truth, about our nature, about our
bodies. I'd position this around the flourishing of cellular biology, the
invention of the defibrillator, etc; those discoveries and technologies that
did not so much afford the opportunity to overcome death as to put the
status of death into question. The ideas we need to get out to the world
are: that the status of the body has changed, we are no longer whole, the
process of life is a continuing process of creation and destruction; and
that the status of death as a genuine event, as the placeable end to your
life, has been called into question. I think it's wrong to reconstitute
death as a process; putrefaction is a process, death is cultural, not
bodily. That's our lesson: the event of death is something cultural,
something malleable, and is subject to cultural difference. Rather than
harping on about the loss of lives to ageing, which marks us out as kooks to most
people, we need to present ourselves as what we are: a subculture, whose
values are as valid as traditional values and adoptable by anyone. Perhaps
one day we'll be more than a subculture, but for now this is what we are.
One of the general points I'm trying to make here is that we need to foster
a sense of humility. Another example through language: "genetic
engineering." Never has such an unfortunate term been coined. For some of
us, it's a very practical, very concrete thing, and the word serves that
well. What's more practical and concrete than engineering? The term
"genetic engineering" lends genes a solidity; we can move them, build with
them, engineer. But both "genetic" and "engineering" carry mutually
supporting alternative interpretations. "Genetic" can mean hereditary,
natural, fundamental. To "engineer" can mean to manipulate, to control, in
the historical sense, the social sense, the behavioural sense. This use
of "engineer" carries with it the connotation that approaching something as
mechanical is dehumanising. But again, we revel in it. Researchers
gleefully make references to the "book of life," DNA is written in "God's
hand," we're unravelling Nature itself. This is a turning point in history,
a great event, we have read the human genome. The reality is both more
mundane and much more interesting. DNA is not the "book of life"; it's not
the static tome that reveals our innermost being; it's dynamic; it's not the
description of life but life itself. Again, we need to get this out
to people: we're no longer whole, we're teeming with life. I remember the
first time I read Engines of Creation; what struck me most was not the
wonderful vision of the future, or the vast potential of nanotechnology, but
the description of the machinery already at work in our bodies. I think one
of the stumbling blocks for people with genetic engineering is that they
still think of themselves as an impenetrable unity; genetics seems like
something occult, when really it's business as usual. The desire to shock
and amaze is innate, people love to tell stories, so we tend to avoid
demythologising these issues. Instead, we think we'll convince them with a
*different* amazing story, the amazing story of how their lives will change
for the better. But the truth is, these stories are of the same kind.
It's almost a cliché to say that our culture is one that's wary of "grand
narratives," but we *are* just out of a century that is marked by the
atrocities committed under names such as National Socialism and Communism,
so it is perhaps a fair assessment. I will suggest here that the
information-theoretic valuation of life, as expressed in Nick Bostrom's
paper and Robert's recent post, is as dangerous as the racial valuation that
led to the atrocities committed under the name of Nazism. People aren't
scared of science, they're scared of us: those who'll take science a step
too far. When people express trepidation that they'll be "reduced" to their
DNA or to machines or whatever, we announce, again, with a certain glee,
that that is all we are. "What," we chuckle, "you thought we had souls? A
spirit?" But their fears are not so easily dismissed; they may often be
expressed in terms of the spiritual, the religious, the intuitive, things we
tend to deride, but they speak to a genuine fear: that human life will be
reduced to something only of economic value. That human lives will be
ordered, their relative merits weighed, and some will be valued over others.
This is easily posited as a question of ethics, but I think that's a
mistake. To solve this particular issue as a question of ethics is to put
ethics before all else, a move I'm not fond of because I think it puts
knowledge, truth, etc, in too difficult a position. It remains a question
of philosophy, which I would resolve with the following suggestion: human
life can only have a situated value, a value in relation to something.
Robert talks about triage; this is a definite situation. "Which one do we
treat first?" This is already something economic. The flaw in Robert's
genocidal suggestion, I propose, is that it's too close to attributing
absolute value to human lives, which we can see in its invocation of the
awkward idea of "potential people." This is only a tentative suggestion
because it leaves open the problem of what constitutes a situation, what
separates a situated valuation from an absolute, and these things need to be
clarified. But, intuitively, it speaks to me: it's very easy to ask of
racism, for example, "what makes race x better?" The answer is usually
given as a rather arbitrary list or an appeal to some absolute value: race x
is better because race x is race x. Racists spend a lot of time looking for
ways to devalue other races, but ultimately this is just a way of expounding
on the supposed "truth" they already hold. Also note that Robert's
suggestion is an argument along the same lines as the "ethics first"
solution. "If all human life is sacred," he asks, "then isn't more human
life more sacred? If destroying some human life will ultimately cause more
human life to flourish, isn't that okay?" We reply: "No, all human life is
sacred!" There isn't a satisfactory ethical solution.
My thesis, then, is that there's something rotten at the core of Extropy.
The reason you can't find a way to connect extropianism with practical
action in your life is that extropianism, as it stands now, is out of touch
with life generally. It's exactly about inaction, futurism, Rupture, sit
tight and let technology take you for a ride. This may not have been its
founder's intention and this may not be true of all its supporters, but
that's basically what we have. We're a product of our time. My interim
proposal is humility. There's a lot to be done here and now, if you can
just bring it back into focus. My long term proposal is reevaluation of our
entire philosophy or the establishment of something different. To be
honest, the idea of activists for the future, which is what a pro-active
extropianism amounts to, is rather ridiculous. This isn't a case of "the
future will be great if you don't fuck it up." It never was. I don't think
even "we can build a better tomorrow through hard work" quite cuts it for
me, because it still contains the kernel of that futurism, that
Inevitability. What would an extropianism without futurism be? A stance on
technology, on humanity, on culture, on the body, but without the appeal to
a future, a telos, without the utopianism. In a sense, it would embrace a
degree of uncertainty along with its humility. I would hope too that it
would embrace a sense of its own limits, its own bounds. Too often we want
to pave over everything. I still remember a particular suggestion, made by
Eliezer I think, that the Singularity is so Different that it reduces all
our cultural differences; these differences no longer matter in the shadow
of the Absolute Difference of the Singularity. We see this again in
Robert's proposal, "I mean really -- if a 1 cm^3 nanocomputer can support
100,000+ human minds our 'individuality' is probably overrated." These
statements, always said with that same, certain glee, echo some of the worst
excesses of modernity. I think there's some truth in much of contemporary
theory when it states that Western culture (which is now almost synonymous
with American culture), even its science and technology, is not a universal,
not progress per se, but just a particular expression of a particular
culture at a particular time. I think they overstate the case; I think the
strongest case that can be made is that whether something is universal or
particular is undecidable, although both must exist, otherwise the
suggestion is too problematic (not that I agree with this either). However,
even if the denial of the universal subject, of truth, of knowledge, etc,
takes things too far, we can still use this observation. We can exercise a
certain caution when we talk of progress, of the developing world, etc, and
take care not to overstate *our* case. Not everyone shares our values, not
everyone has to, and unless we find ourselves in a situation of relative
values, a conflict, we need not reduce others to valuation.
So, what should we do, in terms of practical action? I think, with a
certain humility, and *without* the sense of victimisation that comes from
(needlessly) setting ourselves so far apart from mainstream culture, a lot
of avenues start to open before us, our project becomes more manageable,
more real, our thought takes on human scales. This is really what action
is: thought on a scale that leads to intervention. The intervention itself
is usually immediate and obvious; it's usually clearly situated and well
bounded; it is (or can be) an extension of thought. Examples might be
activism for the acceptance of genetically modified foods, education about
how death is an ambiguous event, countering the theses of people you
disagree with, etc. Can you write a critical review of one of the recent
books on the dangers of biotechnology and life extension? Get it published
somewhere prominent? We need to get ideas out there, but they need to be
sociable ideas. There's too much scoffing, outright dismissal, etc. On a
smaller scale, being able to express your ideas in mixed company without
garnering looks of disgust is a great goal to work towards. But it's a goal
that requires humility (again) and respect; I'm not talking about marketing
or spin, I want to see extropianism embrace a genuine desire to be in the
world, this world, right now, enjoying the things everyone else enjoys.
BM