RE: The Politics of Transhumanism

From: Smigrodzki, Rafal
Date: Fri Jan 18 2002 - 13:46:29 MST

Answer to Anders,

I think our debate touches on two related topics: the core ethical concepts
espoused by persons sharing our mindset, and the methods we envision as
means to bring about the changes we are dreaming of.

You are right in pointing out that a cohesive ethical system is, for a
movement with political aspirations, quite useful, if not indispensable, and
that the means to an end are just means. You and I agree (to a large extent) on the
ethical principles that form the basis for our actions. However, when I
suggest an ethics-free use of the word "transhumanism" I have the following
goal in mind - our means (the use of technology to voluntarily change
humanity), can and will be accepted by persons and movements with ethical
principles differing from ours. It is likely that these will be groups more
similar to us than to Rifkinites, who are opposed to us as a matter of
principle. Yet, on the other hand, our potential allies in the support of
technology will have enough ethical disagreements with us to prevent the
formation of a single movement.

By formulating a strategy based on shared means (technology), rather than
trying to package ethics, politics *and* technology as a single
all-encompassing world view, it might be possible to achieve our goals
faster. It is easier to build and maintain narrowly-focused alliances, with
a lot of leeway for members. Take, for example, libertarian-minded
technophiles and patient-advocacy groups: both want continued scientific
progress, including cloning technologies, but it might be quite difficult to
make them agree on how much state involvement in health care is needed, if any. A
technology-based alliance of this kind would have a much better chance of
success against the unholy alliance of the church and luddites than any
memetically pure brand of transhumanism.

I think your reformulation of my example of Rifkinite and church alliance
was not correct. You were describing the danger of our goals being subverted
and marginalized by other members of a political coalition, but this is not the
danger inherent in alliances with shared (sub)goals. As Fabio pointed out,
it is likely that 21st century politics will be dominated by a huge rift
between technophiles on one side, and luddites of many kinds on the other.
Each side will be composed of many disparate movements, some of them
sharing almost nothing except their attitude towards technology. Joining the
technophile alliance will not jeopardize our hopes for technological
progress - we would not sit in the same organization as Rifkinites, and
would not be pressured to make painful compromises within the alliance, as
long as no one insists on conformity to a particular world-view. On the
contrary, when sitting across the parliamentary aisle from the luddites, we
could better defend ourselves if we had many allies.

This is why I do believe there is utility in an (almost) value-free
formulation of a strategy (whether you call it transhumanism, or something
else), separately from the full ethical and philosophical system(s).

Fabio already mentioned that transhumanism without technology is just plain
vanilla humanism. I agree with him. But I am willing to change my mind if
you can tell me what special ingredient the "trans" brings in, other than
technology.

As Mark Walker pointed out, technological transformation is
really just one part of a triad of ideas
("1. The Technology
Thesis: Within a hundred years humanity will possess the
technology to reengineer Homo sapiens. 2. The Ideal Thesis: The
goal in the reengineering task is to perfect ourselves. 3. The
Ethical Thesis: We ought to employ technology to realize this
ideal."). The technology part is just a means to an end, and if
you leave out this end transhumanism becomes arbitrary. It
cannot be a motivator for anything else than promoting cool
toys. Transhumanism is commonly criticized as being mere
technophilia, and seeing the core of transhumanism as technology
reinforces this. There is nothing wrong with liking technology,
but aren't we aiming higher than that? Saying that tech can
change us doesn't say what changes are desirable or not.

### Sometimes it is better to argue for the ethics and technology
separately, as I pointed out above.

If somebody brings up nazi ideas in quantum physics, they are
clearly not speaking in the right forum, and hence excluding
them doesn't have to be based on a value statement. But what if
the nazi is speaking about an issue in a suitable forum, like a
political one? Can you exclude him for being evil in your

### Yes, of course!

 In that case, what about freedom of speech for people
with unpopular opinions? (Of course, in privately run forums I
would say the proprietor has the right to set up the debate
limits as he or she wishes, but there is still a great deal of
difference between somebody who bases his actions on clear
principles and somebody who acts depending on some unknowable
whims.)

### As long as your clients (=participants in the discussion and others you
want to influence) agree with your whims, it's OK, and most persons agree on
the exclusion of nazis.


I can, given my view that transhumanism is indeed based in
humanism, firmly say that a nazi post (even when dealing with
the subject) in a transhumanist forum is wrong in the same way:
nazism is fundamentally incompatible with transhumanism.

### It is indeed better to argue from a cohesive set of ethical principles,
but the critical element is the acceptance of these principles by your
audience - if they don't accept them, you are wasting your breath. But even if you
disagree on ethics, you can still sometimes agree on methods and
intermediate goals. You can build a value-unspecified transhumanist
alliance, keep out the nazis because no nice person likes them, and work on
technology, with ethical disagreements deferred.


Of course one can work together with people with fundamentally
different core values - often it is necessary. But if you do not
have core values that reach beyond a liking for certain
technology you will find yourself subverted and your work
becoming used to further agendas far different from your own.

### This is not what I meant - I am not for collaboration with enemies, just
collaboration with groups having reasonably compatible intermediate-range
goals.


On what grounds? Because the word has a bad ring? Because you
say so? What makes "Transhumanists against Eugenics" more valid
than "Transhumanists against Democracy"? In your view there is
no reason to favor one over the other, except political expediency.

### I am afraid you implied here that I might have no ethical convictions
except worship of expediency. This is incorrect. However, as long as
expediency does not grossly conflict with ethics, it is expediency that
should direct the building of alliances, not insistence on an all-inclusive
world-view.


I would say that there are very good reasons to favor
the first, not just out of expediency but because coercive
policies tend to hinder human development and can easily be
abused, and that democracy in most of its forms is far more
humanistic than any other form of government.

### Exactly my opinion, too.


> Second, and I think this is a more severe problem, is that trying to
> appeal to a lot of people by having more diluted values or ideology will
> mean that you get more people importing their own favorite ideologies
> into the movement and more people who don't understand whatever
> central values there are and are more into it for the community or
> networking.
> ### You can always have a restricted-entry organization for the special
> people, and a free-entry club for the proles. The two together can achieve
> more, without spoiling the experience for anybody (at least initially).

Inner and Outer circles seldom work, since they tend to get out
of sync (as well as the usual sociopsychological problems of
in-groups and out-groups, power struggles and "my conspiracy is
better than yours"). The "elite" may have the core values and
ideology, but without those values the outer organisation will start
drifting ideologically on its own.

### If the alliance you are in starts moving in the wrong direction, you can
try to push it back, or leave. It's preferable to having no allies, and
being crushed by your enemies.


A good example is the Swedish liberal party Folkpartiet. When
asked, their chief ideologist gave us quite transhumanist and
libertarian answers to a number of questions. At the same time,
the politicians of the party have been protecting liberty by
supporting bans on behaviors that may be dangerous (like driving
without a seatbelt, harsh rules against drug use and
prostitution), more regulation of the markets in the interests
of equality and freedom (including the state monopoly on alcohol
sales), and supported many very restrictive laws on research in
genetics. There is a total discrepancy between the "inner party"
and the "outer party" that actually does the political work.
This is what not taking ideology seriously leads to.

### This is what "listening to the will of the people" leads to. I agree
that diluting values for the sake of mass appeal is wrong, but avoidance of
cooperation for the sake of total purity is just as counterproductive.
Joining narrow, value-free alliances is sometimes the optimal course of action.


> As an example, compare this list now with how it was several
> years back in terms of actual idea production: the huge increase in
> diversity of people on the list has not led to an increase in quality or
> even quantity of ideas. Some of the old issues of Extropy are composed
> to a large extent of threads from the list, with amazing insight and
> creativity.
> ### But then, the low-lying fruit has been picked already. How much more can
> you say about the right way of approaching the Singularity without new data?
> New arrivals cannot keep reinventing the wheel, and building a
> spaceship takes time.

Actually, many of the ideas that were discussed were hardly
low-lying fruits: transhumanist architecture, utility fogs, idea
futures, reputation systems etc.

And there is plenty of important stuff this side of the
singularity: how to shape societies, economies, institutions,
legal systems and careers to handle the predicted dynamic high
tech society? How to handle the problem of destruction from
replicating technology? What ethical, aesthetic and cultural
principles to promote in order to produce a transhumanist
society? How would a transhumanist society even work? - these
questions have hardly been developed, new data is arriving daily
and they ought to be high priority for all of us.

### Well, nobody is preventing anybody from making seminal contributions
here. Having lots of people on the list does not impair the original
members' ability to continue with high-quality contributions. And filtering
out the chatter is quite easy.

> But I question
> the point in having a million members in WTA or ExI if their membership
> doesn't *mean* anything. A far more successful strategy is to create a
> real world view (and a world-view is far more than a view of
> technology!), make the intellectuals and cultural leaders recognize it
> and then watch the mainstream move in your direction. It has worked
> well in the past.
> ### Why not take both roads?

I want to invest effort rationally. The payoff in the
intellectual sphere is far greater than in the mainstream
sphere. If I can convince one intellectual (in Hayek's sense of
the group of people in society that process and spread ideas),
then he will go on convincing other intellectuals and spreading
the idea to the masses. Intellectuals certainly listen to the
masses and do pick up ideas, but usually the memeflow is in the
other direction. This means that the same effort in spreading an
idea is likely to produce a far greater (and faster replicating)
return if I concentrate on the cultural leaders.

### This is right, but sooner or later (hopefully sooner) you *do* want to
have a hundred million people supporting your ideas. As long as ExI has the
same ethical principles, membership will mean the same - a commitment to the
free growth of humanity beyond today's limits. Alliances with sufficiently
similar groups (as in "transhumanists" of all ethical stripes) can get you
there faster.

Answer to Mark Walker;

A pure form of the technology thesis is simply the idea that technology can
be used to transform humans. This is what I think Anders is rejecting when
he says that this should not be understood as the central meme of
transhumanism. Technology, after all, might be used to make humans better
slaves, or instruments of a fascist state. (Just to be clear, this is not my view.)

### How about defining transhumanism as "non-coercive, technologically-driven
change (not perfecting) of humanity"?

The phrase "perfecting humanity" invites all kinds of trouble. Since the
idea of "perfection" is highly ambiguous, it is better kept out of the
definition.

On the other hand, saying "non-coercive" is much less ambiguous, and clearly
excludes fascism and slavery, while still leaving room for persons who do
not share the political views of most extropians.


 Your analysis of a single meme smuggles in the idea that
Person Engineering Technology (PET) is to be used to move towards an
ideal--I assume that this is what you mean by "improvement", and
furthermore, there is a gesture to ethics, I assume this is what you mean by
"right". The technology thesis without the ideal and ethical theses is
amoral and blind. Without the technology thesis, the ideal and ethical
theses are impotent. I don't think there is a central meme of these
three. What is the central meme of the concept of a bachelor? Is it more
important to be male or unmarried? The question verges on unintelligibility,
as does the question of which of this triad is the central meme. Mark.

### The technology meme might not be an independent concept, but it is the only
new one, differentiating transhumanism from other views.


This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 13:37:35 MST