From: Anders Sandberg (asa@nada.kth.se)
Date: Sun Jan 13 2002 - 15:36:25 MST
OK, I have waited a bit with my comments, because I have been
awfully busy, and also interested in seeing what kinds of
comments would appear.
First, I think James has written an important paper - not because I agree with it or its conclusions, but because it is an attempt to start a transhumanist ideological discourse on a higher formal
level. We need that in order to develop ideologically, since it both provides documents that can later be referred to and commented upon more easily, and requires a broader view than just the current thread on the mailing list.
My comments got a bit (!) longer than I intended, so I will
divide this mail into a number of sections. An abstract:
I describe the roots of transhumanism in renaissance and
enlightenment humanism, and discuss its relationship with the
collectivist "transhumanism" in the 20s and 30s. I also discuss
my own experience of how the transhumanist environment has
developed during the 90s. I then argue that transhumanism is not
an ideologically empty idea, but that inherent in it are strong
humanist values that resonate well with both libertarianism and
liberal democracy. These values are however not consistent with
collectivism or fascism, and not tied to technology per se.
Finally I discuss the need for transhumanism as a movement to
identify its core ideology and begin developing it.
Contents
1. The roots of transhumanism
2. What is transhumanism really about?
2.1 Transhumanism across the 90's
2.2 Core values of transhumanism
2.3 Critique of collectivist transhumanism
2.4 What ideologies are compatible with transhumanism?
2.5 The problem with a technology-identified transhumanism
3. What should transhumanism as a movement strive towards?
1. The roots of transhumanism
James' paper claims that contemporary transhumanism is based in
anarcho-capitalist thought. I would say, given some views
expressed on this list and encountered in society, that one could
just as well claim it has technocratic communist roots. In fact, it
is likely more important to get away from those roots than the
libertarian ones.
The paper begins with the history of ExI, and this produces the
impression that ExI really *is* modern transhumanism. While the
influence of Max, Natasha, T.O. Morrow and all the other founders
and thinkers linked to ExI is beyond doubt, they did not start
out in a vacuum with their ideas. Later in the paper FM 2030 is acknowledged, as are even more remote historical sources further back. I think this approach creates the mistaken impression that transhumanism is something very recent, with shallow roots.
But the true origin of transhumanism can be traced back to the
renaissance humanists. Mirandola's triumphant _Oration on the
Dignity of Man_ expresses the transhumanist project admirably:
``We have given you, O Adam, no visage proper to yourself, nor
endowment properly your own, in order that whatever place,
whatever form, whatever gifts you may, with premeditation,
select, these same you may have and possess through your own
judgement and decision. The nature of all other creatures is
defined and restricted within laws which We have laid down; you,
by contrast, impeded by no such restrictions, may, by your own
free will, to whose custody We have assigned you, trace for
yourself the lineaments of your own nature. I have placed you at
the very center of the world, so that from that vantage point
you may with greater ease glance round about you on all that the
world contains. We have made you a creature neither of heaven
nor of earth, neither mortal nor immortal, in order that you
may, as the free and proud shaper of your own being, fashion
yourself in the form you may prefer. It will be in your power to
descend to the lower, brutish forms of life; you will be able,
through your own decision, to rise again to the superior orders
whose life is divine.''
While the renaissance humanists were more concerned with issues of human freedom and dignity than with the possibility of immortality or morphological freedom, there is no contradiction here, and as Brian Manning Delaney pointed out at TransVision 2001, they would likely have embraced it. They in turn drew on the Aristotelian idea of eudaimonia, the life of excellence.
After the renaissance these ideas of human freedom, potential and
dignity became central for the enlightenment. Technological
progress was still mainly seen as separate from (but helpful to)
human progress, although early ideas of technological enhancement
of the human condition per se are suggested in the writings of
Benjamin Franklin and Condorcet. Many of the enlightenment ideas
are today so integral to liberal democracy that they are not even
recognized as being non-trivial and based on a certain
perspective on humanity (except possibly when challenged by
theocrats and conservatives, who have different views of what
constitutes a human being and the good life). When extropians are
called libertarians and anarcho-capitalists, it often turns out that the issue is their defense of enlightenment ideas in opposition to later romantic and especially collectivist ideas.
The first real "transhumanist" vision (n.b. that I will call it
transhumanist here for simplicity, although I will in the next
section show that it is not a correct denotation) that clearly
included technological enhancement of the human condition was
stated by (as James points out) H.G. Wells, and followed by the
influential writings of J.B.S. Haldane and J.D. Bernal in the
20's and 30's. Now the ideas of life extension (as science, not
magic), cyborgisation, a future in space, posthumans, enhanced
intelligence - and of course eugenics - appear at full strength.
It is no coincidence that the above people were socialists -
as Bernal put it, "a good scientist is a communist": Marxism was
regarded as science, and a correct model of how the world worked
would need to include it. Science, society, progress and moral
righteousness were not separate things but a whole. This view was
prevalent even among non-socialists: it was the age of the
engineered or managed society, where both the left and right
envisioned a controlled society as the best possible solution.
It is important to note the collectivist emphasis here: the goal
was not to create a better life for individual humans but for all
of humanity - the collective, rather than the individual ("the
individual is only a function of the collective", as Marx put
it). The profound influence of collectivism in this period cannot
be overstated, regardless of whether it took the form of Marxist class collectivism, Nazi race collectivism or nationalist collectivism. It is no
coincidence that Stapledon describes his posthuman and advanced
alien civilizations as communist or fascist states, with global
consciousness as the supreme goal.
WWII devastated the collectivist form of "transhumanism" through its association with Nazism and fascism, as well as through the absorption of
the socialists into the east-west dialectics. The stagnation of
technological development in the east - especially in the
biosciences - moved the idea of human enhancement into the mists
of social conditioning and dialectical evolution, i.e. back to
the old idea that the only changes necessary or possible were due
to social and mental factors.
The ideas of technological enhancement in the west instead took
the route through science fiction and the pro-science subcultures
that emerged during the space race, where they eventually
stimulated thinkers like FM 2030 and Max More. Note that this
postwar period also largely separated them from the earlier social ideas - the enlightenment ideas were largely taken for granted
since they were embodied in the surrounding (American) society
and the idea of a unity between science and progressive ethics
was lost in the gulf between the two cultures of the humanities
and the natural sciences (especially since the humanities came to
drift in a somewhat more leftist direction).
2. What is transhumanism really about?
On Mon, Jan 07, 2002 at 10:45:19PM -0500, J. Hughes wrote:
>
> But a central point of my essay, and a point expressed by many, is that
> transhumanism, i.e. the idea that human beings should be able to improve
> themselves radically through technology, does not have much intrinsic
> political content. It is probably imcompatible with theocracy, although I
> bet a theocracy could prove us wrong (maybe the Scientologists). And
> racialism is bad science. But I don't think it makes sense to say that
> simply because most transhumanists are anti-Nazi that transhumanists can't
> be Nazis.
I find it ironic that James began his essay by quoting me about
this. I have indeed argued in the past that transhumanism in its
pure form as rational radical improvement of the human condition
is largely independent from political ideology. But I have also
changed my opinion greatly on this since 1994 when the quoted
text was written.
The problem here is the definition of transhumanism, and what
goals it is assumed to aim for.
2.1 Transhumanism across the 90's
I think a digression about my own ideological development may be
in order here, because I also think it shows a change in
transhumanist thought.
When I started out thinking along transhumanist lines, I took a
wholly technocentric view. The goal was to maximize the
information content or complexity of the universe; to achieve
this certain steps of technological development were necessary
(such as space colonization and AI), and to make these possible
other developments, including political, economic and social
ones, were necessary subgoals. Society and to some extent the
individual were secondary to a grandiose technological
imperative.
As time went on, I realized that these sub-goals were in fact
important. A transhumanism that did not care for the individual or the real, current world other than as a stepping stone would never truly motivate anyone or get anywhere, or, if it did, would become a kind of techno-fascism I found distasteful. Instead I began to focus on methods of self-transformation, the issue of how humans - real, individual humans - could become better in different ways. I did not concentrate on collectives, since I knew that people are unique and have different goals and opportunities.
However, I was still largely seeing the whole issue as one of
technology - softer technologies like mental training and
biotechnology, but still a process driven by technology. I also
did not care for ideology; having grown up as an apolitical
person and with a serious distaste for collective dogmas I viewed
it as a nuisance. Openness and tolerance appeared a far better
way of maximizing the diversity and impact of transhumanism. This
attitude was also mirrored quite widely: instead of closed mailing lists where agreement with fundamental tenets was assumed, the lists were opened up or other, open forums appeared. In Aleph,
we explicitly pronounced ourselves non-ideological and open to
all transhumanists.
This was the position I had in 1994 when I wrote the quoted text:
transhumanism would, through its inherent positive technological
effects, not need any specific ideological position other than
those necessary to bring about the effects. In a way it was a
mirror of the utopian socialist idea that as technology marched on everybody would become socialists.
But the story does not end there. The more I interacted with
people inside and outside transhumanism and learned about the
history of ideas and technology, the more I realized that
*politics does matter*. Philosophy matters. Technology does not solely drive social and cultural change; quite often social and cultural changes instead drive technological development. Without any values to motivate and direct transhumanism it would not get anywhere, and would quite likely become hijacked by other groups. Even more, not all interpretations of transhumanism were meaningful - quite a few were simply transhumanist ideas of technological transformation arbitrarily tacked onto political views with no attempt at consistency. As the breadth of transhumanism increased, it also became shallower. In the ever noisier list environments, any
discussion that did not keep to strictly defined engineering
matters and instead moved into issues related to ideology and
current day policy would be embroiled in a low-quality discourse
where most participants lacked significant knowledge and instead
let their opinions play freely.
I felt more and more that transhumanism as a term had lost any meaning if it could denote someone advocating a centralist socialist economy, someone advocating anarcho-capitalism and someone advocating mystical contemplation alike, as long as they all
thought nanotechnology was great. Transhumanism seemed to be
nothing but technophilia or even just awareness of radical
technology as a possible future option (which was a definition
that circulated briefly during the work on the Transhumanist
Principles - according to that definition Jeremy Rifkin would
have been a transhumanist! That observation of course quickly led
to an amendment of the definition).
Over the last years I have come to the conclusion that we need to
rethink the core values of transhumanism, or rather, define the
core values. The problem is that the term transhumanism is by now
so diluted that it hardly means anything, and that there hardly
exists any standards body that can claim "ownership" of the term
or its official definition - WTA might be the closest thing, but
we shouldn't overestimate the impact of such pronouncements.
2.2 Core values of transhumanism
Are there any core values of transhumanism? Note that the
definition of transhumanism as "rational radical enhancements of
the human condition" does contain an assumption that there has to
be some underlying values being furthered - something is enhanced
relative its past state. It also implies some concept of human
condition (and of what is rational and radical, but I'll leave
those for another post).
Where do these values come from? Either they are inherent in
transhumanism, or they come from other assumed value systems such
as political ideologies hooked on to transhumanism as some kind
of external motivating engine.
But in the latter case it seems pointless to discuss transhumanism at all, since the issue would rather be "How should we socialists work to bring about the nanotechnology revolution?" - transhumanism would simply be a part of the big program of the ideology rather than anything independent of it. In this case
the transhumanist forums would be just meeting places of people
of different politics sharing a few goals, just as there are
non-political forums for peace, environmentalism or furthering the
humanities. If this is true, then transhumanism is nothing more
than a shared issue.
But this is contradicted by the history and scope of
transhumanism, the amount of interest that has been focused on
problematizing the human condition, examining the consequences of
human enhancement and its relationship to the world. I would
argue that transhumanism is in fact - or should be - seen as an
ideology on its own.
If we look back at the roots of the idea, we see that - with the
exception of the pre WW II collectivist "transhumanists" - there
were strong underlying assumptions about what it means to be a
human and what the goal of a human life ought to be. The human
concept is the humanist one, with an independent yet social human
seeking individual self-realization. If this humanist conception of the human and the individual good is extended (mutatis mutandis)
to current technological visions we get essentially the
"mainstream" transhumanist view (here in a somewhat abridged
form):
Humans have an inherent value in themselves, a human dignity that
must be respected. They are uniquely individual, yet social
beings. They have different goals, ambitions and abilities, but
in general they achieve happiness and fulfillment by striving to
excel along their freely chosen directions. There are many
different tools and methods that can support this striving for
individual excellence, including help or interaction with others
as well as technology. The goal should be to develop oneself to
one's fullest potential.
The humanist values and ideas embodied here are, in my opinion, the core values and ideas of transhumanism, and not just something
externally added on. A movement might advocate technological
transformation, but unless it is based in humanism, it cannot
honestly call itself transhumanism or any other title including
humanism. It would be just as semantically incorrect for people
disagreeing with Marx to call themselves Marxists (it might of
course be politically expedient, but for the moment we are
looking at the definitions of core ideas rather than how they
best can be implemented).
Liberal democracy is also built on much of this humanist concept.
It is founded on the idea of the rights of individuals, rights
which in the end are traced back to the properties and goods of
humans and the desire to further the individual good. What
distinguishes liberal democracy from collectivism is that the
well-being of the existing individuals is the goal rather than
the well-being of an abstract class, and that their rights are
valid even when they interfere with the desires of the majority.
There does not exist any contradiction between the transhumanist
core values I have suggested above and liberal democracy.
Also, there is no contradiction here with a libertarian stance,
since it is based on the same respect for human rights. The
difference between a radical anarcho-capitalist and a liberal
democrat lies mainly in the emphasis and relative ranking of
certain rights, general views on how society should be organized
and - usually the controversial part - what level of coerciveness
is acceptable. While this tends to lead to loud and long-winded
debates here and elsewhere (and in practice of course deals with
extremely important social issues), it is imperative to recognize
that both parties (and the whole spectrum between them) have more
in common with each other than with the groups with a
non-humanist perspective.
2.3 Critique of collectivist transhumanism
I have already criticized how collectivist perspectives place abstractions such as humanity, the race or the collective above the existing humans, treating them not just as theoretical primaries but also as the beneficiaries of the good society. This perspective makes humans mere tools to support abstractions.
Many collectivists would however protest against this, since they
claim their aim is truly to help the existing people en masse.
But it is unlikely that any approach which attempts to externally impose the good on people, or which bases this interaction on a collective view of people, will be successful, at least not if one
has a humanistic view of what it means to be a human.
The good life is not universal. We are all unique, with different
background, predilections and opportunities - my ideal life is
utterly different from your ideal life, even if they may share
some or many elements. The individual usually knows best what is
good for him or her, so most systems that attempt to bring about the good for everyone centrally will in practice not supply it at all. But even if some system could know what the best
possible life is for every person, it could not force them to
live it. It is impossible to force somebody to be rational, since
rationality is a volitional state and cannot be activated by
somebody else's coercion (at best it is activated to deal with
the coercion attempt). For a human to change their life situation, the change has to be based on knowledge about the situation and why it can become better, as well as on their own will to change it. To impose
a decision on somebody, even if it is a good one, is not a way to
get them to understand their situation and begin a better life.
In fact, it treats them as a lifeless object and makes them
dependent upon others.
Even if certain things would have been good for us had we chosen them on our own, they may not be good for us when they are imposed from the outside. In that case we are reduced to an
externally controlled robot, and we lose an important aspect of
our humanity, our agency. Agency is necessary for living a good
life since we need to be able to take responsibility for our own
actions (and this responsibility vanishes when somebody else
controls our actions) and to develop our full potential - it is
the active life that is the good life. Without agency we cannot
grow; we can at best be shown how a good life might appear from the outside, but we do not experience it.
A good life is not something we can create for others, because it is not possible to give another human accomplishments, achievements, happiness, self-realization or self-worth. These can only be created from within, through morals, character, willpower and integrity.
Freedom may not be a sufficient condition for living a good life,
but it is a necessary condition.
2.4 What ideologies are compatible with transhumanism?
If transhumanism is to be interpreted as driven by humanist
values rather than any imported value system, then it is clear
that not all political aims or ideologies are compatible with it.
As I have argued above, the basic humanist vision is compatible
with both libertarian rights-based views and not too coercive
liberal democracy, and it is incompatible with most forms of
collectivism. If we now turn to the examples of fascist
transhumanism brought up by James, we can apply the same
analysis.
The Prometheans are calling for loyalty to and sacrifice for a
eugenic super-race. There does not appear to be any real concern
for achieving human happiness or individual excellence, all is
subordinate to the survival of the race. This is a good example
of collectivism at its worst, and obviously has nothing to do
with any form of humanism. Hence it cannot be regarded as
transhumanism.
The debate about the removal of the Xenith site is otherwise a
good example of why a better definition of transhumanist core
values is necessary. The problem was the lack of consensus or
official position on what is and isn't transhumanism, rather than any sizeable support for the ideas. With a better awareness of the
ideology inherent in the term transhumanism the issue would have
been far easier to resolve.
A group that was not dealt with much in James' text is mystical
transhumanism. Although some of the millennialist aspects of the
Singularity concept were mentioned, a far more common thread is
the tendency towards "cybergnosticism" that Mark Dery derided, and the intoxication with cosmic perspectives that combine transhumanist trappings with what is essentially a pre-rational world of Powers, grand evolution towards the highest, and personal transcendence. Through its openness, the transhumanist movement has
acquired a sizeable contingent of people who are more interested
in the mystical overtones than the practical reality.
Mystical transhumanism fails through its irrationality; most
adherents are not seeking to actually become posthuman through
their own efforts, but rather to achieve it thanks to the mercy
of posthuman deities. While this passivity is not per se against
humanism, it seems that it does not produce any incentive towards personal excellence in most of its adherents. If one adds the rationality aspect of transhumanism to the definition, mystical transhumanism is not transhumanism, in the same way that lying on the sofa waiting for the revolution of the proletariat to provide for one is not a correct interpretation of socialism.
The concepts of radical democratic transhumanism mentioned cover
a broad range, from specific issues like Haraway's cyborg feminism, through welfare reforms like guaranteed income, to utopian (nano)socialism and Ken MacLeod's pro-dynamist skepticism of both
the traditional and non-traditional left *and* right. It is not
clear what the connecting thread is, except that they are not
part of the libertarian perspective (which is doubtful in the
case of Ken MacLeod) or the strongly collectivist perspective
(which is doubtful in the case of Singer). It does not appear
that a political program or society could incorporate all or even
most of these.
Still, as a hypothetical program one could imagine something akin
to a modern western liberal democratic welfare state, with a
commitment to equalization of opportunity through voluntary
treatments and a certain level of income redistribution, basic
guaranteed minimal income and medical treatments, and culturally
dominated by ideas of tolerance, cooperation and the rights of
minorities. While there are many practical issues about the
financing of such reforms, how to ensure government
accountability and fairness, and what limits on coercion this
state would pose, the main question is whether this is compatible
with transhumanism.
As far as I can see, such a system may be in accordance with the
humanist conception of the human as a striving, self-realizing
being: help for individual striving is provided, but giving help
or advice is not agency-limiting coercion. The protection from
abuse is also in full accordance with humanist rights. Where
things get iffy is with the secondary consequences of providing these things: since some form of taxation has to be employed to pay for them, means of coercing taxes from the citizens (or their organizations) have to be included, which limits individual freedom beyond the limitations that arise from the reciprocity
of rights (if I claim to have a right to my property, I must also
acknowledge your right to your property or end up in
inconsistency). This could in principle be avoided if the society
was recognized as a voluntary organization where citizens agree
to play by the legal rules and submit taxes for their mutual
benefit.
There are other potential effects counter to humanist aims due to
strong equalization attempts, such as equalization of outcome
(which can occur even if the aim is equalization of opportunity
if citizens achieving good outcomes are relatively penalized both
by taxation and by extra support given to competing outcome-poor
citizens) and the risk of mistaking or switching negative rights
for positive rights (a quite common tendency today, where rights
are often interpreted as entitlements). But given the assumption
that this society rests on a voluntary basis, there is no fundamental ethical reason why it would not be transhumanist. It might be less (or more) effective in supporting human excellence than other possible societies, but if the participants have agreed on this society, it is their problem, and by the same arguments given before against collectivism it would be inconsistent with a transhumanist position to prevent them from doing so. On the other hand, forcing people into agreements to join such a society is, by the same arguments, counter to the humanism underlying transhumanism.
To sum up, the version of radical democratic transhumanism I have
sketched here seems to be in accordance with the basic humanist
values as long as it is voluntary. Adherents to this form of
transhumanism will of course argue that it is a more effective
way of achieving human excellence than other forms such as
libertarian transhumanism, but the effectiveness issue is not as
important in this context as the recognition that both sides
share important core values. That they also have very divergent
outlooks is not a problem, since in principle a voluntary radical
democratic transhumanism (somebody better invent a shorter term!)
can co-exist and cooperate when suitable with a libertarian
transhumanism. Just as individual excellence can be pursued in
many unique ways, there is no reason transhumanism - even with
defined core values - has to be expressed or implemented in a
single model. But the same arguments that lead to humanist
recognition that this pluralism of individual striving must be
respected also lead to respecting people's freedom to choose which social models to join.
It seems to me that the best way of reconciling the different views is to recognize transhumanism as a
meta-ideology, not attempting to prescribe all aspects of
political ideology but providing an underlying set of values and
assumptions.
2.5 The problem with a technology-identified transhumanism
I hope that by now it is fairly clear why I do not accept a
definition of transhumanism only in terms of its commitment to
advanced technology. Such a definition has no core values (other
than possibly that technology itself is somehow a good), and even
when paired with another ideology such as utopian socialism
(nanotechnology will provide us with enough material plenty to
enable a true communist society) the result is meager. Besides
just creating a slightly updated variant of an old ideology, it
also makes this variant strongly dependent on the actual success
of the technology. Should nanotechnology never arrive, or have
properties making it unsuitable for the vision of public domain matter compilers in every corner, then the nanosocialist vision
crashes.
But the future development of technology is highly uncertain
(although plenty of transhumanists suffer from technological
determinism, one of the less desirable inheritances from the
enlightenment progress idea) and subject to not just random
accident but complex social, economic and political factors,
where ideologies and visions are important. Connecting an
ideology too strongly to a certain technology is either a recipe for becoming obsolete or an invitation to technocracy, where the ideology motivates its adherents to implement the
technological vision it believes in, regardless of whether it is
truly efficient or achievable.
Transhumanism is not about technology; it is about placing
technology, the human condition and their interactions in a
humanist context.
This escapes the trap of requiring certain technologies to
achieve transhumanity - we might live in a cruel universe where
most of the technologies discussed on this list are impossible.
But that would not make transhumanism worthless, merely limit
what forms of excellence we can aspire to. Recognizing that
transhumanism is not just about technology also avoids the trap
of assuming technology is all we need, which can lead to
technocracy ("If we can control technology we can control
everything") and technonaivism ("Technology will fix all
problems") - two of the main accusations commonly leveled at
transhumanism.
3. What should transhumanism as a movement strive towards?
I have now dealt with some of the roots of the transhumanist
movement, as well as with a concept of transhumanism that is firmly based in the humanist perspective and is an ideology in itself. Now, what do we do about it as a movement?
It is important to remember that there is a difference between
the movement (which consists of a number of persons adhering to
views they themselves call transhumanist or share with other
self-professed transhumanists) and the ideology of transhumanism.
As James showed in his paper, the movement at present encompasses many mutually incompatible ideologies. As I have argued, only a part of these views are actually compatible with the name transhumanism, and these share the underlying core values found in mainstream transhumanism, be it libertarian or democratic left in outlook.
Redefining the term transhumanism in a narrower sense and with
clearly defined values is trivial; the nontrivial part is to turn
the redefinition into political reality. Maybe one could call it
humanistic transhumanism, but such a redundant term seems far
less appealing than either reclaiming the term transhumanism,
developing a new term, or simply ignoring the naming issue altogether
- it is in many ways a trivial distraction, and a movement that
needs a shared name to exist clearly lacks the shared ideas to
survive.
As James points out, the transhumanist perspective is under
serious attack from a variety of directions, and it is clear that
unless the movement does something it and its views are going to
be relegated to an insignificant underdog position (something
that may actually appeal to some people, since being an underdog
culturally in the west has been viewed as a sign of purity and
righteousness - as well as a way of getting one's own small,
manageable social corner rather than having to deal with complex
issues in a setting where there is a plurality of opinions and
one is just one part among many). However, if there is no transhumanist ideology other than "technology is often good", then it
seems unlikely many people would invest their time and effort to
counteract the often persuasive and passionately delivered
opposition - it is too broad and too shallow. Instead, if
transhumanists are ideologically aware of their underlying
values, of how these values also underlie many other highly regarded institutions in modern democracies (everything from freedom of the press to the dignity of man), and of their long historical roots
connecting them to an immense amount of philosophical and
cultural capital - then they can find not just motivation but
powerful allies.
There is a danger in assuming that the broadest possible movement
is the best. There are always people eager to flock to any banner
but who do not contribute anything, do not share the basic values
or are willing to compromise them in exchange for small
victories. This problem becomes worse if the movement is overly
broad and does not know its own values. When discussion turns to
actual politics, it can be devastating to not have developed
one's own ideas fully.
We have seen many examples during the 20th century of how
strongly ideological groups, even when they wield minimal power
or have relatively few members, have affected broad policies and
shifted political consensus in their direction by exploiting the
lack of ideology of their opponents (e.g. the environmental
movement and parts of the left). What happened was that the less
ideological party made a compromise and moved halfway towards its opponent, while the opponent remained at its original position. The result was that the middle of the road moved ever closer to the ideologues, while the compromisers found their own positions ever more skewed.
We are currently seeing how the corporativist-technocratic
establishment is starting to crack; instead of the until recently mainly technological/administrative discourse in politics, value issues are becoming more important again. This is an opening, but
also a terrible risk. There are many other values out there, and
many of the currently most successful values are not particularly
conducive to human flourishing - be they religious or political
fundamentalism, romantic nationalism with fascist overtones,
national security as more important than human rights in the US
or European "throw out the foreigners so we get more welfare" racism
in Italy, Holland, Denmark and Austria.
Virginia Postrel's description of the struggle between dynamism
and stasism may be a (helpful) oversimplification, but one thing
is clear: the anti-humanist ideas above are clearly on the
stasist side in that they seek to control and limit the future
and human potential. On the dynamist side we find ideas about the
right of humans to set up their own goals, to create their own
life projects freely - necessary conditions for transhumanism.
Dynamism is in the end about having an open-ended society, where
creativity and freedom are allowed to generate progress even when
it is unpredictable. Transhumanism as a humanist ideology would
be squarely on this side.
In the end we as transhumanists want to live in a dynamistic
world where our own aims of personal and social excellence are
possible to pursue to their fullest. But to reach this we clearly
need to 1) recognize our core values, 2) extend our
ideology/ideologies from these values, and 3) show other people the moral and practical benefits of embracing them as practical
policies. This is a large task, and cannot be done in the wrong
order.
Transhumanism has so far never truly fought a political battle, neither as an ideology nor as a movement of people. It has never produced any truly hard and innovative challenges to the currently dominant ideological systems. That has to change.
The ideologies of today in general have their roots in other
material and cultural conditions. If we seek to retain ties to
ideologies that at best date from 1850 (and in several cases
long before that) because they are our only source of values,
then we are bound to lose touch with reality. We cannot rely on
naive faith in technology.
But we can recognize our core humanist values and build on them.
In a changing world, they are surprisingly stable and likely the
last things we will ever change in ourselves in some unimaginable
posthuman future. We can strive for human flourishing and
recognize what is and isn't transhumanism in this sense. We can
extend our ideas beyond a mere liking for technology into an
integrated ideological framework broad enough to encompass a wide spectrum of practical dynamist positions - while at the same time having clear enough goals to avoid becoming shallow and
opportunistic.
I cannot say whether things will get better if we change; what I
can say is they must change if they are to get better.
- G. C. Lichtenberg