From: Anders Sandberg (firstname.lastname@example.org)
Date: Thu Jan 17 2002 - 17:06:42 MST
On Tue, Jan 15, 2002 at 11:25:58AM -0500, Smigrodzki, Rafal wrote:
> Anders Sandberg [mailto:email@example.com] wrote:
> I think it is wrong to identify transhumanism with the technology part.
> Mark is right in pointing at it as a distinguishing aspect of
> transhumanism, but it is not enough to distinguish transhumanism in any
> reasonable sense of the word from many other views. It is also easily
> made obsolete as technology develops.
> ### But then how will you in a concise manner refer to the "central meme"?
> We need to have a name for it - it's easier to explain it to strangers if
> there is a word for it.
Actually, I think the idea of technology being the central meme
is horribly wrong, and if we present ourselves based on *that*
we will lose much.
As Mark Walker pointed out, technological transformation is
really just one part of a triad of ideas
(http://www.markalanwalker.com/what.htm : "1. The Technology
Thesis: Within a hundred years humanity will possess the
technology to reengineer Homo sapiens. 2. The Ideal Thesis: The
goal in the reengineering task is to perfect ourselves. 3. The
Ethical Thesis: We ought to employ technology to realize this
ideal."). The technology part is just a means to an end, and if
you leave out this end transhumanism becomes arbitrary. It
cannot be a motivator for anything else than promoting cool
toys. Transhumanism is commonly criticized as being mere
technophilia, and seeing the core of transhumanism as technology
reinforces this. There is nothing wrong with liking technology,
but aren't we aiming higher than that? Saying that tech can
change us doesn't say what changes are desirable or not.
The central meme of socialism isn't government control, the
central meme of libertarianism isn't free markets. They are just
means to ends (equality and liberty, among others).
I think the concept "the central meme of transhumanism" ought to
be staked through the heart and buried behind an event horizon -
it is too dangerous and seductive through its apparent
simplicity, catchiness and ecumenism. :-)
> First, how do you motivate the exclusion of nazis? You cannot motivate
> it by saying they are not transhuman, so you have to motivate it from
> some other core values, or by arguing that while their views are
> acceptable, they hurt the movement in a practical political sense.
> ### All you need is say they are bad people and you do not want to have
> anything to do with them. If you are at a quantum physics conference, you do
> not need to prove that the scientific views of Nazis are wrong, you just
> kick them out because they are in your opinion evil.
If somebody brings up nazi ideas in quantum physics, they are
clearly not speaking in the right forum, and hence excluding
them doesn't have to be based on a value statement. But what if
the nazi is speaking about an issue in a suitable forum, like a
political one? Can you exclude him for being evil in your
opinion? In that case, what about freedom of speech for people
with unpopular opinions? (Of course, in privately run forums I
would say the proprietor has the right to set up the debate
limits as he or she wishes, but there is still a great deal of
difference between somebody who bases his actions on clear
principles and somebody who acts depending on some unknowable
whim.)
I can, given my view that transhumanism is indeed based in
humanism, firmly say that a nazi post (even when dealing with
the subject) in a transhumanist forum is wrong in the same way:
nazism is fundamentally incompatible with transhumanism.
> Same with an ecumenical
> transhumanist organization devoted to the furtherance of human enhancement
> technologies - you agree to disagree on a lot of issues (like Rifkinites and
> the church when talking about reproductive medicine) but you work together
> on things you can agree on. If somebody's views are beyond the pale, you do
> not argue - you exclude them by fiat.
This is a great example of why core values are important. If you
were sitting in the same organization as Rifkin and the Church
and only trying to base your decisions on the view that humans
can technologically enhance themselves, you would never have any
arguments in favor of doing this other than some practical
benefits, and they would always have firm arguments why it was
unethical. People may agree to do something because it is very
practical, but if they at the same time learn that it is
unethical they will gladly give it up for something else
whenever possible. When somebody constantly undermines your
ethical position any real political progress will be temporary -
"OK, we will agree to your proposal to allow research on bionic
implants, but only for restoring health - the bill has to
include a provision saying that enhancements are unethical and
any research towards them is explicitly forbidden. Similarly,
for the reproductive cloning, it will be allowed after the
parents have undergone counselling with a number of genetic
counsellors *we* select."
Of course one can work together with people with fundamentally
different core values - often it is necessary. But if you do not
have core values that reach beyond a liking for certain
technology you will find yourself subverted and your work
becoming used to further agendas far different from your own.
> ### If you truly believe eugenics is evil, exclude eugenicists, or if you
> can't, form an organization which specifically opposes that.
On what grounds? Because the word has a bad ring? Because you
say so? What makes "Transhumanists against Eugenics" more valid
than "Transhumanists against Democracy"? In your view there is
no reason to favor one over the other, except political
expediency. I would say that there are very good reasons to favor
the first, not just out of expediency but because coercive
policies tend to hinder human development and can easily be
abused, and that democracy in most of its forms is far more
humanistic than any other form of government.
> Second, and I think this is a more severe problem, is that trying to
> appeal to a lot of people by having more diluted values or ideology will
> mean that you get more people importing their own favorite ideologies
> into the movement and more people who don't understand whatever
> central values there are and are more into it for the community or
> ### You can always have a restricted-entry organization for the special
> people, and a free-entry club for the proles. The two together can achieve
> more, without spoiling the experience for anybody (at least initially).
Inner and Outer circles seldom work, since they tend to get out
of sync (as well as the usual sociopsychological problems of
in-groups and out-groups, power struggles and "my conspiracy is
better than yours"). The "elite" may have the core values and
ideology, but without them the other organisation will start
drifting ideologically on its own.
A good example is the Swedish liberal party Folkpartiet. When
asked, their chief ideologist gave us quite transhumanist and
libertarian answers to a number of questions. At the same time,
the politicians of the party have been protecting liberty by
supporting bans on behaviors that may be dangerous (like driving
a car without a seatbelt, harsh rules against drug use and
prostitution), more regulation of the markets in the interests
of equality and freedom (including the state monopoly on alcohol
sales), and supported many very restrictive laws on research in
genetics. There is a total discrepancy between the "inner party"
and the "outer party" that actually does the political work.
This is what not taking ideology seriously leads to.
> As an example, compare this list now with how it was several
> years back in terms of actual idea production: the huge increase in
> diversity of people on the list has not led to an increase in quality or
> even quantity of ideas. Some of the old issues of Extropy are composed
> to a large extent of threads from the list, with amazing insight and
> ### But then, the low-hanging fruit has been picked already. How much more can
> you say about the right way of approaching the Singularity without new data?
> New arrivals cannot keep inventing the wheel all over again, and building a
> spaceship takes time.
Actually, many of the ideas that were discussed were hardly
low-hanging fruit: transhumanist architecture, utility fogs, idea
futures, reputation systems etc.
And there is plenty of important stuff this side of the
singularity: how to shape societies, economies, institutions,
legal systems and careers to handle the predicted dynamic high
tech society? How to handle the problem of destruction from
replicating technology? What ethical, aesthetic and cultural
principles to promote in order to produce a transhumanist
society? How would a transhumanist society even work? - these
questions have hardly been developed, new data is arriving daily
and they ought to be high priority for all of us.
> Third, what use are the masses?
> ### Demonstrations, marches, security, letter-writing. Don't underestimate
> the value of a bunch of stalwart believers, even if their thinking is not as
> fine as their leadership's.
Having a lot of people supporting you is good if your ideas are
regarded as part of the serious discussion. That doesn't happen
because you have many supporters, but rather because your ideas
are shown to be relevant to current policies, have a cultural
and ideological impact and convince key people. *Then* the big
campaigns might be truly useful, but before that it is a blunt
weapon. Most elites regard million-man marches as relevant more
because they show that somebody has organisational ability than
because there are a lot of people out there.
> But I question
> the point in having a million members in WTA or ExI if their membership
> doesn't *mean* anything. A far more successful strategy is to create a
> real world view (and a world-view is far more than a view of
> technology!), make the intellectuals and cultural leaders recognize it
> and then watch the mainstream move in your direction. It has worked
> well in the past.
> ### Why not take both roads?
I want to invest effort rationally. The payoff in the
intellectual sphere is far greater than in the mainstream
sphere. If I can convince one intellectual (in Hayek's sense of
the group of people in society that process and spread ideas),
then he will go on convincing other intellectuals and spreading
the idea to the masses. Intellectuals certainly listen to the
masses and do pick up ideas, but usually the memeflow is in the
other direction. This means that the same effort in spreading an
idea is likely to produce a far greater (and faster replicating)
return if I concentrate on the cultural leaders.
At the same time, a certain mix is beneficial since there is
also the issue of declining marginal effects of my effort. Using
one unit of effort produces (say) 1 unit of intellectual memes,
two 1.4 units, three 1.7 units, four 2 units and so on
(sqrt(effort)). Meanwhile (say) preaching to the masses produces
half of that. In that case, the total result of x units of
effort on the intellectuals and T-x units on the masses gives me
sqrt(x)+.5*sqrt(T-x) units of result. Setting the derivative
1/(2*sqrt(x)) - 1/(4*sqrt(T-x)) to zero gives T-x = x/4, i.e. a
maximum at x=(4/5)T units - spend most of your effort on the
intellectuals, but some on others.
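As a quick numerical sanity check of that optimum, here is a short
Python sketch (the value of T and the grid size are arbitrary
illustrative choices of mine; the payoff function is the one from the
example above):

```python
import math

T = 10.0  # total units of effort to allocate

def payoff(x):
    # sqrt(x) from effort aimed at the intellectuals, plus the
    # half-as-effective sqrt(T - x) from effort aimed at everyone else
    return math.sqrt(x) + 0.5 * math.sqrt(T - x)

# scan allocations on a fine grid and keep the best one
best_x = max((i * T / 10000 for i in range(10001)), key=payoff)
print(best_x / T)  # fraction of effort going to the intellectuals: 0.8
```

The grid search agrees with the calculus: the maximum sits at x/T = 4/5
regardless of the particular value of T.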
Note that the above example assumed a fairly small difference in
effect between cultural leaders and the rest - if it is larger
the specialization becomes even stronger.
Ouch, I dislike talking about the "masses". Makes me sound like
a technocrat. I *love* talking to random people! But to have
maximal cultural impact you have to talk to the right people.
-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
firstname.lastname@example.org            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 13:37:35 MST