On Fri, Nov 09, 2001 at 08:07:48AM -0500, Eliezer S. Yudkowsky wrote:
> Anders Sandberg wrote:
> > I'm not discussing the posthuman world here, but the world of tomorrow.
> > While people do worry about scenarios where superintelligent grey goo
> > from the stars eats humanity, the most powerful impact is with the idea
> > that within a few years you *must* get yourself a better brain in order
> > to avoid becoming poor and powerless. That slots neatly into a lot of
> > political pre-programming and gives us a ready-made organized political
> > resistance to our projects. To worsen things, most people are used to
> > ideologies that prescribe the same behavior for all humans, and do not
> > accept the idea that other humans may have other goals.
> The real answer here is simply "Are these technologies self-replicating?"
> Non-self-replicating technologies will likely be expensive and limited to
> the First World, at least at first.
You mean like cellular phones, which are spreading across Somalia?
> Self-replicating technologies are not
> expensive unless an enforceable patent exists. Genetic engineering is
> self-replicating but patented. Software is self-replicating but
> unenforceably copyrighted. Nanotechnology is self-replicating at
> sufficiently advanced levels. Intelligence enhancement technology is
> "self-replicating" if there's even one philanthropist among the enhanced,
> although it may take a while for that process to complete. And the
> nanotech-plus-Friendly-SI scenario is not only self-replicating, it
> bypasses the existing economic system completely.
> I think that's the counterslogan: "Posthumanity will bypass the existing
> economic system and offer everyone the same opportunities." Then you have
> to argue it. But it makes a good opening line.
What about "God will bypass the existing economic system and make you
all rich and equal" ? It is the same argument, and will get roughly the
same reception (perhaps even a bit more favorable than the posthuman
one). While there might be a good way of arguing it without having to
rely on too much handwaving and faith, it is still an extreme uphill
battle. Meanwhile, those who think that genetic engineering, AI, nanotech
and all other new technology will make the current inequalities *worse* sit
on top of the memetic hill, with a ready-made paradigm that many people
buy into - consciously or not - and can easily throw down stones. Take a
look at _The ETC CENTURY: Erosion, Technological Transformation and
Corporate Concentration in the 21st Century_
(http://www.rafi.org/web/docus/pdfs/DD99_1-2.pdf) for an example - these
are the kinds of views that are gaining more and more respect in many
international organisations close to the UN and at major environmental
conferences. The trend we observed when writing our book on the
genetics debate was that views that once belonged to the nutty red-green
fringe have become not just acceptable but accepted by many politicians -
thanks to a long and consistent campaign of tying these views to
current issues while pursuing the long-term goal of influencing the
ideological climate of the future.
Honestly, I think your argument shows a serious error many
transhumanists make. We assume that since various future technologies may
fix our current problems (although what happens if self-replicating
systems end up patented and under the control of some groups?), the
current concerns are not that important. That is terribly, sadly wrong.
The current situation will *shape* what technologies become developed
and for what uses they will be developed. If we just ignore current
concerns with "in nanotopia sexism/racism/poverty/death/whatever will no
longer be a problem" we will 1) make people think we are total airheads
with no connection to reality, 2) decrease our ability to influence the
important decisions about the future since we marginalise ourselves and
3) leave the field open to those who have opposing long-range views but
tie them to the current situation.
RAFI wants centralized international control over nanotech (and they are
moving towards putting it on the 2002 environmental agenda!) in order to
prevent a corporate dystopia. If they start to think AI can get
somewhere, they will start to work on the same thing, and will likely get
a lot of support from other groups who, for their own reasons, think that
AI would be a bad thing. Even unenforceable laws have dangerous effects,
especially when combined with ideas that certain things must be
prevented at any cost.
I think far too many transhumanists are ignoring the interface between
transhumanism and the real world, and this weakens transhumanism
tremendously. We need qualified analysis of where we are, where we want
to go and how to go about it if we want to convince anybody else - as
well as get our own thinking into shape.
-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
email@example.com                    http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y
This archive was generated by hypermail 2b30 : Sat May 11 2002 - 17:44:18 MDT