Re: Lanier essay of 2001.12.04

From: Anders Sandberg (asa@nada.kth.se)
Date: Thu Dec 13 2001 - 08:30:29 MST


On Wed, Dec 12, 2001 at 11:20:05PM -0800, Samantha Atkins wrote:
>
> I share that fear considering what will happen if the current
> general society determines in detail what kind of human beings
> it wants for the next generation. Overall, given reasonably
> democratic free-choice this will result in the proliferation of
> a number of traits many today would consider desirable and the
> suppression of a number of others, including some arguably part
> of high intelligence and creativity.

What meaning do you put in the word 'society'? Do you mean it as many
Swedes would use it, as 'the government', or in the wider sense of 'our
social environment'? In the first case, the solution is fairly clear:
make sure the government is not allowed to interfere with our
reproductive choices in any way. In the second case (which seems more
likely) there is the problem of claiming that it determines *in detail*
what kind of humans are born. Sure, there are prevalent ideals and
values that affect the decisions of parents, but they do not control
them 100% - given the diversity of modern societies we see instead that
while most people do as most others do, a sizeable fraction behaves in
other ways. While there are many pressures to conform, there are also
counterpressures for individuality and rebellion.

> In short, we are in a sort of Catch-22. We can't seem to get
> much better without getting smarter and more capable, and we
> can't get smarter and more capable without using certain
> technologies with more wisdom than we generally possess.

I think you are seeing this as a vicious circle, while I see it as a
positive feedback loop: if people can help themselves a bit with
technology, that will affect society in positive ways that will in turn
help develop better technology, and so on. There may still be unwanted
side effects, but in general society seems to be made up of a
sufficiently complex mixture of people and views not to get trapped in
dead ends very easily.

> > 2) the assumption that things will get worse if humans are allowed to
> > take responsibility for them. This is largely based on the ease with
> > which we can come up with bad scenarios and examples of mistreatment
> > in the past, while leaving out all the good things. One reason is of
> > course that good news is not very exciting, so it does not get
> > trumpeted about as much or written about in history books. But there
> > is also an assumption here that mankind is always fallen, and that
> > increased freedom must always result in increased evil actions rather
> > than increased evil and good actions.
>
> I don't assume things would get worse. I simply assume, based
> on a lot of observation, that some pretty screwed-up things will
> be done with any powerful new technology, not just or even
> necessarily predominantly good things. There is reason for some
> caution and safeguards.

Sure. Note that my criticisms of the idea that we are socially better
off unable to change ourselves are not directed at you or anybody else,
but at the idea itself.
  
> > 3) The assumption that the best way of handling this potential risk is
> > to abstain from a technology that could be bad in a repressive society,
> > rather than seek to deal with the repressive society. If we are worried
> > about sexism, maybe we should see what we can do about sexism in our own
> > society. If we are worried about cloning being used to create carbon
> > copy people, maybe the right way to handle it is to strengthen
> > individualism?
>
> I am not so much worried about a repressive society as about
> human beings and human organizations that are not especially
> wise, rational or ethical wielding powers increasingly able to
> really screw us up, perhaps to a terminal degree.

Then the question is: who can we allow to wield these technologies?

The relinquishment argument answers 'nobody', but that clearly fails,
since it is enough for a single actor to develop a technology for the
genie to be out of the bottle.

The answer 'the government, since they are elected by the people' is
popular, but has the problem that even a democratic government may
misbehave due to public choice effects, lack of accountability and
especially its top-down, one-size-fits-all approach. The real danger in
centralized schemes is that there is a single point of failure
(accountability and division of power lessen this problem a bit, but do
not wholly remove it), and a bad decision will affect everyone.

Another answer to the question is 'the people themselves'. This has the
advantage of being a bottom-up, self-organizing approach, where local
information and individual values can be used by the people involved.
It also implies that bad values or stupid people will affect technology
use; whether this is disastrous or not depends very much on the
technology: if individual people make bad reproductive choices it will
affect them and their families, while dangerous experiments with black
holes in the kitchen may threaten everybody. In situations where the
effects of misuse are limited and can be curbed by holding users
accountable, this approach seems preferable to the government approach,
and I think this holds true for many reproductive technologies. It is
less clear how well it works for black holes.

There are general ways of making people behave more rationally and
ethically, such as accountability, transparency and setting up
institutions which, while fallible, help bias behavior in beneficial
ways and protect against the effects of mistakes (insurance companies,
for example). I think more thought on how such means can be applied to
reproduction would be helpful.
  
> > > Another (somewhat unrelated) scary thought experiment:
> > >
> > > People fear cloning/GE used to produce "designer humans" that
> > > are better than organic/natural humans. Has anyone discussed
> > > the fear of using cloning/GE as a weapon?
>
> I do not worry about super-soldiers. I do worry about people
> becoming obsolete with no consideration given to their
> well-being. Whether they are made obsolete by genetic design of
> superior new humans or by AI or robotics or something else is
> immaterial to the basic concern.

Isn't the issue really the lack of concern for their well-being, rather
than the obsolescence itself? If I become obsolete in some sense, it
might be a blow to my self-esteem, but if I can still grow and flourish
as a human being, I am still well off.

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y


