Re: Lanier essay of 2001.12.04

From: Samantha Atkins (samantha@objectent.com)
Date: Thu Dec 13 2001 - 13:21:42 MST


Anders Sandberg wrote:
>
> On Wed, Dec 12, 2001 at 11:20:05PM -0800, Samantha Atkins wrote:
> >
> > I share that fear considering what will happen if the current
> > general society determines in detail what kind of human beings
> > it wants for the next generation. Overall, given reasonably
> > democratic free-choice this will result in the proliferation of
> > a number of traits many today would consider desirable and the
> > suppression of a number of others, including some arguably part
> > of high intelligence and creativity.
>
> What meaning do you put in the word 'society'? Do you mean it like many
> Swedes would use it, as 'the government', or in the wider sense of 'our
> social environment'? In the first case, the solution is fairly clear:
> make sure the government is not allowed to interfere with our
> reproductive choices in any way.

Insufficient if the vast majority of free agents make choices destructive
enough of what we value and of what is needed to advance.

> In the second case (which seems more
> likely) you have the problem of claiming it determines what humans are
> born *in detail*. Sure, there are prevalent ideals and values that
> affect the decisions of parents, but they do not control them 100% -
> given the diversity of modern societies we instead see that while most
> people do like most others do, there are a sizeable fraction that behave
> in other ways. While there are many pressures to conform, there are also
> counterpressures for individuality and rebellion.

Sure. But I still have a bit of worry that I believe is valid, given the
general individual and group lack of foresight and wisdom.

>
> > In short, we are in a sort of Catch-22. We can't seem to get
> > too much better without getting smarter and more capable and we
> > can't get smarter and more capable without using certain
> > technologies with more wisdom than we generally possess.
>
> I think you are seeing things as a vicious circle, while I think it is a
> positive feedback loop: if people can help themselves a bit with
> technology, that will affect society in positive ways that will in turn
> help develop better technology, and so on. There may still be unwanted
> side effects, but in general society seems to be made up of a
> sufficiently complex mixture of people and views not to get trapped in
> dead ends very easily.
>

Except that if we make enough mistakes of a serious enough kind, then of
course the "game" is simply over. I have less faith that current humans
are both capable and balanced enough to use technology wisely enough to
avoid catastrophe. Given that, we have no choice but to go on, attempt to
balance the boat, and provide a bit of steering where we can.
 
> > I don't assume things would get worse. I simply assume, based
> > on a lot of observation, that some pretty screwed things will be
> > done with any powerful new technology, not just or even
> > necessarily predominantly good things. There is reason for some
> > caution and safeguards.
>
> Sure. Note that my criticisms of the idea that we are socially better
> off unable to change ourselves are not directed at you or anybody else,
> but the idea itself.
>

I didn't take it personally. Nor is my remark meant as just my personal
view. I believe there are legitimate reasons for concern that weren't so
clearly acknowledged in the former post.

> > I am not so much worried about a repressive society as about not
> > especially wise or more or less rational or ethical human beings
> > and human organizations wielding powers increasingly able to
> > really screw us up in perhaps a terminal degree.
>
> Then the question is: who can we allow to wield these technologies?
>
> The relinquishment argument answers 'nobody', but that clearly fails
> since it is enough for somebody to develop a technology and the genie is
> out of the bottle.
>

It also fails because some of these technologies are essential to solving
problems, perhaps insoluble without them, that are crucial to the
well-being of all humans.

 
> The answer 'the government, since they are elected by the people' is
> popular, but has the problem that even a democratic government may
> misbehave due to public choice effects, lack of accountability and
> especially its top-down, one-size fits all approach. The real danger in
> centralized schemes is that there is a single point of failure
> (accountability and power division lessens this problem a bit, but does
> not wholly remove it), and a bad decision made will affect all.
>

It is certainly true that governments are no more trustworthy than
individuals.
 
> Another answer to the question is 'the people themselves'. This has the
> advantage of being a bottom-up, self-organizing approach, where local
> information and individual values can be used by people. It also implies
> that bad values or stupid people will affect technology use; whether
> this is disastrous or not depends very much on the technology: if
> individual people make bad reproductive choices it will affect them and
> their families, while dangerous experiments with black holes in the
> kitchen may threaten everybody. In situations where the effects of
> misuse are limited and can be curbed by holding users accountable, this
> approach seems preferable to the government approach, and I think this
> holds true for many reproductive technologies. It is less clear how well
> it works for black holes.
>

Self-organization works where there is sufficient leeway for an organic
balance to evolve. I am not always so sure there is that much leeway. If
enough people make bad choices about, say, designing the next generation,
then the next generation is more deeply screwed up than this one. The
effect does not end at the supposed border of the family.
 
> There are general ways of making people behave more rationally and
> ethically, such as accountability, transparency and setting up
> institutions which, while fallible, help bias behavior in beneficial
> ways and protect from the effects of mistakes (insurance companies, for
> example). I think more thought on how such means can be applied in
> reproduction would be helpful.
>

How exactly will you make people accountable for unintended and
unpredictable consequences? How do you make insurance rational rather
than the statistical greed machine it often (and unfortunately) becomes
today?
 
> >
> > I do not worry about super-soldiers. I do worry about people
> > become obsolete and no consideration given to their well-being.
> > Whether they are made obsolete by genetic design of superior new
> > humans or by AI or robotics or something else is immaterial to
> > the basic concern.
>
> Isn't the issue really lack of concern for their well-being, rather than
> being made obsolete? If I become obsolete in some sense, it might be a
> blow to my self-esteem, but if I can still grow and flourish as a human
> being I am still well off.
>

Yes. That is my concern. The old "nature red in tooth and claw" memes
would lead to those who are less efficient in the new landscape being of
no concern at all. In my opinion we can do a lot better than that, and we
have the opportunity to.

- samantha



This archive was generated by hypermail 2b30 : Sat May 11 2002 - 17:44:26 MDT