RE: Why the future won't accommodate everyone!

From: Sarah Marr (sarah.marr@dial.pipex.com)
Date: Fri Mar 17 2000 - 09:45:00 MST


Mike Lorrey wrote, in reply to my post:

> They own it [stock] whether they know it or not...

Perhaps, but I don't think indirect ownership brings about the benefits
suggested in the original post; i.e. it doesn't provide freedom from
governmental interference.

> Since increased artificial mind augmentation will add increased
> logical abilities to the average person, then the amount of
> illogical reasoning will by definition decline...

That doesn't follow, Mike. Increased logical reasoning does not
automatically decrease illogical reasoning. For logical reasoning to 'win
out' it has to be considered preferable to illogical reasoning, and
there's no guarantee that augmentation will, or indeed should, bring that
about.

> So long as productivity growth and resource availability increase
> faster than population growth, then we remain on the track toward
> relatively unlimited resources.

But not on the track toward unrestricted and equal access to those
resources for all, which was my point.

> The market will decide what is good and what is bad.

No, it won't. It will merely decide what the majority consider good and bad,
as it does now. You can't claim universal 'goodness' or 'badness' for
something.

> Individuals, as they gain more logical, computational abilities,
> will understand more and more that power domination strategies
> are not beneficial to one's true self-interest.

Why wouldn't they be? Note, I'm not asking why they _are_ in one's
self-interest; I'm asking why they are actively _against_ that best
interest.

> > Oh, I don't know. Perhaps 'guilt' and 'loneliness' will be
> > considered negative, 'bad', emotions, and someone will have
> > decided to wire them out, as you suggested.
>
> Anyone who thinks that eliminating emotions is a good thing has
> some issues to work out. Being transhuman does not mean less
> human, it means more human, better human. I don't see how you
> can achieve this by subtraction.

I agree with you one hundred percent: that was the point of my comment.

> > We are not about to overcome all future suffering, and the
> > removal of death, and resurrection of the dead, will not end
> > all evil.
>
> If our projections of the future are reasonably accurate...
<snip>
> I estimate that this means about a 90% decrease in the fatality rate.

Sorry, my comment wasn't clear. I should have written, "We are not about to
overcome all future suffering. Furthermore, the removal of death, and
resurrection of the dead, will not end all evil." I do think removal of
death is entirely possible, but that won't end evil or suffering. Why should
it?

> Nobody who is not frozen can truly be resurrected...

I agree.

> Many, but not all, people are happy to die. Many, but not all,
> people are not happy with their lives... see a resemblance?

Yes, but not a one-to-one mapping by any means.

> However, there are things that all sane people do agree on. Those
> things can be defined as 'good'.

There are very few things that _all_ sane people agree on, and you are in
grave danger of creating a circular definition of sanity, whereby those who
are to be considered sane are those who agree on certain things.

> Also, due to evolutionary fitness,
> there are things which are more good or more bad than others.

This is, at best, a restrictive definition of 'good' and 'bad': the terms
go far beyond evolutionary fitness. For an assessment of something's
being 'good' or 'bad' to rest on its evolutionary value, all other
considerations (cultural, societal, moral...) would have to be
indeterminate or ignored. For me, that's a 'bad' thing.

Sarah

       ------- ------------------------------------------------------
       w w w. | above the Moon all things are eternal (Cic., 'De Republica')
       seraph | http://www.seraph.org
       .o r g | sarah.marr@dial.pipex.com


