Re: (Ir)Relevance of Transhumanity in a Strong AI Future

Anders Sandberg (asa@nada.kth.se)
24 Jan 1998 00:40:03 +0100


DOUG.BAILEY@ey.com writes:

> What relevance does transhumanity have in a future where the strong AI
> hypothesis turns out to be true?

First, a semantic quibble: I think you misuse the term "strong AI" to
mean "superintelligent AI". The term already has a standard use in
discussions about AI, where the strong AI hypothesis refers to the idea
that artificial intelligence is possible and implementable.

BTW, this is yet another incarnation of an old debate. Look in the
archives for more.

> The intuitive answers appears to be "very little". To the extent
> that we can create artificial minds that are more intelligent that
> the human mind, transhumanity would appear to have the same
> significance in such a world as trans-raccoonism has today.

I don't trust intuitive answers. I think you have fallen for what I
sometimes call the "cult of superintelligence": the assumption that
the most intelligent entity will by necessity run the show. It is by
no means obvious that this is true; even on Earth today there are
plenty of species that couldn't care less about the doings of humans
unless we start to mess up the biosphere *strongly*. And it is not
inconceivable that a superintelligent AI could be programmed or
convinced to behave in a humanocentric fashion; much weirder
motivations and value systems are possible.

> We do not discuss trans-raccoonism that much since even a
> transraccoon would still be less intelligent than a human. Put
> another way, in terms of future boundaries of possibilities it makes
> sense only to concentrate attention on the group that would possess
> the highest level of intelligence.

I think this is a mistake. One should concentrate on the group or
groups that have the most influence on events, regardless of their
intelligence level. Of course, SI makes a very likely candidate for
this, but one should not blithely assume that just because an AI is
superintelligent it would run the world (cf. Stanislaw Lem's _Golem
XIV_ where the SI had better things to do).

> I submit that such discourse is irrelevant. Does it matter what happens to
> humanity in a strong AI future? The raccoon example serves as a useful
> parallel. Is there significant (or any) discourse on the fate of raccoons (or
> any nonhuman biological lifeform) in a transhuman future (regardless of the
> variation)? Not that I am aware of. Why? Its irrelevant. What does it
> matter what happens to raccoons in the future?

There have been discussions about trans- and posthuman ecology. There
are plenty of us on the list who regard biological diversity as
desirable, and I recall several discussions over the years, on this
and similar lists, about animal rights, ecological modifications and
the possibility of uplifting animals (not to mention the extropian
squirrels). It should be noted that there are transhumanists who
regard the fate of non-intelligent species as very relevant. I think
this shows that what values the dominant entities will express is
probably a very relevant question, and that some conceivable value
sets might well involve the welfare of raccoons.

> I invite others views on such a future and ways in which humanity
> (transhumanity) can maintain relevance. Then again, is "relevance"
> such a noble goal in the first place?

My partial answer is that, no, relevance is not necessarily a noble
or important goal (it is an expression of our evolutionarily developed
tendency to value our survival and social freedom), but I seriously
doubt many on this list wouldn't do their best to retain their
relevance in the contexts they care about. Note that we could become
irrelevant in a context we do not care about, like the number of
memorized decimals of pi (to use a silly example), even if that were
very important to some other group of entities.

However, I see one likely way for us humans and transhumans to retain
our relevance even in a strongly posthuman era. And that is to become
an integral part of the emerging superintelligences. After all,
intelligence amplification provides a technology with obvious
short-term benefits, and can in the long run be combined with ever
more advanced AI systems (in fact, it is reasonable to guess that
human-supported AI will be both easier to develop and much more
profitable than stand-alone AI).

If we play our cards well, our mental structure and values could
remain a part of the emerging superintelligences just as our brains
contain a trans-fish, a trans-lizard and a trans-marsupial in the form
of the spinal cord, limbic system and cortex. We cannot do without
them, but at the same time they have been subsumed into something
greater. I suggest the same fate for humanity: we will not be replaced
by the posthumans, we will become parts of the posthumans, their
seeds.

Hidden in my core
a human soul template.
Consciousness pearl

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y