Re: >H RE: Present dangers to transhumanism

J. R. Molloy (jr@shasta.com)
Wed, 1 Sep 1999 22:30:35 -0700

From: Stan Kretler <skretler@infinitefaculty.com>
> Seems important to think about these things. Is anyone here worried
> about slipping into a psychosis of nihilism once their IQs are high
> enough that they realize that their goals have no philosophical
> foundation (or *might* have no foundation)?

Nihilism need not promote psychosis; on the contrary, it might require certifiable psychotics to take nihilism really seriously.
See, for example, The American Nihilism Association: http://www.access.digex.net/~kknisely/nihilism.html

The job of promoting psychosis belongs more to religion and politics than to philosophy, IMO.

Human goals have biological foundations. Philosophy, as Justice Douglas pointed out, merely serves to justify and rationalize those congenitally derived goals: it gives us reasons to do what we already want to do. When you understand that your goals have no philosophical foundation, the time has come to abandon philosophy for a more honest approach, a more direct experience of reality.

> Worrying about grey goo seems less important (and less interesting) than
> worrying about 400 IQ nihilists with a *lot* of technological power.

Oh, you mean _those_ nihilists.
But do they pose more of a threat than religious fanatics and demagogic politicos?
I think psychosis constitutes the real problem here. That, and something we might call MSGI, or mass sociogenic illness. An insane humanity can't do much to protect itself from runaway technology... or from anything else, for that matter. So what can extropians do to bring Homo sapiens to its senses and to resolve social conflicts?
It isn't very consoling to note that no one else has ever managed to solve this problem either.

> If this is an old topic, perhaps someone could point me to a place where
> it has already been discussed.

Let's not go back as far as Plato's _Republic_ (I had the flu that week).

The Singularity will have to do what the Axial Age cultures of 2,500 years ago (Persia, Egypt, China, et al.) could not: preserve civilization until Homo sapiens can transcend its own congenital (and quasi-diabolical) selfishness.
But I agree with Singularitarians who suspect that an SI might improve all life by culling humanity from the galaxy.
That in itself rather endears the Singularity to me when I reflect on how humanity has treated the likes of Galileo, Socrates, and even the Buddha (who would have been kicked out of India but for his dad's royal status).

If an SI decides who shall live and who shall die, that should disturb me no more than the blind and stupid caprice that has reigned since long before philosophers and Shakespeare identified it. I think the real danger to transhumanism resides in the territorial imperative of tribalistic technophiles and technophobes alike. I treasure the "naivete" of techies because it indicates the kind of innocence necessary to overcome our common burden of sociobiology (in the throes of which Homo sapiens devises nihilism, religiosity, and ideology to cover its tribal bloodlust).

Scientists and engineers may find it difficult to build or evolve an SI as lacking in compassion and honesty as Homo sapiens has shown itself to be throughout its whole ugly history. (Five thousand wars in the last three thousand years.) Apparently, our roots go back to Cro-Magnon killer apes who slaughtered the peaceful Neanderthals. The present dangers to transhumanism parallel the present dangers to civilization itself, viz., sufficient biological, chemical, and nuclear weapons of mass destruction to kill us all several times over, along with insufficient wisdom to deter their use.

"Who but a lunatic could remorse the passing of such a wretched world." --Alligator Grundy