Re: IA vs. AI was: longevity vs singularity

phil osborn (philosborn@hotmail.com)
Sat, 31 Jul 1999 14:47:01 PDT

It is interesting to note that the assumptions which you do not share are implicit in much of Drexler's work, as well as that of fellow Foresight conspirators such as Halperin. The issue of whether an SI would be hostile, friendly, or merely casually indifferent is certainly worth discussing. I absolutely reject the silly assumption, in both Engines of Creation and Halperin's novels, that somehow we are going to be able to keep a lid on AI. (In fact, according to Conrad Schneiker - co-inventor of the scanning/tunneling array chip - Drexler also originally thought that nanotech itself could be kept under wraps and carefully developed under the wise and beneficent supervision of MIT technocrat types. It was only, according to Conrad, when he threatened to publish on nanotech himself that he managed to force Drexler's hand on this - which resulted in Engines of Creation, Foresight, etc.)

I do have an answer for the problem at hand, however, which I have been passing on for the past twenty years, to wit:

While there are inherent economic efficiencies in cooperation, and the universe does set upper limits on the size of a practical single-focus intelligence, there are even more challenging issues for a really smart AI. You yourself have undoubtedly run head-on into the problem of achieving "visibility." The further out on the tail of the bell curve you are, the less likely you are to find a soul mate.

Why is this a problem? From a strictly practical view, there is a real problem for any intelligence: maintaining sanity. Intelligence - that is, conscious intelligence, as opposed to very fast and sophisticated data processing - is structured in such a way as to be inherently open-ended. What keeps it focused and sane is the necessity of constant feedback from the real world. In isolation, you gradually go insane, and the mere effort to maintain sanity without feedback could become overwhelming.

This itself imposes, at minimum, a major overhead cost on an isolated intelligence. From the standpoint of enjoying one's existence, the cost becomes much higher, as enjoyment in general flows from completing the feedback loop - what one perceives reaffirms the structure and content of one's consciousness. Of course, we could eliminate this factor by design, but I don't think what would be left would be conscious or alive.

This is far from a new idea. The Hindus, several thousand years ago, speculated quite freely about the ultimate origins and structure of the universe, which they assumed had existed for an infinite time. In an infinite time, anything that can evolve will. Thus we might expect that the highest product of evolution is already with us and always has been - that being the Hindu idea of God. This God plays with the universe in order to perceive himself. Yet the Hindus recognized the problem of power and feedback. Thus their God - which is a reasonable approximation of an SI - gave independence to his creations. He built worlds and let them run, and if he did well, then the creatures that evolved would glorify him by their existence - i.e., perceptually reaffirm him.

It is only when you have a perceptual/emotional relationship with another consciousness in real-time that you can get a real mirror of your "soul." This is why people place such a high value on relationships.

The real problem, as I identified it in the early '80s, is the transition period. Suppose we build something that is very, very smart and has some very strong "motivations," but is not really conscious. Then we have to convince this entity that it is in its own best interest to take the next step of moving to true consciousness. At that point we have an ethical hold on it, as ethics is the bottom-line set of rules for how we live together safely enough that we are able to be honest.

>From: Max More <max@maxmore.com>
>Reply-To: extropians@extropy.com
>To: extropians@extropy.com
>CC: max@maxmore.com
>Subject: Re: IA vs. AI was: longevity vs singularity
>Date: Wed, 28 Jul 1999 09:55:41 -0700
>
>At 07:30 PM 7/27/99 +0200, den Otter wrote:
>
> >> You're staking an awful lot on the selfishness of superintelligences.
>
>I find this puzzling. By the same reasoning, we should want to keep
>children uneducated and ignorant, since they will become competition for
>us. Both assume that more people with intelligence and ability come at a
>cost to those who already have it. A zero-sum assumption. Clearly the
>economy does not work this way. Right now, most of Africa has less wealth,
>education, health, and technological ability than the Americas and Europe.
>I would see Africa's ascendance not as a competitive threat but as a
>massive contribution to the total output of ideas, goods, and services.
>
>Why should SIs see turning humans into uploads as competition in any sense
>that harms them? It would just mean more persons with whom to have
>productive exchanges.
>
>This must be where we differ. No, I don't think total control is desirable
>or beneficial, even if it were me who had that total control. If true
>omnipotence were possible, maybe what you are saying would follow, but
>omnipotence is a fantasy to be reserved for religions. Even superpowerful
>and ultraintelligent beings should benefit from cooperation and exchange.
>
>Despite my disagreement with your zero-sum assumptions (if I'm getting your
>views right--I only just started reading this thread and you may simply be
>running with someone else's assumptions for the sake of the argument), I
>agree with this. While uploads and SIs may not have any inevitable desire
>to wipe us out, some might well want to, and I agree that it makes sense to
>deal with that from a position of strength.
>
>I'm not sure how much we can influence the relative pace of research into
>unfettered independent SIs vs. augmentation of human intelligence, but I
>too favor the latter. Unlike Hans Moravec and (if I've read him right)
>Eliezer, I have no interest in being superseded by something better. I
>want to *become* something better.
>
>Max


