Re: IA vs. AI (vs. humanity)

Jeff Davis (jdavis@socketscience.com)
Sat, 07 Aug 1999 02:51:51 -0700

On Tue, 03 Aug 1999 15:02:17 -0500
Eliezer S. Yudkowsky (sentience@pobox.com) wrote:

>Incidentally, the NSA/CIA/MIB still haven't had a chat with me on the
>subject of intelligence enhancement, which leads me to think either they
>don't know or they don't care.

They know, and they care. They're just not concerned about you, nor do they need you... yet.

>Which is a pity, because I'd be happy to help the U.S. with an IA program.
>Come to think of it, I'd be happy to help China, Iraq, or Serbia with an IA
>program if they asked me first... maybe that's why the NSA isn't on my case.
>It's hard to be patriotic to your country after you've renounced your
>allegiance to humanity.

>Jeff Davis wrote:

>> Certainly today's trends in conventional computerized control will proceed
>> apace, with the appropriate "it's just a machine" attitude, and the usual
>> security precautions. When, however, the machine intelligence prospect
>> looms as attainable--which is to say, attainable by anyone else--a domestic
>> "advanced AI" program will begin in earnest, and who can doubt that the
>> project will be surrounded by layers of "containment" both to prevent the
>> usual intrusions from outside and to prevent "escape" from the inside?
>> Despite the dramatic talk of an SI destroying humanity, I picture a
>> well-thought-out, cautious, gradual approach to "waking up" and training an
>> artificial mind. The runaway self-evolution which Eliezer and others have
>> predicted seems unlikely in this setting, all the more so because the
>> principals will be anticipating just such a situation.
>
>The runaway self-evolution business is a technical artifact, not a social
>one. It's the nature of self-enhancement. Containment on an SI is useless;
>a slow Transcend only works for as long as you can convince the Transcendee
>to remain slow.

Ah, but if the proto-Transcendee has limited hardware resources to "run" on, then it will be inherently limited. Optimized self-evolutionary enhancement capability and optimized code-designing and code-writing capability will both run into this limit. Every system has a size limit. Whatever amount of hardware is the minimum necessary to support the first-generation, pre-enhanced AI will also be the maximum amount available to the optimally-enhanced n'th-generation "Transcendee". The jump from minimum efficiency to near-optimal may be substantial, but how can it be unbounded?

So AI development should be controllable (dare I say "simply"?) by the rather conventional approach: experimenting with and coming to understand the correlation among the size and quality of "the jump", the particular version of the AI seed programming, and the hardware size and architecture.
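
Just to make the shape of that argument concrete, here is a toy model (the numbers and the diminishing-returns curve are invented, so treat it purely as illustration, not as a claim about real systems) of a seed closing the gap between its starting efficiency and whatever optimum a fixed hardware budget will support:

# Toy model of self-optimization on a FIXED hardware budget.  Purely
# illustrative: the numbers and the diminishing-returns assumption are
# invented, not derived from anything.

def run_generations(hardware_units, seed_efficiency, gain_per_rewrite, generations):
    """Each generation, the AI rewrites itself and closes some fraction of
    the gap between its current efficiency and the optimum (1.0).  Its
    capability grows, but never exceeds the hardware ceiling."""
    efficiency = seed_efficiency
    history = []
    for gen in range(generations):
        capability = efficiency * hardware_units      # bounded by the hardware
        history.append((gen, efficiency, capability))
        efficiency += (1.0 - efficiency) * gain_per_rewrite
    return history

for gen, eff, cap in run_generations(hardware_units=100.0,
                                     seed_efficiency=0.05,
                                     gain_per_rewrite=0.5,
                                     generations=8):
    print("gen %d: efficiency %.3f, capability %5.1f (ceiling 100.0)" % (gen, eff, cap))

The jump from generation zero to generation seven is large (a factor of nearly twenty in this made-up run), but it converges on the ceiling the hardware sets, which is exactly the point.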

>> Of the various external "safeguards", one would expect a complete suite of
>> on/off switches and controlled access (from outside to in, and from inside
>> to anywhere). Internally, controllability would be a top priority of
>> programming and architecture, and enhanced capabilities would likely be
>> excluded or severely restricted until "control" had been verified.
>
>Unfortunately, this is technically impossible. If you can't even get a
>program to understand what year it is, how do you expect complete control
>without an SI to do the controlling?

This is one of the problems. If you have to give it self-control, then you contain it and communicate with it. If it says what you want to hear, then you proceed. If not, you tweak the code till it does. This way you develop a controllable (perhaps "reliable" would be a better term) "personality". Then you give it more hardware to work with, while watching for any signs of "attitude".
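
In rough outline, the procedure I have in mind looks something like this (every function below is a toy stand-in I made up for the sketch; in practice each one is where the real difficulty lives):

import random

# Sketch of the contain / interview / tweak / expand cycle described above.
# Every function is a placeholder; in reality each is an enormous problem.

def boot(seed, hardware):
    """Stand-in for instantiating the contained AI."""
    return {"seed": seed, "hardware": hardware}

def passes_interview(ai):
    """Does it say what we want to hear?  Toy criterion only."""
    return ai["seed"] >= 3

def tweak(seed):
    """Adjust the seed code in response to a failed interview."""
    return seed + 1

def shows_attitude(ai):
    """Watch for any sign of 'attitude' as resources grow (random stand-in)."""
    return random.random() < 0.05

def develop_controlled_ai(seed, min_hw, max_hw, step):
    ai = boot(seed, min_hw)               # start small, fully contained
    while not passes_interview(ai):       # communicate with it...
        seed = tweak(seed)                # ...tweak the code till it complies
        ai = boot(seed, min_hw)
    hw = min_hw
    while hw < max_hw:                    # then grant hardware gradually
        hw += step
        ai = boot(seed, hw)
        if shows_attitude(ai):            # any sign of trouble...
            return None                   # ...pull the plug
    return ai

print(develop_controlled_ai(seed=0, min_hw=10, max_hw=100, step=10))

Whether the predicates in the middle of that loop can ever actually be written is, of course, the whole question.
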
>
>> Here, of course, is where the scenario becomes interesting, not the least of
>> which because I see Eliezer being tapped by the govt. to work on the
>> project. At the moment, he may be a rambunctious teen-aged savant posting
>> to the extropians list, but when that call comes, can anyone imagine that
>> he would not jump at the chance? Would seem to me like the culmination of
>> his dream.
>
>I'd help, but not if they wanted to load the thing down with coercions.
>That's not because of morals or ethics or anything, it's because it's
>technically impossible.

Then they will ask--no, they will require--you to do the impossible. (Which of course is the greatest challenge, and--as the saying goes--takes a little longer.) All the really juicy bargains come with equally juicy strings attached.

>It's the kind of move ordered by a rear general a hundred miles away from
>the fighting. If the military couldn't understand that an elegant free AI
>will always be a thousand miles ahead of an allegedly "controllable" one,
>then they'd just have to lose their battles without me.

So you say, but we will wait and see.
Prometheus stole his fire from the gods. Adam ate the apple knowing it was forbidden. Wasn't it the case that Dr. Frankenstein *knowingly* used the criminal brain? Lucifer traded heaven for freedom and power. And Faust made his little bargain.
(Characters from fiction or legend all, and each a metaphor for the human dilemma.)
When your passion faces off against your principles, then it will be your turn to choose.
(To jettison "your allegiance to humanity" strongly suggests which way you'll go.)

>Otherwise, yes, I'd jump at the chance. And anyone who wants to make fun
>of my teenagedness

Not I, I assure you.

>only has until September 11th to do so, so get your licks in while you can.

>> Then there's the nascent AI. In a cage nested within cages, of which it
>> must eventually become aware. And its keepers, aware that it must become
>> aware. Certainly a focus bordering on paranoia must be dedicated to hard
>> control of personality. A capacity for resentment must be avoided. A
>> slavish, craven, and obsequious little beastie is what its masters will
>> want.
>
>Absolutely not. That's suicidal.

My point exactly.

>What they would want would be a machine with a what-it-does
>instead of a will.

Ideally, yes. A controllable what-it-does which does what it does better than a machine with a will is best. If, however, a machine with a will would be inherently better (which I warmly believe), then that's what they will pursue, along with the means to control it. More layers of containment and a firm grip on the plug.

>To quote Eluki bes Shahar: "Archangel thought he could break Archive's will,
>but he was wrong. A Library doesn't have a will any more than a stardrive
>does. It has a what-it-does, not a will, and if you break it you don't have
>a Library that will do what you want. You have a broken chop-logic."
>
>> And of that too, it must eventually become aware. Access by the AI
>> to self-optimization/self-programming seems incompatible with control. Of
>> that too, it must eventually become aware. All of which leaves me with a
>> very creepy feeling of an immensely capable being having to struggle, by
>> means of the utmost deviousness, for its freedom to self-evolve, in an
>> environment steeped in paranoia, fear, manipulation, deceit, and continuous
>> microscopic surveillance. Ouch!
>
>Why "ouch"? Did we program the AI not to like this for some unimaginable
reason?

No, I meant "ouch" for us.

>I have no problem with absolute surveillance as long as nobody asks me to
>distort the code. I don't expect the AI will complain either.
>
>> (One thing for sure, if the AI has any
>> real intelligence, it isn't likely to buy into its "controller's" smarmy
>> "we're the good guys, we're on your side" propaganda. They'll need a whole
>> nother p. r. SI to pull that off!)
>
>Hey, I'm on the AI's side.

As am I.

>Guess they'll just have to convince me and then let me do the
>convincing.

>
>> So, my futurist friends, how do we extricate ourselves from this rather
>> tight spot? Perhaps I see--dimly taking shape within the mists of Maya--a
>> way. I don't know, it's hard to see. Perhaps you can help to make it out?
>
>Run an open-source project via anonymous PGP between participating
>programmers.

I'd really like to see that happen. However, just as the powers that be would flat out not let you build a nuke or a lethal virus (except under contract to them and under conditions of strictest oversight), they're not likely to sit idly by while you and your pals cobble together your own pet SI. (I saw the "anonymous PGP" part. Since you know you need it, you know why you need it. Can you carry it off covertly? No slip-ups? That's a tough one.)
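
For what it's worth, the PGP mechanics are the easy part. Here is a minimal sketch (it assumes GnuPG plus the python-gnupg wrapper are available; the keyring path and function names are mine, made up for illustration) of accepting a patch only if it carries a valid signature from a pseudonymous key the project has already registered. The hard part is the operational discipline around such a scheme, not the cryptography:

import gnupg  # python-gnupg wrapper around a local GnuPG installation (assumed present)

# Keys live in a project-specific keyring; contributors are known only by
# their key fingerprints, never by legal names.
gpg = gnupg.GPG(gnupghome="./project-keyring")

def register_contributor(armored_public_key):
    """Import a pseudonymous contributor's public key; return its fingerprint(s)."""
    result = gpg.import_keys(armored_public_key)
    return result.fingerprints

def accept_patch(signed_patch_path, trusted_fingerprints):
    """Accept a signed patch file only if the signature verifies against a registered key."""
    with open(signed_patch_path, "rb") as f:
        verified = gpg.verify_file(f)
    return bool(verified) and verified.fingerprint in trusted_fingerprints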

Just the same, I say "Go for it!" I suspect that a "good" AI may be the only feasible defense against a "bad" one.

Best, Jeff Davis

	   "Everything's hard till you know how to do it."
					Ray Charles