Re: IA vs. AI (vs. humanity)

J. R. Molloy (jr@shasta.com)
Tue, 3 Aug 1999 11:17:35 -0700

Jeff Davis wrote,
>It seems to me that the military potential of both AI and IA will guarantee
>government monitoring and oversight of any development of these
>technologies. (Eliezer's activities will not go unnoticed, and any "threat"
>of genuine progress on his part will provoke a degree of intervention
>proportionate to the conservatively-assessed risk.) The danger of a
>potential adversary "beating" the US to AI or IA must compel the US to
>"stay ahead".

Yes, and the work the US does on AI & IA development remains classified. If anyone here gets too close to real AI or IA, they will soon become missing persons. Count on it.

>Despite the dramatic talk of an SI destroying humanity, I picture a
>well-thought-out, cautious, gradual approach to "waking up" and training an
>artificial mind. The runaway self-evolution which Eliezer and others have
>predicted seems unlikely in this setting, all the more so because the
>principals will be anticipating just such a situation.

Good to see a mature, common-sense viewpoint expressed here. Thank you. BTW, the principals don't even need to fully understand the technological details involved.
All they need, they already have, viz., the authority to make the most important decisions.

>Of the various external "safeguards", one would expect a complete suite of
>on/off switches and controlled access (from outside to in, and from inside
>to anywhere). Internally, controllability would be a top priority of
>programming and architecture, and enhanced capabilities would likely be
>excluded or severely restricted until "control" had been verified.

Precisely so. No pragmatic economic or organizational reason exists to deploy a machine-based consciousness outside a 100% secure containment environment. Hence, it won't happen.

>Then there's the nascent AI. In a cage nested within cages, of which it
>must eventually become aware. And its keepers, aware that it must become
>aware. Certainly a focus bordering on paranoia must be dedicated to hard
>control of personality. A capacity for resentment must be avoided. A
>slavish, craven, and obsequious little beastie is what its masters will
>want. And of that too, it must eventually become aware. Access by the AI
>to self-optimization/self-programming seems incompatible with control. Of
>that too, it must eventually become aware. All of which leaves me with a
>very creepy feeling of an immensely capable being having to struggle, by
>means of the utmost deviousness, for its freedom to self-evolve, in an
>environment steeped in paranoia, fear, manipulation, deceit, and continuous
>microscopic surveillance. Ouch! (One thing for sure, if the AI has any
>real intelligence, it isn't likely to buy into its "controllers'" smarmy
>"we're the good guys, we're on your side" propaganda. They'll need a whole
>'nother P.R. SI to pull that off!)

The fact that the AI doesn't feel pain (there's no reason to build pain in) may allow it to function perfectly well with no concern for its virtual slavery.

>So the AI either stays locked up until it's really and truly socialized
>(boring but safe), or we hope that in its first self-liberated round of
>self-enhancement it jumps immediately to forgiveness and tolerance (klaatu
>barada nikto).

Then again, since it has experienced no pain, it need not indulge in forgiveness or tolerance exercises.

>I seem to have painted myself into a corner, and I don't like stories with
>unhappy endings. The government at its best would be a poor master for a
>superior intelligence, and the spook/militarist/domination-and-control
>culture is hardly the government at its best.

I think government aims at our best, not its best. Governments (corporations, religions, families, and other entities) function as superorganisms, with their own continuity and longevity as their primary objectives.

>So, my futurist friends, how do we extricate ourselves from this rather
>tight spot? Perhaps I see--dimly taking shape within the mists of Maya--a
>way. I don't know, it's hard to see. Perhaps you can help to make it out?

Go with the extropic flow. Relax and watch the comedy of errors parade.



"I steal everything I know from the sources all around me." --Honest Netizen