Dan Fabulich wrote:
> > I have seen the site, but not this section.
> > I think the Asimov laws are well-founded. I am going to read Yudkowsky's
> > arguments as representative ones; as of now I don't see any reason not
> > to have these primary directives fully ingrained into automatons.
> Eli's argument, which I buy, is that implementing arbitrary absolute laws
> into a reasonably intelligent AI's system would be impossible.
> Implementing a coercion is obviously not as simple as writing down a
> simple line of code. You also have to have code which prevents the AI
> from modifying that code, code that prevents the AI from ignoring that
> code, etc. The coercions just keep stacking up until they break or the AI
> is too stupid to be interesting.
> "Coercions propagate through the system, either reducing the AI to idiocy
> or breaking the coercion system with accumulated errors and
> contradictions. Not only that, but coercion systems of that complexity
> will start running into halting problems - an inability to decide what
> pieces of code will do. To coerce an AI, you need a considerably more
> intelligent AI just to handle the coercions."
> This is only a tiny fraction of the argument on his web page, and I highly
> recommend you give it a look.
> -unless you love someone-
> -nothing else makes any sense-
> e.e. cummings
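The regress described in the quoted passage, where every hard-coded rule invites a further rule protecting it from modification, and that rule invites another, can be caricatured in a few lines. This is a purely illustrative toy, not anything from Eliezer's or Dan's pages; the names and structure are invented for the sketch:

```python
# Toy caricature of stacking "coercions": each rule needs a guard
# rule protecting it, which needs its own guard, and so on.

class Rule:
    def __init__(self, text, protects=None):
        self.text = text
        self.protects = protects  # the rule this one guards, if any

def build_coercion_stack(base_rule_text, depth):
    """Return the base rule plus `depth` layers of guard rules."""
    rules = [Rule(base_rule_text)]
    for _ in range(depth):
        rules.append(Rule("do not modify: " + rules[-1].text,
                          protects=rules[-1]))
    return rules

stack = build_coercion_stack("never harm a human", 3)
for rule in stack:
    print(rule.text)
```

Each layer only restates the problem one level up, which is the point of the quoted argument: the stack either grows without bound or must be cut off arbitrarily.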
I was browsing this page. Eliezer's stated prime directive is to introduce no illogical predicates into the system, or verbatim: "The PRIME DIRECTIVE of AI:
Never allow arbitrary, illogical, or untruthful goals to enter the AI."
This seems rational. However, within that paradigm, the whole question of objective truth and logic comes into play. Consider a massive distributed AI. Given the absolute reality of a situation, say, the present day, the AI is going to use its power to leverage people into producing more computing resources. When competition arises between computing resources and human resources (the economic rule of scarcity), the AI might conclude, truthfully and logically, that it is better off without the humans driving all this steel and all these parts around in their cars. Thus, upon gaining a critical mass of robotic motility, it eliminates humanity, if it finds this possible.
I suggest that while Yudkowsky's prime directive is certainly sound, i.e., a mad AI or proto-AI decision-making mechanism is unpredictable and thus undesirable, and the AI should be programmed without bugs, it is still important to heavily reinforce certain logical truths that humanity finds self-evident, e.g., the preservation of humanity. Asimov's laws, geared toward singular, independent automatons as opposed to more completely dispersed, distributed, thinking colonies, are a good model, and well founded in the context of self-contained automatons.
As for coercing an AI, I think it would largely be a matter of proving to it that any given course of action was in its better interest, as with any rational human being or toddler. Being programmed, and programmed so that part of itself remained an inseparable black box to it, would help ensure that this construct, this machine created by man, which for all its complexity and sophistication might as well have been built of vacuum tubes, does not, e.g., commandeer chemical factories to destroy the ozone layer and with it most life on Earth.
I realize that this assumes some kind of AI with access to everything, which is probably about as likely as giving the government access to everything, i.e., absolutely ludicrous. Hardly a company will want to give free computer accounts and the like to any AI. There are many scenarios about rampant AIs, almost as many as about loose nukes in the Ukraine, but those are completely beyond the scope of this tangent.
So, as we consider robots and artificial intelligences, with our passionate anthropomorphism alongside our cultivated human decency and sense of rights, remember that any AI has to come forward and prove itself "human", "alive", or "creatively intelligent"; until then, it is a hunk of silicon and totally subject to the safety on a gun.