Re: Why would AI want to be friendly?

From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Fri Sep 29 2000 - 02:52:46 MDT


J. R. Molloy writes:

> Smart but poor people go to work for rich dumb people all the time. Henry Ford
> (or was it Edison?) reportedly said that the key to his success was hiring
> people who were smarter than him.
 
Do you work for rabbits? For daisies? What can you give a >H AI that
the >H AIs can't get for themselves?

And why do I have to repeat that argument? Isn't it obvious?
 
> Are we both projecting inner feelings onto our fantasy creatures?
 
No. I want them to be friendly, just as you do. However, in my
extrapolations I attempt to preserve a trace of rigour, relying on a
few assumptions: the limits of computational physics and the
persistence of evolution (imperfect reproduction in the face of
limited resources).

I don't know what assumptions you base your extrapolations on,
because you have never stated them explicitly. All I see is that you
pick out the facts that suit you and ignore the others, shifting
positions on a post-to-post basis. In case you're wondering, this
does not exactly lend credibility to your arguments. It is also
generating a lot of noise on the list, so I will not continue this
indefinitely.
 
> > Unfortunately, the method which makes the AI powerful automatically
> > makes it less than friendly. If it's friendly, it's less than useful.
>
> I don't see that at all, perhaps because I have no use for unfriendly genius.

Of what relevance is this to the issue at hand? I have no use for
idiots or haemorrhagic viruses either; nevertheless, there are
plenty of them out there. Evolution must love them, 'cause it made
so many of them.

> Then again, the perfect intelligent robot would be friendly to its owner and
> unfriendly toward others, rather like a guard dog.

Then again, why not a bandersnatch?

Everybody knows a unicorn's horn can scratch steel, and that a
unicorn can be made docile by having it lay its head in a virgin
maiden's lap. So let's go hunt some unicorns; there must be some out
there in the foothills.
 
> So, what is your position? You think roboticists should be forbidden to make
> robots that are too smart?

I think code that directly mutates machine instructions (including
virtual machine instructions) using evolutionary algorithms should
be considered dangerous, and hence regulated. Enforcement should
receive progressively higher priority as the resources in single
installations go up thanks to Moore's law. AI@home-type projects
using the above technologies should be outlawed. Research facilities
using the above techniques should be permanently reviewed on how
they handle data carriers (regardless of what is on those carriers)
and on strict offline quarantine. Data carrier traffic must be
one-way: from the outside to the inside only. This means the cluster
you do research with must be located in a physically secure,
permanently offline facility. Decommissioned components must be
destroyed onsite (e.g. hard drives and flash memories, using
thermite). Etc. etc. etc.
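
For concreteness, here is a toy sketch (purely illustrative, in
Python, and nothing like actual research code) of what I mean by
evolutionary mutation of virtual machine instructions:

# Toy illustration only: evolving byte strings interpreted as
# instructions for a tiny stack VM, using random point mutations
# and truncation selection toward an arbitrary numeric target.
import random

OPS = 4  # opcodes: 0=PUSH1, 1=ADD, 2=DUP, 3=MUL

def run(program):
    # Interpret a byte string as instructions for a minimal stack machine.
    stack = []
    for op in program:
        if op == 0:
            stack.append(1)
        elif op == 1 and len(stack) >= 2:
            stack.append(stack.pop() + stack.pop())
        elif op == 2 and stack:
            stack.append(stack[-1])
        elif op == 3 and len(stack) >= 2:
            stack.append(stack.pop() * stack.pop())
    return stack[-1] if stack else 0

def mutate(program, rate=0.05):
    # Point-mutate individual (virtual) machine instructions.
    return bytes(random.randrange(OPS) if random.random() < rate else op
                 for op in program)

def evolve(target=42, pop_size=50, length=20, generations=200):
    population = [bytes(random.randrange(OPS) for _ in range(length))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda p: abs(run(p) - target))
        survivors = population[:pop_size // 5]  # keep the best fifth
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return population[0]

if __name__ == "__main__":
    best = evolve()
    print("best program:", list(best), "->", run(best))

Scale the population, the instruction set and the fitness function
up far enough and you no longer know what the surviving programs do;
the mutation operator works below the level of any source language.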

Notice that the above is not designed to contain AI, just to contain
the infectious code you generate during AI research. You definitely
do not want that roaming the networks of the near future, which have
no artificial immune systems.

I don't think physical solutions for fragmenting the Net are
globally enforceable (and they are also likely to be misused), and
hence I'm not suggesting that here.

Research into engineered biological pathogens and molecular
autoreplicators capable of operating in a free environment (this
includes space) should also be similarly regulated. I would
personally think the best locations for these would be way outside
Earth's gravity well, in really, really good containment. Something
you could rapidly deorbit into the Sun would seem a good place. (A
nuke would probably not be safe.)

Whether this will do any good is open to question, but at least
you'll be plugging some holes and buying precious time. Reducing the
number of potential nucleation sites reduces the probability of a
nucleation event over a given period of time, as long as it doesn't
give people stupid ideas. (Many teenagers would be thrilled to do
classified research from their bedrooms.)

If I knew someone was about to succeed in building a >H AI or a gray
goo autoreplicator before we're ready for it, despite a universal
moratorium, I would nuke him without a second thought.


