Re: Why would AI want to be friendly?

From: Samantha Atkins (samantha@objectent.com)
Date: Sat Sep 30 2000 - 00:58:44 MDT


Eugene Leitl wrote:
>

> I think code directly mutating machine instructions (including
> virtual machine instructions) using evolutionary algorithms should be
> considered dangerous, and hence regulated.

WHAT!? Common optimization techniques mutate machine instructions.
Many types of systems write and execute code dynamically. A code
improvement tool might be considered dangerous under this stricture.
How do you delimit this? And do you really want to? Far more than
simple code rewriting is necessary to get anywhere near a dangerous
level of AI.
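
To make the point concrete: here is roughly what "evolutionary
algorithms mutating instructions" looks like in the mundane case. A toy
Python sketch; the instruction set, the fitness target, and all the
constants are invented purely for illustration:

import random

# Toy "instruction set": a program is a list of (op, constant) pairs
# applied to an accumulator. Mutation rewrites instructions at random;
# that is the same basic move any evolutionary optimizer over code makes.
OPS = ["add", "sub", "mul"]

def run(program, x):
    for op, k in program:
        if op == "add":
            x += k
        elif op == "sub":
            x -= k
        else:
            x *= k
    return x

def mutate(program):
    child = list(program)
    i = random.randrange(len(child))
    child[i] = (random.choice(OPS), random.randint(1, 5))
    return child

def fitness(program):
    # How closely does the program compute 3x + 7 on a few sample inputs?
    return -sum(abs(run(program, x) - (3 * x + 7)) for x in range(10))

population = [[(random.choice(OPS), random.randint(1, 5)) for _ in range(3)]
              for _ in range(50)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

print(population[0], fitness(population[0]))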

I have in mind, as a someday RSN project, something that takes source code
and applies a series of refactoring rules, design patterns, and such to
it in an evolutionary manner to produce code that is "better" along a
set of interesting dimensions. Under your scenario, this would be severely
regulated. Have I got that right?
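
In rough outline it would look something like the sketch below; nothing
exotic in the skeleton. Every rule and scoring metric here is a
placeholder, not the real thing. A real tool would apply genuine,
behavior-preserving refactorings and design-pattern transforms over an
AST rather than these toy string rewrites:

import random

# Toy stand-ins for refactoring rules: each is a function from source
# text to source text. Only the shape of the search matters here.
RULES = [
    lambda s: s.replace("== True", ""),
    lambda s: s.replace("    pass\n", ""),
    lambda s: s.replace("result = None\n", ""),
]

def score(source):
    # "Better" along a couple of crude dimensions: shorter, fewer branches.
    return -(len(source) + 10 * source.count("if "))

def apply_genome(source, genome):
    # A genome is just a sequence of rule indices applied in order.
    for rule_index in genome:
        source = RULES[rule_index](source)
    return source

def evolve(source, generations=100, population_size=30, genome_length=5):
    population = [[random.randrange(len(RULES)) for _ in range(genome_length)]
                  for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=lambda g: score(apply_genome(source, g)),
                        reverse=True)
        parents = population[:10]
        children = []
        for _ in range(population_size - len(parents)):
            child = list(random.choice(parents))
            child[random.randrange(genome_length)] = random.randrange(len(RULES))
            children.append(child)
        population = parents + children
    return apply_genome(source, population[0])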

> Data carrier
> traffic must be one-way: from the outside to the inside only. This
> means the cluster you do research with must be located in a physically
> secure, permanently offline facility. Decommissioned components must be
> destroyed onsite (e.g. hard drives and flash memories using
> thermite). Etc. etc. etc.
>

Way too heavy for all but the most intensive research directly germane to
major AI.

> Notice that above is not designed to contain AI, just to contain
> infectious code you generate during AI research. You definitely do not
> want that roaming the networks of the near future without artificial
> immune systems.
>

Well, we have things like agents roaming the web right now and some
tasks are not very tractable without them. Some of them can even fire
off new agents. Would that be covered in your rules?

 
>
> Research into engineered biological pathogens and free-environment
> (this includes space) capable molecular autoreplicators should also be
> similarly regulated. I would personally think the best locations for
> these would be way outside of Earth's gravity well, in a really, really
> good containment. Something which you could rapidly deorbit into the
> Sun would seem a good place. (A nuke would probably not be safe).
>

Regulated by whom? By your average politicos? That cure is worse than the
danger. You would think that at least off-planet research would not be
regulated so strictly.
 
> Whether it will be any good is open to question, but at least you'll
> be plugging some holes and buying precious time. Reducing the number
> of potential nucleation sites reduces the probability of a nucleation
> event over a given period of time, as long as it doesn't give people
> stupid ideas. (Many teenagers would be thrilled to do classified
> research from their bedrooms).
>

Actually, many quite mature programmers (myself included) are not at all
keen to have code-rewriting software locked down.

> If I knew someone is about to succeed in building a >H AI or a gray
> goo autoreplicator before we're ready for it, despite a universal
> moratorium I would nuke him without a second thought.

I thought you understood that moratoriums do not work.

- samantha


