Re: AI and Asimov's Laws

Delvieron@aol.com
Wed, 24 Nov 1999 20:45:36 EST

In a message dated 99-11-24 18:55:51 EST, you write:

<< 'What is your name?' 'Delvieron@aol.com.' 'IT DOESN'T MATTER WHAT YOUR
NAME IS!!!' >>

No, my name is Glen Raymond Finney. And it does matter to me<g>. I've never been able to figure out exactly what this intro of yours means. One thing I do know is that it's a little bothersome to be addressed with a yell. Now, on to what really matters.

<<I strongly doubt that anything we'd call "intelligent" could be built in a non-upgrading manner.>>

I believe we may be thinking of different things when we use the term upgrading. I am talking about being able to change the physical parameters of how the "brain" works in order to improve function, as opposed to being able to add information and remember optimal strategies within the current parameters. Even we humans are not yet able to really upgrade our intelligence: optimize it, yes, but nothing that would increase it substantially. If we could, we would have less concern about bootstrap AIs, because we would be bootstrapping humans. Think about it: has there been any improvement in raw intelligence between humans today and, say, humans in Hellenic Greece?
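To make the distinction concrete, here is a minimal toy sketch in Python (all names are my own invention, not anyone's actual design): the agent can learn freely within its parameters, but has no path at all to changing the parameters themselves.

    class FixedArchitectureAgent:
        """Can learn (adjust weights) but cannot upgrade: its
        architecture -- e.g. memory size -- is frozen at build time."""

        MEMORY_SLOTS = 128  # a fixed "physical" parameter

        def __init__(self):
            self.weights = [0.0] * self.MEMORY_SLOTS

        def learn(self, slot, delta):
            # Allowed: tune behavior within the preset parameters.
            self.weights[slot] += delta

        def upgrade(self):
            # Not allowed: nothing here can grow MEMORY_SLOTS or
            # rewrite how learn() works at run time.
            raise NotImplementedError("architecture is fixed")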

<<Would you call a thing intelligent if it could not
change its own behavior in response to stimuli?>>

Nope. But I'm not saying that it couldn't change its behavior in response to stimuli, only that its behaviors could be constrained to a preset range. Heck, most humans are constrained in the kinds of behaviors they will generate, based on personality traits. And it is very hard to modify personality in humans (not impossible, though; in that way we would have more flexibility than constrained AIs).

<<If it could not (at least apparently) revise its own beliefs in response to
what it observes?>>

I've unfortunately met some reasonably intelligent, closed-minded people in my life. It does tend to limit the full potential of their intellect, but does not change the fact that they are intelligent.

<<Imagine something like this trying to pass the Turing Test:

You: My favorite color is red. What's your favorite color?
AI: My favorite color is green.
You: What's my favorite color?
AI: I don't know.>>

This is not what I envision as a non-upgrading AI. First, a non-upgrading AI would have little or no conscious control over its own programming. It could respond to environmental stimuli, formulate behaviors based on its motivational parameters, and implement those behaviors. This is basically what humans do. Technically, such an AI could possibly learn about itself, figure out a way to improve itself if creative enough, then find some tools and do it (if it could remain active while making modifications). That would be no different from you or me. However, it might never do so if we program it to have an aversion to consciously tinkering with its internal functions except for repairs. This, in my estimation, would be a non-upgrading AI.
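A rough sketch of what I mean, again with invented names and no claim to being a real design: learning and acting stay unconstrained, while self-modification is vetoed by a preset motivational parameter unless it counts as a repair.

    class NonUpgradingAI:
        def __init__(self):
            self.knowledge = {}              # grows freely
            self.tinkering_aversion = 1.0    # preset, near-absolute

        def learn(self, fact, value):
            # Learning and behavior within parameters are unconstrained.
            self.knowledge[fact] = value

        def modify_self(self, component, is_repair=False):
            # Repairs pass; improvements are vetoed by the built-in
            # aversion, much as a personality trait vetoes behaviors.
            if is_repair:
                return f"repairing {component}"
            if self.tinkering_aversion >= 1.0:
                return "declined: aversion to tinkering with my internals"
            return f"upgrading {component}"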

Now then, an upgrading AI would likely start out with an intrinsic knowledge of its internal structure, perhaps even be conscious of how it processes information and able to change its internal architecture simply by willing it. This would be different from the way humans operate. More importantly, the upgrading AI would have a motivational drive to improve its capabilities, at least in the seed AI (for, of course, the upgrading AI can and would consider altering all of its functions, even the drive to improve).
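By contrast, a sketch of the upgrading case, same caveats as before: the AI starts with a map of its own internals and a drive to improve, and that drive is itself just another writable parameter.

    class UpgradingAI:
        def __init__(self):
            # Intrinsic self-knowledge: every parameter, including the
            # improvement drive, is visible and writable to the AI itself.
            self.parameters = {"memory_slots": 128, "drive_to_improve": 1.0}

        def introspect(self):
            # Conscious of how it is built.
            return dict(self.parameters)

        def improve(self):
            # The drive motivates changing internal architecture by fiat...
            if self.parameters["drive_to_improve"] > 0:
                self.parameters["memory_slots"] *= 2
            # ...and nothing stops it from rewriting the drive itself:
            # self.parameters["drive_to_improve"] = 0.0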

<<Analogies to Alzheimer's patients aside, we can quickly see what sort of
limitations "non-upgrading" AIs would be under. We might, at best, hope to build some kind of non-upgrading idiot savant, but not an A*I*.

The I is important. ;)>>

Intelligence is important, but so are inclination and ability.

<<-Dan>>

'What is your name?' 'Dan' 'It doesn't matter what your name is...or does it?'

  <<     -unless you love someone-
     -nothing else makes any sense-
            e.e. cummings  >>


BTW, great e.e. cummings quote.

Glen Raymond Finney