Re: AI and Asimov's Laws

Dan Fabulich (daniel.fabulich@yale.edu)
Wed, 24 Nov 1999 22:08:28 -0500 (EST)

'What is your name?' 'Delvieron@aol.com.' 'IT DOESN'T MATTER WHAT YOUR NAME IS!!!':

> I've never been able to figure out exactly what this intro of yours
> means.

<shrug> It amuses me. :)

> I believe we may be thinking of different things when we use the term
> upgrading. I am talking about being able to change the physical parameters
> of how the "brain" works in order to improve function, as opposed to being
> able to add information and remember optimal strategies that are within the
> current parameters. Even humans are not yet able to really upgrade our
> intelligence... optimize it, yes, but nothing that would increase it
> substantially. If we could, then we would have less concern about bootstrap
> AIs, because we would be bootstrapping humans. Think about it. Has there
> been any improvement in humans today over, say, humans in Hellenic Greece?

Actually, I don't think we mean different things when we use the term "upgrading," but rather different things when we use the term "laws." In Asimov's short stories, the robot CANNOT break the rules. It's not that robots merely tend not to break the rules, the way humans tend not to break laws except for a few psychopaths and criminals.

Indeed, your analogy to humans is telling: while humans do generally tend not to murder one another (though even this may be argued), a great many of them still do anyway.

Sure, we could probably program in a tendency to follow the laws, but not a very strong one, and NO tendency so strong that all or even most individuals couldn't work around it given only a tiny bit of thought.

Think we can't overcome our instinct to reproduce? Think contraception.

Or maybe you meant idiot savants after all?

> BTW, great e.e. cummings quote.

thx

-Dan

-unless you love someone-
-nothing else makes any sense-

e.e. cummings