> I would argue that I can program/evolve an AI agent to screen
> SPAM (a pseudo-AI activity) and it would have NO motivation for
> dominating me.
> The *fundamental* question is *what* is "intelligence"?
> If it is the ability to re-write your own program then I would
> argue that *most* people in the world today are under the mark!
> If it is the ability to walk, and talk, etc. "like" someone
> generally considered to be "intelligent" then computers may
> very well ascend to that level with *simulations* of the
> motivations that humans have. So long as the expression
> of those motivations is constrained we are relatively safe.
I think this misses the distinction between a clever but domain-specific, single-purpose algorithm (like a SPAM screen) and AI. I'm sure you could develop something that does a pretty good job based on word frequency, the sender's address, use of capitalization, and so forth. But this is Eliza-style stuff; you're not doing AI, and talking about its motivation is misguided. I don't see any analogy here to systems capable of learning from experience and acting autonomously and effectively across a broad range of novel situations.
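For concreteness, the kind of domain-specific screen described above might look like the sketch below: score a message on word frequency, the sender's address, and capitalization. The word list, domain list, weights, and threshold are all invented for illustration, not a real filter.

```python
# Illustrative spam screen: all lists, weights, and thresholds are
# assumptions made up for this sketch.
SPAM_WORDS = {"free", "winner", "offer", "click", "prize"}
BLOCKED_DOMAINS = {"spam.example"}

def spam_score(sender: str, body: str) -> float:
    """Return a score in [0, 1]; higher means more spam-like."""
    words = body.split()
    if not words:
        return 0.0
    score = 0.0
    # Word frequency: fraction of words on the spam list.
    score += sum(w.lower().strip(".,!") in SPAM_WORDS for w in words) / len(words)
    # Sender's address: penalize known-bad domains.
    if sender.rsplit("@", 1)[-1] in BLOCKED_DOMAINS:
        score += 0.5
    # Capitalization: fraction of fully upper-case words.
    caps = sum(1 for w in words if w.isalpha() and w.isupper())
    score += 0.3 * caps / len(words)
    return min(score, 1.0)

def is_spam(sender: str, body: str, threshold: float = 0.3) -> bool:
    return spam_score(sender, body) >= threshold
```

Useful, maybe, but exactly the point: the whole behavior is determined top-down by its fixed rules, so nothing here learns, and nothing here could have a motivation.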
The fundamental nature of intelligence is, in my view, the central question of AI research. It isn't the ability to rewrite your own program - though that might turn out to be a useful implementation technique. And it isn't any particular task, like walking or talking. It goes far beyond these to the ability to learn completely new skills and develop behaviors totally unanticipated by its designer.
>> Eliezer's pointed out the incoherence of believing you can hard-wire
>> high-level beliefs or motivations and I quite agree.
> This depends entirely on whether the intelligence is designed
> (a) "top down" or in a constrained "bottom up" fashion; or
> (b) in an unconstrained "bottom up" fashion. It is (b) that you
> have to worry about.
> (b) goes to a chat room and convinces an unsuspecting human to
> download a program that executes on a nonsecure computer, breaks
> into the secure network enveloping (b), and bypasses security
> protocols, allowing (b) to remove the locks preventing self-modifications
> ultimately detrimental to the human race. Bad, very very bad.
But a learning system will, by its very nature, behave in a bottom-up fashion - the fact that an intelligence is constantly integrating new data and adding new faculties to its repertoire means behavior emerges, rather than being determined from its original program in a top-down sense.
And you can't prevent self-modification without crippling the machine. In the learning case, it's like not allowing a child to read or attend school out of fear of the thoughts it'll put into his head. Self-modification is a constant process - we are always adjusting our world model. Hard-coding human veto power would be like requiring that you ask my permission to remember, every time you come to a conclusion that might influence your future behavior.
>> Perhaps we
>> guide the development of an AI's value system in the same way.
> I think you have to go further than that and build in at a fundamental
> level that self-modifications require the approval of sentient beings.
> Attempting to self-modify goals/constraints without external "agreement"
> should involve the equivalent of putting your hand in fire and keeping
> it there.
But don't you think you could hold your hand in a fire if the stakes were high enough?
> Anything else would be like continuing on the current status quo for
> millions/billions of years and treating an alternate intelligent
> species evolved by nature with the attitude of "isn't that nice",
> not realizing of course that it was more intelligent than we
> are and as soon as its numbers became great enough we would simply
> be "cold meat".
Well - whatever. What's so great about survival anyway, and why do we get so sentimental about our own species? Not to sound too much like a black-sweater nihilist, but, seriously, why does it matter?