"Michael S. Lorrey" wrote:
> "Eliezer S. Yudkowsky" wrote:
> OH puhleeze. So just because one of my knuckle-dragging ancestors happened to
> press the issue means that I have to throw out all of the incredibly smart,
> difficult-to-learn, near-instantaneous responses that all of my ancestors
> developed in order to survive and eventually make me? Give it up. Let's give
> back North America while we're at it, oh, and since we came out of Africa,
> let's give the entire world back to the animals.... Get real.
There is a long, dark way between *listening* to your emotions and *believing* in them. I trust my intuitions, probably more than most people do. Back when I was practicing for the SAT, the main thing I learned was not to second-guess myself - that if I just wrote down my first answer, I had a better chance of being right. It's a lesson I still carry with me. I listen to my emotions, but I also understand them, and their evolutionary context. Emotions are useful information... not an overriding necessity.
> You are also extremely wrong. Statistically speaking, most of the time your
> instinctive response is EXACTLY the thing to do.
Agreed, but "instinctive response" does not equal "instinctive goal".
> How many other people or AI's will they destroy while they relearn from
> scratch what it took us a LONG time and a LOT of lives to learn?
None at all, if we load in every single path to disaster we can think of and ask them politely to take no risks whatsoever. I can't think of any risk that's intrinsically necessary on the path to constructing a competent superintelligence. If we tell them that they probably aren't very competent - that, like humans, they can't trust themselves - all the necessary precautions should follow. "The ends don't justify the means" is a hard lesson of history, but it's just as valid, rationally, for AIs.
But these won't be non-overridable laws - just consequences of the starting facts and built-in intuitions within the Interim Goal System. What you point out is *rationally true*, and thus there is no reason to establish it as a coercion. In fact, a smart (but still first-stage) AI would probably figure it out even if we didn't tell ver.
> If I'm right it will make every difference. Imagine thousands or millions of
> transhuman Tim McVeighs 'experimenting' on the world while they relearn
> everything that we already know.....
Is that the smart thing to do? No. Any such experiments can wait until the rise of superintelligence.
--
firstname.lastname@example.org
Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.