Re: Yudkowsky's AI (again)

Dan Fabulich (daniel.fabulich@yale.edu)
Wed, 24 Mar 1999 18:54:51 -0500

At 09:09 AM 3/24/99 -0600, you wrote:
>I'm not going to go over this again, mostly because the old section on
>Interim Goal Systems is out of date. I'll just say that the IGS
>actually doesn't make any assumption at all about the observer-relevance
>or observer-irrelevance of goals. The AI simply assumes that there
>exists one option in a choice which is "most correct"; you may add "to
>the AI" if you wish. Even if it doesn't have any goals to start with,
>observer-relevant or otherwise, this assumption is enough information to
>make the choice.

It's enough for the AI to make the choice, but "most correct to the AI" is not the same as "most correct for me" if subjective meaning is true. And contrary to your earlier claims, I DO have goals with value if subjective value is true, and those values may well be contrary to the AI's values.

Again, it seems to me that if subjective meaning is true, then I can and should oppose building a seed-AI like yours until I, myself, am some kind of power. What's wrong with this argument? It seems that this argument, if sound, annihilates your theory about the interim meaning of life.

-Dan

     -IF THE END DOESN'T JUSTIFY THE MEANS-
               -THEN WHAT DOES-