Re: Paradox--was Re: Active shields, was Re: Criticism depth, was Re: Homework, Nuke, etc..

From: xgl (xli03@emory.edu)
Date: Sat Jan 13 2001 - 13:50:37 MST


        okay, i'm going to try my own spin on this -- feel free to slap me
out of it if i get carried away.

        this is kind of reminiscent of proof by induction.

        * given two mental states, the more friendly state is more likely
          than the less friendly state to be followed by a friendly state
          -- i.e. the rules of mental state generation are such that a
          friendlier instantaneous state is more likely to lead to a
          friendlier trajectory of states (a rough formalization follows
          this list)
        * if, at any instant, an ai is more friendly than a human being,
          then the ai is also more likely to be friendly in the future
          than the human being -- i.e. we should trust the ai more than
          the human.
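
        one way to pin down the first point (the notation here is my
own sketch, not part of the original argument): write F(s_t) for the
friendliness of the mental state at time t. then a monotone rule of
state generation says that, for every threshold c,

    F(s_t) \ge F(s'_t) \;\Longrightarrow\;
    \Pr\bigl[ F(s_{t+1}) \ge c \mid s_t \bigr] \ge
    \Pr\bigl[ F(s'_{t+1}) \ge c \mid s'_t \bigr]

i.e. the friendlier state today stochastically dominates the less
friendly one in tomorrow's friendliness -- which is exactly the kind
of property an induction step can propagate from one instant to the
next.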

        so the argument would take the following route. first, show that
the rules of ai mental state generation satisfy the first point above.
then show that human mental state generation also satisfies the first
point above (if not, the argument ends here). next, show that the ai
friendliness function grows at least as fast as the human friendliness
function. finally, demonstrate an ai that is friendlier than a human.
then we don't need to wait forever to find out whether the ai really
stays friendlier than the human; we can just use induction to
extrapolate the comparison out to infinity.
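
        to see the shape of the argument concretely, here's a minimal
toy simulation (everything in it -- the numeric friendliness scale,
the drift and noise parameters, the starting values -- is an
illustrative assumption, not anything established by the argument):

import random

def step(f, gain=0.02, noise=0.1):
    # monotone transition rule: the expected next state increases
    # with the current friendliness f; the noise is symmetric, so a
    # friendlier state shifts the whole next-state distribution up.
    return f + gain * f + random.gauss(0.0, noise)

def final_friendliness(f0, steps=200):
    # follow one trajectory of mental states from starting value f0
    f = f0
    for _ in range(steps):
        f = step(f)
    return f

random.seed(0)
trials = 5000
# hypothetical starting points: the ai begins friendlier than the human
ai = [final_friendliness(1.0) for _ in range(trials)]
human = [final_friendliness(0.5) for _ in range(trials)]
wins = sum(a > h for a, h in zip(ai, human)) / trials
print("mean ai friendliness:   ", sum(ai) / trials)
print("mean human friendliness:", sum(human) / trials)
print("fraction of runs where the ai ends friendlier:", wins)

        the toy only illustrates the inductive shape of the claim, of
course; the hard part of the real argument is showing that actual ai
and human state generation satisfy the monotonicity premise at all.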

-x


