Re: Why would AI want to be friendly? (Was: Congratulations to Eli, Brian ...)

From: xgl (xli03@emory.edu)
Date: Tue Sep 05 2000 - 07:30:29 MDT


On Tue, 5 Sep 2000, Zero Powers wrote:

>
> So, I guess I have no fear of the AI being malignant like the “Blight” in
> Vinge’s _Fire Upon the Deep_, but I can’t see how it is that we expect it
> to give a hoot about our puny, little problems, or even to be “friendly”
> to us.
>

        i see no reason that an SI (the kind that eliezer envisions,
anyway) would experience anything so anthropomorphic as gratitude. we
are talking about an engineered transcendent mind, not a product of
millions of years of evolution -- no parents, no siblings, no
competition, no breast-feeding.

        as eliezer points out in his various writings, if such a mind does
anything at all, it would be because that action was objectively _right_ --
not because it feels good, and not as a result of coerced behavior (i.e.,
evolutionary programming). thus, even if the SI is the best alternative
for the human race, i would still approach it with fear and trembling.

-x



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:37:11 MDT