Re: Why would AI want to be friendly?

From: Samantha Atkins (samantha@objectent.com)
Date: Mon Sep 25 2000 - 22:46:42 MDT


"J. R. Molloy" wrote:
>
> Samantha Atkins writes,
>
> > I do get this. And yet it still seems that even if we can't convince
> > them it's coming, or get them to really understand what that means
> > (though who knows whether they could be convinced?), we owe it to
> > them to do our best to make sure they aren't run over and that they
> > are actually benefited (as a minimum).
>
> "we owe it to them"? And what do they owe us? What have they done (lately) to
> promote and extend the cognitive sciences? What have they done to advance and
> enlarge the store of human knowledge? Never mind... Forget them... let's ask
> ourselves what *we've* done to bring the world closer to TS.

I would point out that some of those people raised us, fed us, and
educated us; on and on it goes. How many generations of people
(scientists, mundanes, and so on) do you think this civilization and
this level of science rest upon? Do you think all of those human lives
and human dreams, whether they are like yours and as scientific or not,
are invalid and pointless so long as you get what you want? Unless the
TS answers and fulfills all those dreams (or better), it is a rip-off.

>
> > Well, the extremes are both unlikely. I would suspect it is somewhere
> > in between; it will make its own mistakes. But I find it very unlikely
> > that the masses will not be manipulated against those "selfish,
> > egotistical scientists" who let this thing loose in their midst.
>
> Who will do the manipulating? ...and how will they manage to manipulate "the
> masses" more effectively than the AI can?
>

Do you think the AI will spring full-grown out of the mind of Eliezer
(god help us)? Do you think it will automagically know how to win
friends and influence people? Do you think there is zero time between
serious work starting on the AI and its being together enough to fix
everything? Personally, I think there is a non-zero interval during
which we "mere" humans will need to do our best to keep this species in
more or less one piece. And I think that doing so is every bit as
important as building the Singularity. I would not be surprised if it
turns out to be crucial to the success of the entire project.
 
> > Sure. But what of the humans who will increasingly be out of work
> > and feel (and be) even more redundant? How will you organize society
> > so that these folks get taken care of, so that they don't see this
> > as a tremendous threat and possibly the end of their own means of
> > survival?
>
> It's called capitalism. Buggy whip makers had to adjust to the invention of the
> automobile, and tomorrow's workers (especially knowledge workers) will need to
> adjust to the invention of AI.
>

We are not talking about any such civilized rate of change at all. As
automation climbs the skill ladder, there is nowhere left to go for
successive tiers of workers (displaced largely in order of the IQ bell
curve). If you think it can't happen to you or to me, then I don't
think you are paying attention. Capitalism plus hand-waving is not an
answer.
 
> > If that is and continues to be *all* that makes the world go 'round,
> > then we all end up on the trash-heap of history in the very short
> > run. A great motivation to do the R&D, isn't it?
>
> Not the way I see it. Since money is what makes the world go 'round, best get
> to making plenty of it. That's the very best motivation to do the R&D that
> will enable you to make the most of it. (Max More seems to have grok'd this
> long ago.)
>

And the day after full nanotech or transhuman AI arrives, almost all of
your money and holdings are worthless. I think you are missing the
pressing need to have some idea of how to reform society and economics
before we get to that day, and even at various quite mundane points
along the way. Please tell me if I am wrong, and convince me if you
can. I would very much like to believe the world is as simple as some
seem wont to see it.

- samantha


