Re: IA vs. AI was: longevity vs singularity

den Otter (neosapient@geocities.com)
Mon, 2 Aug 1999 00:54:04 +0200



> From: Max More <max@maxmore.com>

> Why should SI's see turning humans into uploads as competition in any sense
> that harms them? It would just mean more persons with whom to have
> productive exchanges.

When a certain level of personal power is reached, the costs of competition start to outweigh the benefits. This, I believe, will be the case with true SI (see below).

> >Total control is even better. The SI wouldn't rest before it had
> >brought "everything" under its control, or die trying. Logical, don't
> >you think?
>
> This must be where we differ. No, I don't think total control is desirable
> or beneficial, even if it were me who had that total control. If true
> omnipotence were possible, maybe what you are saying would follow, but
> omnipotence is a fantasy to be reserved for religions. Even superpowerful
> and ultraintelligent beings should benefit from cooperation and exchange.

I find it extremely hard to imagine how something that can expand and modify its mind and body at will could ever need peers to cooperate with. If an SI can't entertain itself, it isn't a real SI, and when it runs into some obstacle it can simply manufacture more computing modules and/or experiment with new thought structures.

I think it's fair to assume that an SI would be essentially immortal, so there's no need to hurry. Even if there's such a thing as the end of the universe, it would still have billions of years to find a solution, which is ample time for even a human-level intelligence. Needless (or perhaps not) to say, an SI would never be "lonely" because a) it could, and no doubt would, drop our evolution-imposed urge for company, that urge having outlived its usefulness, and b) it could simply spawn another mind child, or otherwise fool around with its consciousness, taking as much (or as little) risk as it wanted, should it ever feel like it.

The above pushes the value of peers into the "0" zone, i.e. makes it neutral. But it doesn't stop there...

Other SIs could have completely different goals, goals which might include harm to, or even the destruction of, the original SI. Also, their very existence would mean fewer resources for everyone (assuming resources aren't unlimited), which could at some point seriously limit the SI's development.

Now the value of peers has become negative (say, -1). Rational entities always seek a positive value (1 -- let's call it "eternal bliss"), so obviously they'll try to limit the number of (potential) competitors.

The SI doesn't really have to be "omnipotent" to be fully autonomous; simply being "very powerful" (with features such as those mentioned above) will suffice. Cooperation (in the form of societies, economies, etc.) is by definition something for the weak and limited, like us humans for example.
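Just to make the toy arithmetic above explicit, here's a rough sketch (in Python, for lack of a better blackboard) of the comparison I have in mind. The probabilities and costs are made-up placeholders, not claims about real SIs; only the sign of the result matters.

# Toy payoff comparison for the "value of peers" argument above.
# All figures are illustrative placeholders: peers offer ~0 benefit
# on their own (no cooperation is needed), carry some chance of
# hostile goals, and consume shared resources either way.

def value_of_peers(p_hostile, harm, resource_cost):
    """Expected value to the SI of tolerating peers.

    p_hostile     -- assumed probability a peer turns hostile
    harm          -- payoff lost if that happens
    resource_cost -- payoff lost to resource competition regardless
    """
    benefit_of_cooperation = 0.0   # the argument above puts this at ~0
    return benefit_of_cooperation - p_hostile * harm - resource_cost

# Any positive hostility risk or resource cost pushes this below 0
# (the "-1" above), while a lone SI scores 0, so a value-maximizer
# prefers to have no (potential) competitors.
print(value_of_peers(p_hostile=0.1, harm=1.0, resource_cost=0.05))  # -0.15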

> Despite my disagreement with your zero-sum assumptions (if I'm getting your
> views right--I only just started reading this thread and you may simply be
> running with someone else's assumptions for the sake of the argument), I
> agree with this. While uploads and SI's may not have any inevitable desire
> to wipe us out, some might well want to, and I agree that it makes sense to
> deal with that from a position of strength.

Exactly; just to be on the safe side, we should only start experimenting with strong AI after having reached trans/posthuman status ourselves. If you're going to play God, better have His power. Even if I'm completely wrong about rational motivations, there could be a billion other reasons why an SI would want to harm humans.

> I'm not sure how much we can influence the relative pace of research into
> unfettered independent SIs vs. augmentation of human intelligence, but I

We won't know until we try. Nothing to lose, so why not? It's *definitely not* a waste of time, as Eliezer (who has a different agenda anyway) would have us believe.

> too favor the latter. Unlike Hans Moravec and (if I've read him right)
> Eliezer, I have no interest in being superseded by something better. I
> want to *become* something better.

I saw an interview with Moravec the other day in some Discovery Channel program about (surprise, surprise) robots. He seemed, yet again, sincerely convinced that it's somehow right for AIs to replace us, that the future belongs to them and not to us. He apparently finds comfort in the idea that they'll remember us as their "parents", an idea shared by many AI researchers, afaik. Well, personally I couldn't care less about offspring, artificial or biological; I want to experience the future myself.