Re: >H ART: The Truman Show

Anders Sandberg (asa@nada.kth.se)
23 Jun 1998 19:41:08 +0200


den Otter <neosapient@geocities.com> writes:

> The fact that our brains happen to work with certain emotions doesn't
> automatically mean that this is the *only* way to achieve intelligence.

I agree with this. The big question is what invariants are needed to
achieve intelligence, and if emotions are among them, whether they are
specific just to intelligences with a certain kind of
environment/evolutionary past or just one possibility among many.

> Well, I *don't want* AIs to resemble humans with their complex emotion-
> based value systems. We need obedient servants, not competition. So if
> it turns out that you *can* have intelligence without a "will", then
> that should be used to make useful "genie-AIs". If this isn't possible,
> it might be better to make no AIs at all.

But even the humility of humble servants is an emotion. I understand
what you mean, and I agree that "genie-AIs" would be very useful
(humans could in fact provide them with the emotional-motivational
framework they might need). On the other hand, I'm not that convinced
that emotional AI is bad; I do not think it is an unmanageable
problem.

> > > Also, one of the first things you
> > > would ask an AI is to develop uploading & computer-neuron interfaces,
> > > so that you can make the AI's intelligence part of your own. This would
> > > pretty much solve the whole "rights problem" (which is largely
> > > artificial anyway),
> >
> > What do you mean, the rights problem is "artificial"?
>
> It means it's not an "absolute" problem like escaping the earth's
> gravity, or breathing under water etc. It is only a problem
> because certain people *make* it a problem, just like there
> was no real "drug problem" before the war on drugs started.

Aha, so we transhumanists are creating the "death problem" by not
simply accepting that people die young? That people die, destroy their
lives with drugs, or demand rights are real, objective facts. Since we
(or at least the people involved) do not like this, we try to do
something about it. This means we regard the current situation as a
problem and try to find a solution (which might be more or less good,
of course). All problems are to some extent artificial, or more
properly, subjective to intelligent entities.

> Unless the AIs start demanding rights themselves,
> there is no reason to grant them any. If we're smart, we'll
> make the AIs so that they'll never feel the need to do this.

Which in itself is an interesting ethical problem. Suppose a certain
form of education created citizens who did not feel a need for rights;
would it be ethical for a dictator to use it?

> > > since you don't grant rights to specific parts
> > > of your brain. A failure to integrate with the AIs asap would
> > > undoubtedly result in AI domination, and human extinction.
> >
> > This seems to be an unjustified assumption. All other forms of life in the
> > world haven't died off with the coming of human beings.
>
> No, but many *have* died. Besides, although clearly inferior, humans
> would probably still be a (potential) threat to SI (after all, they
> have created it, or at least its predecessors). Just like early man
> hunted many dumber, yet clearly dangerous predators to extinction,
> the SIs might decide to do the same. Just to be on the safe side.
> After all, they won't be dependent on the humans in any way.

Actually, as I outlined in my essay, they might very well be dependent
on or linked to humans for simple economic reasons. The law of
comparative advantage clearly suggests that it would be more
beneficial for the SI to trade with the humans than not; both sides
can profit by specializing and depending on the other.
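
Just to make the gain concrete, here is a toy Ricardian sketch in
Python (all productivity figures are invented for illustration, not
taken from the essay): even if the SI is better at producing everything
in absolute terms, total output rises when each side specializes in
what it is relatively best at and trades for the rest.

    # Toy comparative-advantage example. The SI has an absolute
    # advantage in both goods, yet specialization plus trade yields
    # more of both goods to share. All numbers are invented.

    HOURS = 10.0  # hours available to each party

    # units produced per hour: (good_x, good_y)
    si_rate = (10.0, 8.0)    # the SI is faster at both
    human_rate = (1.0, 4.0)  # humans are relatively better at good_y

    def output(rate, hours_on_x, total=HOURS):
        # (x, y) produced when hours_on_x is spent on x, the rest on y
        return (rate[0] * hours_on_x, rate[1] * (total - hours_on_x))

    # Autarky: each party splits its time evenly.
    si_alone = output(si_rate, 5.0)        # (50.0, 40.0)
    hum_alone = output(human_rate, 5.0)    # ( 5.0, 20.0)
    total_alone = (si_alone[0] + hum_alone[0],
                   si_alone[1] + hum_alone[1])   # (55.0, 60.0)

    # Specialization: the SI shifts toward good_x, humans make only good_y.
    si_spec = output(si_rate, 7.0)         # (70.0, 24.0)
    hum_spec = output(human_rate, 0.0)     # ( 0.0, 40.0)
    total_spec = (si_spec[0] + hum_spec[0],
                  si_spec[1] + hum_spec[1])      # (70.0, 64.0)

    print("no trade:   ", total_alone)
    print("specialized:", total_spec)  # more of both goods

With 70 units of x and 64 of y to divide there is a split that leaves
both parties at least as well off as in autarky, which is exactly why
destroying a trading partner also destroys value for the SI.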

If humans are a potential threat, then the SI would also be less
motivated to wipe them out, since the danger of attempting it would be
much larger. If a second-strike capability exists, then the SI would
be hurt even if it succeeded in wiping out the humans. And for the
attack to be rational, the benefit (safety) must outweigh the negative
utility (resources spent on the attack, the risk of losses or death,
and the loss of trading opportunities). Simple game theory.
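
To make that comparison explicit, a back-of-the-envelope expected
utility calculation (every probability and payoff below is an
arbitrary placeholder; only the structure of the argument is the
point):

    # Expected utility of "attack the humans" vs "keep trading".
    # All numbers are arbitrary placeholders for illustration.

    p_success = 0.9          # chance the attack removes the threat
    safety_gain = 100.0      # value of removing the potential threat
    attack_cost = 30.0       # resources spent on the attack
    p_second_strike = 0.5    # chance of a human second strike
    second_strike_damage = 120.0
    trade_value = 80.0       # discounted value of continued trade

    eu_attack = (p_success * safety_gain
                 - attack_cost
                 - p_second_strike * second_strike_damage
                 - trade_value)          # -80.0: trade is lost either way

    p_humans_attack = 0.05   # residual risk if the humans are left alone
    threat_damage = 100.0
    eu_trade = trade_value - p_humans_attack * threat_damage   # 75.0

    print("attack:", eu_attack)
    print("trade: ", eu_trade)

With these particular made-up numbers trading clearly dominates; the
interesting question is for which parameter ranges the sign flips, and
whether an SI would ever find itself in that region.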

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y