The AI revolution (Was: Re: >H ART: The Truman Show)

den Otter (neosapient@geocities.com)
Tue, 23 Jun 1998 01:22:46 +0200


Anders Sandberg wrote:
>
> den Otter <neosapient@geocities.com> writes:
>
> > Since AIs will presumably be made without emotions, or at least with
> > a much more limited number of emotions than humans, you don't have
> > to worry about their "feelings".
>
> I think this is a fallacy.

I beg to differ.

> First, why would AI have no emotions or a limited repertoire? Given
> current research into cognitive neuroscience it seems that emotions
> are instead enormously important for rational thinking, since they
> provide the valuations and heuristics necessary for making decisions
> well.

Emotions are important for us because they help us survive. AIs
don't need to fend for themselves in a difficult environment; they
get all the energy, protection & input they need from humans and
other machines. All they have to do is solve puzzles (of biology,
programming etc.). If you program an AI with an "urge" to solve
puzzles (just as your PC has an "urge" to execute your typed
commands), it will work just fine. No (other) "emotions" are needed,
imo. It's like birds and planes: both can fly, only their methods
differ, and each has its specialties.
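To make that "urge" point concrete, here is a rough toy sketch in
Python (my own illustration, not anything from this discussion; every
name in it is hypothetical). The program's only "motivation" is the
puzzle-solving objective handed to it from outside; nothing resembling
fear, pain or self-preservation appears anywhere in it.

# Toy sketch: a "mind" whose sole drive is an externally supplied
# puzzle-solving objective. All names are hypothetical.
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

@dataclass
class Puzzle:
    candidates: Iterable[int]        # search space supplied by humans
    accept: Callable[[int], bool]    # success test supplied by humans

def solve(puzzle: Puzzle) -> Optional[int]:
    """The agent's entire 'motivation': find an answer the puzzle accepts."""
    for c in puzzle.candidates:
        if puzzle.accept(c):
            return c
    return None

# Usage: "which number squared is 49?" The agent answers, then idles.
print(solve(Puzzle(candidates=range(100), accept=lambda c: c * c == 49)))

A real AI would of course be vastly more complicated than this, but
the shape of its one and only drive could stay just as simple.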

> Secondly, even if they have fewer feelings than humans, why does
> that mean we can treat them as we like?

If it doesn't care about abuse, if it feels no pain or emotional
trauma, then there is no reason to worry about its well-being
(for its own sake). AIs can almost certainly be made this way,
with no more emotions than a PC.

> > Also, one of the first things you
> > would ask an AI is to develop uploading & computer-neuron interfaces,
> > so that you can make the AI's intelligence part of your own.
>
> This is the old "Superintelligence will solve every problem"
> fallacy.

It just has to solve a couple of problems. After that, we'll have all
the time in the world to work on the rest (as uploaded, supersmart/fast
immortals). The increased speed alone will undoubtedly result in some
impressive breakthroughs.

> If I manage to create a human-or-above-level AI living inside
> a virtual world of candy, it will not necessarily be able to solve
> real world problems (it only knows about candy engineering), and given
> access to the physical world and a good education its basic cognitive
> structure (which was good for a candy world) might still make it very
> bad at developing uploading.

If the AI has no emotions (no survival drive), there is no reason to
shield it from the world.

> > This would
> > pretty much solve the whole "rights problem" (which is largely
> > artificial anyway), since you don't grant rights to specific parts
> > of your brain.
>
> Let me see. Overheard on the SubSpace network:
>
> Borg Hive 19117632: "What about the ethics of creating those
> 'individuals' you created on Earth a few megayears ago?"
>
> Borg Hive 54874378: "No problem. I will assimilate them all in a
> moment. Then there will be no ethical problem since they will be part
> of me."
>
> I think you are again getting into the 'might is right' position you
> had on the posthuman ethics thread on the transhumanist list. Am I
> completely wrong?

Eternal truth: unless 19117632 or some other being can and wants to
stop 54874378, it will do what it bloody well pleases. Might is
the *source* of right. Right = privilege, and always exists within
a context of power. An SI may have many "autonomous" AIs in its
"head". For example, it could decide to simulate life on Earth,
with "real" players to make it more interesting. Are these simulated
beings, mere thoughts to the SI, entitled to rights and protection?
If so, who or what could force the SI to grant its thoughts rights
(a rather rude invasion of privacy)? How do you enforce such rules?
Clearly a difficult matter, but it always comes down to "firepower"
in the end (can you blackmail the other into doing something he
doesn't like? That's the question).

> > A failure to integrate with the AIs asap would
> > undoubtedly result in AI domination, and human extinction.
>
> Again a highly doubtful assertion. As I argued in my essay about
> posthuman ethics, even without integration (which I really think is a
> great idea, it is just that I want to integrate with AI developed
> specifically for that purpose and not just to get rid of unnecessary
> ethical subjects) human extinction is not a rational consequence of
> superintelligent AI under a very general set of assumptions.

It is rational because in the development of AI, just like in any
economic or arms race, people will sacrifice safety to get that
extra edge. If it turns out that a highly "emotionally" unstable
AI with delusions of grandeur is more productive, then there
will always be some folks who will build one. Terrorists or
dictators could make "evil" AIs on purpose, there might be
a freak error that sends one or more AIs out of control, or
they might simply decide that humans are just dangerous pests,
and kill them. There is *plenty* of stuff that could go wrong,
and in a time of nano-built basement nukes and the like,
small mistakes can have big consequences. Intelligence is
a powerful weapon indeed.

> Somehow I think it is our mammalian territoriality and xenophobia
> speaking rather than a careful analysis of consequences that makes
> people so fond of setting up AI as invincible conquerors.

That territoriality and xenophobia helped to keep us alive, and
still do. The world is a dangerous place, and it won't all of a
sudden become peachy & rosy when AI arrives. Remember: this isn't
comparable to the invention of gun powder, the industrial revolution,
nukes or the internet. For the first time in history, there will
be beings that are (a lot) smarter and faster than us. If this
goes unchecked, mankind will be completely at the mercy of
machines with unknown (unknowable?) motives. Gods, really.