Re: Human intelligences' motivation (Was: Superintelligences' motivation)

Anders Sandberg
Thu, 30 Jan 1997 14:40:55 +0100 (MET)

On Thu, 30 Jan 1997, Max M wrote:

> We don't need advanced AI or IA but just plain simple exponential growth to
> give extremists and minorities access to weapons of mass destruction.
> Somehow we need to change the goals and motivations of these dangerous
> minorities.

This is a big problem, but changing people's goals and motivations is
very hard, most likely much harder than nanotech or AI. Convincing *many*
people that causing mass destruction is bad is quite possible and has
obviously worked; the trouble is that those who would cause it usually
have memes that shield them from "foreign propaganda" or "satanic lies".

> But it only takes one madman
> with the recipe for Gray Goo to destroy the world.
> Currently there are enough of them to go around. :-(

Exactly. And the problem will get worse as technology advances. Since the
power of destructive devices is increasing, the price of any technology
tends to fall once it is developed, and the percentage of madmen seems to
remain nonzero, the expected number of madman-induced disasters will
increase. In short: sooner or later even local loonies will be able to
play with gengineering anthrax bacteria.

The problem is that we cannot get rid of the "loonies" entirely; we can
decrease their number through education, brainwashing, niceness nanites
in the drinking water and whatnot, but we cannot bring it down to zero -
that would imply that everyone was rational, tolerant and devoted to the
continued existence of Homo sapiens and its descendant species. Let's not
forget incompetence ("Oops! I forgot to add a halt instruction to the
replicator!") and differing values ("Everyone will thank us for removing
the revenge instinct").

So, what other possibilities are there? You can of course try to halt the
growth of technology, but that will most likely cause other, very
dangerous effects. You can try to prevent the development of nasty
inventions, but these are hard to predict beforehand (who in the
nineteenth century could tell what investigations of the fluorescence of
some heavy minerals would lead to?). You can try to monitor everyone and
everything to prevent loonies from doing something nasty, but that will
not always work, and you end up with "Who watches the watchmen?". Another
possibility would be an automated "immune system" against nanotech, but
even if it worked well, there are always other dangers.

Most likely the only stable way of preventing widespread disaster is

> With a future where there's a risk of a techno elite holding the power,
> there's a big chance of unsatisfied masses instead of minorities, with an
> abundance of unsatisfied "losers" willing to press the button in the hope
> of some kind of chance.

True. Spreading education and optimism is a good idea for many reasons,
not just this one.

Anders Sandberg Towards Ascension!
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y