Re: Why would AI want to be friendly?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Sep 28 2000 - 00:40:40 MDT


Damien Broderick wrote:
>
> At 10:53 AM 27/09/00 -0400, Eliezer wrote:
>
> >Phylum radiation is
> >cognitively plausible only if the SI possesses an explicit drive for
> >reproduction and diversification at the expense of its own welfare.
>
> This seems a rather odd thing to read from a fan of mutation-with-selection
> accounts of lebenforms.

I'm a fan of *what*? Either you have me confused with 'gene or this is
Damienspeak for "evolutionary psychology".

> Presumably there are a couple of suppressed
> premises here: that SIs would never choose to copy themselves, or if they
> did choose to do so they'd have absolutely perfect reliable error-checking,
> forever.

Both of these sound much more plausible to me than the alternatives. Even if
distinct nodes need to run separate decision-making mechanisms, the use of
identical algorithms can ensure that there'll never be a major conflict.
Infinitesimal conflicts about the third decimal place can be resolved by
compromise or flipping a coin, as opposed to general warfare between the nodes
of a single mind. That's the way I'd set things up. Even if a
superintelligence needs multiple components due to lightspeed limitations, the
result isn't a society, it's a multicellular organism. (But without the
possibility of cancer.)
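
To make the kind of resolution mechanism I have in mind concrete, here is a
toy sketch in Python; the tolerance, the averaging rule, and the coin-flip
fallback are purely illustrative assumptions, not a design:

    import random

    TOLERANCE = 1e-3  # disagreement "in the third decimal place"

    def resolve(estimates):
        """Merge the nodes' utility estimates for one candidate action.

        Every node ran the *same* decision algorithm; any divergence
        comes from slightly different world-models, not different goals.
        """
        if max(estimates) - min(estimates) <= TOLERANCE:
            # Infinitesimal conflict: compromise by averaging.
            return sum(estimates) / len(estimates)
        # Anything larger falls back to an explicit tie-breaker (a coin
        # flip among the estimates), not to warfare between the nodes.
        return random.choice(estimates)

    def decide(world_models, actions, utility):
        """Each node scores every action; the scores are then merged."""
        merged = {action: resolve([utility(wm, action) for wm in world_models])
                  for action in actions}
        return max(merged, key=merged.get)

The point is only that the merge rule is fixed in advance; nothing in it
treats the nodes as separate agents with separate interests.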

Perfectly reliable error-checking doesn't look difficult, if you're (a)
superintelligent and (b) willing to expend the computing power. And imperfect
error-checking (or divergent world-models) isn't a disaster, or even a
departure from the multicellular metaphor, as long as you anticipate the
possibility of conflicts in advance and design a resolution mechanism for any
conflicts in the third decimal place that do show up.
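
For the "expend the computing power" part, the garden-variety version is
just redundant computation with a vote. A toy Python sketch, with the
replica count and the hashing chosen purely for illustration:

    import hashlib

    def compute_reliably(compute, data, replicas=5):
        """Run the same computation on several replicas and return the
        majority answer.  Each added replica buys another large factor
        against an undetected error slipping through."""
        results = [compute(data) for _ in range(replicas)]
        digests = [hashlib.sha256(repr(r).encode()).hexdigest() for r in results]
        winner = max(set(digests), key=digests.count)
        return results[digests.index(winner)]

In pure software the replicas will trivially agree; the scheme only earns
its keep against hardware-level faults, which are exactly the kind of error
you can drive as close to zero as you're willing to pay for.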

That we live in a world of reproduction and imperfect error-checking
says nothing whatsoever about how things work in the realm of design. I think
the Foresight Guidelines would have a few words to say about the assumption
that, say, nanomachines need to reproduce, or that they can't have absolutely
perfect reliable error-detection.

Assuming that *unintentional* phylum radiation takes place requires assuming a
stupid SI - what we can see, an SI can see and prevent. Assuming intentional
phylum radiation requires the intention. I stand by my statement.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
