From: Rafal Smigrodzki (rafal@smigrodzki.org)
Date: Thu Jul 24 2003 - 12:25:55 MDT
Robin wrote:
> On 7/23/2003, Rafal Smigrodzki wrote:
>> I remember reading about an experiment on the growth of a colony of
>> mice in a limited space. ... the mice became too crowded ... After a
>> few more months all of them died. ... Let's say that a reliable
>> modeling method predicts that the existence of more than N
>> independent volitional systems (minds with independent,
>> self-modifiable goals) within a volume of space inevitably results
>> in a destructive chain reaction of internecine warfare. In that
>> case, a coalition of minds might form, perhaps aided by techniques
>> for assuring transparency of motives, to keep the population below
>> N, perhaps a very low number, much less than the physical carrying
>> capacity of the substrate. If the coalition fails, all minds die
>> like stupid mice. ... I don't think this is the case; I tend to
>> think that expansion and proliferation will be decisive for
>> successful survival at superhuman levels of intelligence as well,
>> but one never knows.
>
> I agree your scenario is logically possible, but it would need quite
> a bit more detail to make it plausible. If it were true, I'd expect
> the creation of N large "borgs", which still use up most of the
> physical carrying capacity. And note that this scenario is one of
> straight evolutionary selection, and not really of "self-directed
> evolution".
### I think there would be an element of both: straight evolutionary
selection (at the level of groups of beings collectively succeeding or
failing at preventing death by fratricide), and self-directed evolution (at
the level of individuals deciding to remove or suppress parts of their
volitional systems and to join the coalition).
To add detail: suppose that once nanotechnology develops a bit more (;-),
it would be possible for any individual to produce a doomsday weapon with
little effort, maybe a nucleus of ice-9. In that case, the probability of a
civilization with N individuals surviving for another day would be P = l^N,
where l is the probability that the average individual does not make the
fateful choice on a given day. Since l < 1, P shrinks exponentially as N
grows, so for sufficiently large N the survival of the civilization would be
very unlikely. You can decrease the risk by increasing l (transparency,
choice of stable mental architecture), or by limiting N.
Today's proponents of technological relinquishment are advocates of
increasing l (by limiting the technical abilities of individuals), but at a
large enough N even a very high l offers little reassurance. This is very
similar to the game of nuclear MAD played between two superpowers vs. the
same game among a few dozen smaller nations.
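To make the arithmetic concrete, here is a minimal sketch in Python of how
fast P = l^N collapses as N grows. The specific values of l and N below are
purely illustrative assumptions of mine, not derived from any real model:

    import math

    def daily_survival(l: float, n: int) -> float:
        """Probability that nobody makes the fateful choice today: P = l^N."""
        return l ** n

    def expected_days(l: float, n: int) -> float:
        """Expected days until the first fateful choice.

        The daily destruction probability is q = 1 - l^N, so the
        waiting time is geometric with mean 1/q.
        """
        q = 1.0 - daily_survival(l, n)
        return math.inf if q == 0.0 else 1.0 / q

    # Illustrative numbers only. Even a very high l offers little
    # reassurance at large N: at one fateful choice per billion
    # person-days, a population of ten billion expects destruction
    # within about a day.
    for l, n in [(0.999999, 10_000),
                 (0.999999, 10_000_000),
                 (0.999999999, 10_000_000_000)]:
        print(f"l={l}, N={n:,}: P(day)={daily_survival(l, n):.6f}, "
              f"expected days ~ {expected_days(l, n):,.1f}")

The approximation P ~ exp(-(1-l)*N) makes the same point: survival requires
keeping (1-l)*N well below 1, which can be done either by driving l toward 1
or by capping N.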
The success of the strategy of limiting N would, of course, depend on the
ability of the minds forming the coalition to deny the use of physical
resources to potential defectors. However, whether this denial would take
the form of incorporating the resources Borg-wise into the coalition members
themselves, or merely monitoring the resources with the ability to detect
and destroy any disallowed use, is hard to predict.
I agree: it *is* quite difficult to decide which future is plausible without
knowing the technical details.
Rafal