Re: Darwinian Extropy

Anders Sandberg (
Tue, 10 Sep 1996 18:47:19 +0200 (MET DST)

On Mon, 9 Sep 1996, Robin Hanson wrote:

> There seem to be two different scenarios here.
> 1. The intelligence in a star system may at some point be no longer
> interested in increasing computational ability.
> 2. The other is that discount rates may make colonization not
> cost-effective.
> On 1. The effective net values of a star system will be composed of
> the values of its many components. Even if most of these components
> suddenly no longer desire computation ability, some components surely
> will. And some components will surely value exploration and
> colonization for its own sake. Even within a group the size of this
> list we have a wide variety of goals regarding our futures.

Of course, this assumes non-homogeneous systems, but I think we can safely
assume that. Communication lags will make a distributed mind very slow unless
its component parts have much autonomy. Most likely we will get a situation
similar to a human brain: a possible system metamind, rather slow but setting
the global goals, consisting of many smaller minds (which could be Jupiter
brains), which in turn consist of smaller and smaller minds.
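The lag argument is easy to quantify: any exchange between components is bounded by round-trip light delay. A rough sketch (distances are standard astronomical values; the scenarios chosen are illustrative):

```python
# Round-trip light delays at various scales, to show why a distributed
# mind spanning a star system (or several) must grant its parts autonomy.

C = 299_792_458.0       # speed of light, m/s
AU = 1.495978707e11     # astronomical unit, m
LIGHT_YEAR = 9.4607e15  # metres in one light year

def round_trip_seconds(distance_m):
    """Time for a signal to travel to a component and back."""
    return 2.0 * distance_m / C

scenarios = [
    ("Earth-Moon (3.84e8 m)", 3.84e8),
    ("1 AU", AU),
    ("outer solar system (~40 AU)", 40 * AU),
    ("nearest star (~4.2 ly)", 4.2 * LIGHT_YEAR),
]
for label, d in scenarios:
    print(f"{label}: round trip {round_trip_seconds(d):.3g} s")
```

Across a solar system the round trip is hours; between star systems it is years, so a galaxy-spanning metamind could only "think" on timescales of millennia.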

Even if the metamind decides it has grown enough, it is not certain the
subminds will agree (even when we consciously understand that we have
eaten enough, our hypothalamic nuclei don't necessarily agree). But
spreading to another system would be more akin to siring a child;
the delays would make it a very separate mind. One possibility is of
course a galaxy of system minds joined into a super-slow metamind, but
this would take a very long time to realize.

I don't see why subminds wouldn't set out on their own to grow into
system minds.

Anders Sandberg Towards Ascension!
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y