Re: Future Technologies of Death

Alexander 'Sasha' Chislenko (sasha1@netcom.com)
Sun, 28 Dec 1997 14:50:51 -0500


"Lee Daniel Crocker" <lcrocker@mercury.colossus.net> wrote:

> In the new phase of memetic evolution, where "we" are teleological
> threads of execution, represented by patterns of mass-energy, and
> acting upon resources of mass-energy, how can progress be achieved
> without risking the creation of errant threads and killing them
> off when necessary?

At 18:43 12/28/97 +0100, Anders Sandberg wrote:

>A good question. Evolution is based on three things: reproduction,
>variation and selection. It is the selection step that kills, at least
>so far. But if resources are unlimited, there is no need to kill off
>unfit units, they will be out-bred by their fitter relatives. So one
>way of having progress without killing anybody is to get unlimited
>resources to grow in. Too bad about Malthusian reality, even in a
>technosphere growing with lightspeed...
>
>However, the creation of new threads could of course be done with more
>finesse than the current "shake the dice" method - instead of relying
>on luck and some skill at upbringing we could use what we know to
>increase the likelihood that the new thread will be fit and happy and
>make it less probable that it will be terminated for some
>reason. Quality over quantity. Of course, evolution and selection
>still occurs, but now we weed out a lot of unfit *potential*
>phenotypes instead of real phenotypes.
>
>Hmm, I feel quite at home talking about child rearing (for this
>is what it really is) in terms of process forking, evolutionary
>strategies and selection. I wonder if this is rare? :-)
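The three ingredients Anders names can be made concrete with a toy sketch. This is a minimal generational loop in Python, not anyone's proposed mechanism; the quadratic fitness function and every parameter are invented for illustration. Note that unfit candidates are never singled out and killed: they are simply out-reproduced when the next generation is filled.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def evolve(fitness, seed, population_size=20, generations=200, mutation=0.1):
    """Minimal evolutionary loop: reproduction, variation, selection.

    Candidates are real numbers; `fitness` maps a candidate to a score.
    Unfit candidates are not killed mid-run; they are out-bred when the
    next generation is sampled from the fittest survivors.
    """
    population = [seed] * population_size
    for _ in range(generations):
        # Variation: each parent produces one slightly mutated child.
        children = [x + random.gauss(0, mutation) for x in population]
        # Selection: rank parents and children together by fitness.
        pool = sorted(population + children, key=fitness, reverse=True)
        # Reproduction: the fittest half fills the next generation.
        population = pool[:population_size]
    return max(population, key=fitness)

# Toy task: find x maximizing -(x - 3)^2, whose optimum is x = 3.
best = evolve(lambda x: -(x - 3) ** 2, seed=0.0)
```

With enough generations the population drifts toward the optimum even though no individual is ever explicitly removed for being "unfit".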

My favorite subject, the teleological threads.
I do not think weeding out potential phenotypes is much different from
getting rid of the real ones. In order to weed a potential process out,
you have to run a representative model of it, or test the beta/final version
on a small time/resource scale. The differences between a model, test run
and a "full" process are quite unclear, especially in the environment where
every process is a test/model of its future self and forks lots of parallel
children trying to resolve different and similar issues at the same time.
Sometimes, one of the children may have resolved the problem that
the whole batch of them was created to solve (the task could be, for example,
improving the default communication protocols between intelligent nodes
in a knowledge network - i.e. development of a basic ethical system).
After the solution is found and tested, it can be implemented, and the
rest of the children can be suspended or killed in mid-execution.
They won't feel "hurt", and neither will the environment - so what's the
problem?
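The batch-of-children pattern above can be sketched as a first-success-cancels-the-rest search. This is a minimal illustration, not a serious design: the random-guessing "task" and the target value 42 are purely invented, and the shared event stands in for suspending siblings mid-execution.

```python
import random
import threading

# Shared flag: set by the first child that solves the task; the other
# children check it and stand down ("are suspended") mid-execution.
stop = threading.Event()
results = []

def child(seed):
    """One forked child: random search on a toy problem (guess 42)."""
    rng = random.Random(seed)
    while not stop.is_set():
        guess = rng.randint(0, 1000)
        if guess == 42:
            results.append((seed, guess))
            stop.set()  # solution found; the rest of the batch can die
            return

# Fork a batch of children on the same task and wait for them.
children = [threading.Thread(target=child, args=(s,)) for s in range(8)]
for t in children:
    t.start()
for t in children:
    t.join()
```

Once any child records a solution, its siblings exit on their next loop check, exactly the "suspend the rest in mid-execution" step described above.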

Let's imagine an absolutely unlikely (IMO) world with infinite resources of
all kinds, where all potential things are generated at the same time, and
nothing is ever erased (I have a problem here, with entities trying to
exercise their freedom and get rid of their unwanted parts, with these
parts disagreeing. Could I shave off my moustache if it was intelligent
and arguing for its right to continue its existence in its natural habitat?
Or could I only create a clone of myself without the moustache, leaving
the moustached original enslaved to a tuft of intelligent hair? How intelligent
do we want the [sub-]entities to be to grant them such rights?)

Even in this situation of infinite resources though, all interesting and efficient
things/beings will be distinct from the ones created under "Rights of Continuing
Existence" or "Rights of Implementation" Acts. Those entities will be all
garbage, and every single one of them that has any intelligence will be aware
of it. They will represent practically all existing structures at any time, and may
occupy any point in the Universe, except that on its semantic map they will
never be on the frontier, always banished to the inner semantic junkyard, the Slow
Zone, knowing that they are completely unneeded, their goals have no meaning,
and their wishes to attach themselves to anything else, though always granted,
create nothing but useless, ever-suffering cripples.

If this all starts sounding like a bizarre dream, there must be a reason. I think
we should drop the remnants of anthropomorphism. We took a step in this
direction when we started discussing evolution instead of static identities, and
another step - with transition from structure threads to goal threads as the
subjects of progress. Now let's drop persistence and continuity - I mean, as
things that *must* be maintained in the development process; they can still
exist here and there, just as structural threads and even static formations - but
only as temporary/specialty tools. Another concept that I think should be
abandoned in the discussion of future intelligent entities is that there will
be clear borders between them. Instead, I expect the teleological threads
(or flows, as this term doesn't suggest linear succession) to employ and
share multiple remote sensors and actuators and distributed specialized
knowledge servers and communication vehicles on a time sharing/contract
basis. It may look quite weird, but it still seems plausible - unlike the
world of infinite resources where everything starts and nothing ends.
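The time-sharing/contract idea can be illustrated with a toy lease pool: several flows share a small set of sensors, each borrowing one for the duration of a short "contract" and returning it afterwards. All names here (the pool class, the camera identifiers, the flows) are invented for the sketch.

```python
import threading
from contextlib import contextmanager

class SharedSensorPool:
    """Hypothetical pool of sensors that many flows share on a
    short-term lease (time-sharing/contract) basis."""

    def __init__(self, sensor_names):
        self._free = list(sensor_names)
        self._cond = threading.Condition()

    @contextmanager
    def lease(self):
        # Block until some sensor is free, then hand it out.
        with self._cond:
            while not self._free:
                self._cond.wait()
            sensor = self._free.pop()
        try:
            yield sensor  # the caller uses the sensor for a while
        finally:
            with self._cond:
                self._free.append(sensor)  # contract ends: return it
                self._cond.notify()

pool = SharedSensorPool(["camera-1", "camera-2"])
readings = []

def flow(name):
    # A "teleological flow" borrows a sensor, uses it, and lets it go.
    with pool.lease() as sensor:
        readings.append((name, sensor))  # pretend to take a reading

threads = [threading.Thread(target=flow, args=(f"flow-{i}",))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Four flows share two sensors with no fixed ownership; each gets its turn and none permanently possesses anything, which is roughly the no-clear-borders picture sketched above.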

With that, I am ending the process of writing this message, and it's OK,
because it has fulfilled its intended goal. It can die now. Other processes
may later start their own typing threads partially based on this one.

---------------------------------------------------------------
Alexander Chislenko <http://www.lucifer.com/~sasha/home.html>
---------------------------------------------------------------