Re: Ethics of being a Creator

Anders Sandberg (asa@nada.kth.se)
26 Apr 1998 21:24:58 +0200


Henri Kluytmans <hkl@stack.nl> writes:

> Anders Sandberg wrote:
> >Huh? This doesn't follow. I can set up a big game of life simulation
> >(or something similar, perhaps Tierra) right now on this computer, but
> >even if I discovered intelligent entities inside it after a while I
> >wouldn't know what to do with them. Sure, that huge stretch of digital
> >code is an intelligent entity, but I have no idea of how it works, how
> >to merge its memories (which might be impossible even in principle for
> >an Alzheimer-like case, where the memory substrate would be
> >non-isomorphic over time) or what it would consider "healthy" ("Poor
> >humans, guts riddled with bacteria. I'll resurrect them completely
> >without any bacteria, then they will be healthy and happy!").
>
> I think all this can be solved in principle. The question is only
> how much time it requires.

It is also important to compare this to what is available to the
creator. An infinite superbeing might not have any problems, but here
we are talking about more modest superbeings like what we might become
in the future. I think there is a very real possibility that the
simulation can become so complex that the creator will have a hard
time understanding it and dealing with the emergent moral dilemmas. It
might be fairly easy to set up a huge cellular automaton with a fixed
rule set, and vastly harder to discern what is *really* going on
inside it and how to deal with it. Remember that the fate of class IV
CA rules is Turing-uncomputable, which means we cannot predict it any
faster than by running the automaton itself.
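
Just to make the asymmetry concrete, here is a rough sketch in plain
Python (the grid size, the random soup and the number of steps are all
arbitrary, purely my own illustration): writing the rule down is
trivial, but in general the only way to learn what the soup will do is
to run it.

  import random

  SIZE = 64   # small toroidal grid, just for illustration

  def step(grid):
      """One update of Conway's Game of Life on a wrap-around grid."""
      new = [[0] * SIZE for _ in range(SIZE)]
      for y in range(SIZE):
          for x in range(SIZE):
              n = sum(grid[(y + dy) % SIZE][(x + dx) % SIZE]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if (dx, dy) != (0, 0))
              # birth on exactly 3 neighbours, survival on 2 or 3
              new[y][x] = 1 if n == 3 or (grid[y][x] and n == 2) else 0
      return new

  grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]
  for _ in range(100):    # no shortcut: just keep stepping and watch
      grid = step(grid)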

> I expect it will not be that difficult for the creator to
> detect sentient lifeforms in simulations. For example it
> is also not so difficult for humans to detect self-replicating
> systems and moving systems in the "game of life". Of course
> these current universes are much too small to create sentient
> or even intelligent beings.

We are good at recognizing moving systems because we have evolved in a
world where this is essential. Self-replication is much harder; so far
detecting it has required fairly careful study, or very simple worlds
(like the parity automaton, where everything self-replicates). If the
automaton internally organizes itself as a 16-dimensional Fourier
component space we will have a lot of trouble figuring out what is
going on, what is sentient and what it is doing.
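
(For what it's worth, here is a toy version of that parity rule in
plain Python - my own illustration, with an arbitrary seed and grid.
The "self-replication" falls straight out of the rule being linear mod
2: after a power-of-two number of steps any seed reappears as four
shifted copies of itself. Spotting that tells us nothing about
recognizing replicators in richer worlds.)

  SIZE = 32   # toroidal grid, arbitrary size

  def parity_step(grid):
      """Next state of a cell = sum of its four orthogonal neighbours mod 2."""
      new = [[0] * SIZE for _ in range(SIZE)]
      for y in range(SIZE):
          for x in range(SIZE):
              new[y][x] = (grid[(y - 1) % SIZE][x] + grid[(y + 1) % SIZE][x] +
                           grid[y][(x - 1) % SIZE] + grid[y][(x + 1) % SIZE]) % 2
      return new

  grid = [[0] * SIZE for _ in range(SIZE)]
  grid[16][16] = grid[16][17] = grid[17][16] = 1   # any small seed will do
  for _ in range(8):   # 2^3 steps: the seed becomes four copies of itself
      grid = parity_step(grid)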

> >Even if I happen to be a posthuman jupiter brain, I will still be
> >limited in my knowledge about the behavior of complex systems such as
> >simulated universes.
>
> But you could learn...

Up to the limits set by complexity; even given an arbitrary number of
examples of class IV automata I would not become better at predicting
the behavior of a general class IV automaton (unless Penrose is right
and the Church-Turing hypothesis doesn't hold for some systems, but I
don't believe that). Of course, if I just limit myself to a certain
kind of world I might become a good creator for that kind, but I have
greater aspirations than that.

> Systems in "game of life" worlds can alter their surroundings.
> When they do this in an indirect way, this could be considered
> using tools. I don't see why tools can't also exist in "game of life"
> worlds.

What is a tool, what is an object and what is a creature? Very hard to
distinguish in this case, and likely impossible in others.

> I didn't assume it should be very easy. But it shouldn't be too hard
> to communicate with other intelligent sentient beings. We will have
> enough in common. It should be just as hard as communicating with
> any other intelligent sentient lifeform.

I think you have a view of intelligence as fundamentally convergent -
all intelligent beings will have things in common. I doubt it - an
intelligence evolved in a different world will have other ways of
perceiving it, other motivations and ways of planning, and this might
make communication nearly impossible.

[spoiler for Greg Egan's Diaspora!] As an example, Greg Egan invented
a form of intelligent life living in the 16-dimensional Fourier
component space created by the growth of Wang-tile equivalent
polysaccharides forming large sheets in the ocean of an alien planet
(yes, they are my favorite aliens so far :-). What could we discuss
with them? We live in fundamentally different worlds (different even
on the ontological level!), have a different physics, interact with
the world in different ways (no light, just touch) and don't even
share the same kind of time (as a sheet splits, history splits).
[Spoiler end]

> >(like Tierrans, with a world consisting of computing
> >nodes and nonlocal memory indexed by templates and energy appearing if
> >you do certain things but not other things)?
>
> Interesting, where can I find more about these Tierrans?

Tom Ray has written an amazing alife system called Tierra, and is now
working on a net version of it. If intelligent life ever emerges in
the Tierra-world we might call them Tierrans. There are likely links
to the project from the various alife sites on the net; it is fairly
famous.
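
(For the curious, a very rough sketch in plain Python of the
template-addressing idea mentioned above - purely my own illustration,
not Tierra's actual instruction set: a creature's jump carries a
template of nop0/nop1 instructions, and control transfers to the
nearest spot in the soup holding the complementary template.)

  def find_complement(soup, pos, template):
      """Search outward from pos for the complement of a nop0/nop1 template."""
      comp = ["nop1" if t == "nop0" else "nop0" for t in template]
      n, k = len(soup), len(comp)
      for radius in range(1, n):
          for start in ((pos + radius) % n, (pos - radius) % n):
              if [soup[(start + i) % n] for i in range(k)] == comp:
                  return start
      return None   # no matching template anywhere in the soup

  soup = ["nop0", "add", "nop1", "nop1", "mov", "nop0", "nop0", "jmp"]
  print(find_complement(soup, pos=7, template=["nop1", "nop1"]))   # -> 5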

> >I think it can be done, but it would be extremely hard to do, and
> >the process might be rather painful for the entities again (imagine
> >being resurrected all alone in a weird caricature of the real world
> >where *something* tries to communicate with you - and if you die,
> >you are immediately resurrected).
>
> The idea was they would be resurrected together with other
> individuals from their world. And why should the resurrection
> world be made a weird caricature of their real world? It could
> just as well be made as similar as possible to their real world.

OK, I resurrect everybody into a world exactly like ours, with the
slight change that miracles occur so that nobody dies or stays
dead. I think most people would regard that as a weird caricature. The
big problem is how to communicate - the creator might have to try out
all kinds of weird schemes ("What if I send serial messages? No, they
just eat them. Parallel messages? Oops, they died again. And they seem
to have killed my remote-manipulated Avatar...").

Apropos dealing with simulated worlds and the relationship between
creator and creation, Egan's _Permutation City_ has some half-baked
ideas. This is really virgin ethical territory, with links to
computation theory, xenobiology and Conway knows what else. Fun!

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y