Re: Mind Survival Strategies (was "Mind machines, a badly ..")

The Baileys (
Tue, 03 Nov 1998 15:40:44 -0500

Scott Badger writes:

>Are you conceptualizing this as though the two entities are
>communicating with each other? If so, the acceptability of
>the newer entity by the older should be fairly clear by judgement
>time. If the newer entity, quite pleased with it's own development
>despite the criticisms of the older entity, suspected that it was
>about to be overwritten, then wouldn't it make a secure copy of
>itself? No, your scenario would require that only the older entity
>could have the ability to overwrite or make copies. Yes? Otherwise,
>info storage capacity problems are inevitable.

Your point is well taken. I've implied as much in my writing but didn't specifically state it. The new identity would be afforded only provisional rights, and instantiating its own gatekeeper AI would not be among them.

I've anticipated another problem with this one-tier judgement approach. The new me could desire to make some drastic change to its mental configuration. However, knowing that it will be judged by the backup me, and believing the backup me might reject the drastic change, the new me does not make it. The backup me, ignorant of the intentions of the new me, judges the new me to be acceptable and overwriting occurs. The new me then makes the change (and is always assured of being able to do so, since it is now the point of reference). I've been fooled by myself! Perhaps there will be a way for the backup me to know the intentions of the new me. However, if the new me only conceives of the drastic change post-overwriting, then the entire setup is compromised.

The solution to this problem would be to have a periodic judgement by the original (t=0) identity. At the end of every period, the t0 identity would judge the identity resulting from that period's experimentation and changes, and the accepted identity could be archived as usual. If t0 decides that t1 is acceptable, then t1 is allowed to continue onwards. At t=2, t2 would be reviewed by t0; if t0 decides t2 is unacceptable, the process can begin again from t0 or t1. To this extent, all my identity changes would be vulnerable to my original identity. The biggest danger here is a lack of understanding on the part of t0: t0 might not understand the positive aspects of a t(n) configuration, and thus t0 could place an absolute limitation on the development of my identity over time.
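For what it's worth, the periodic-judgement scheme can be sketched in a few lines of code. This is purely illustrative: the identity states, the `evolve` step, and t0's `judge` test are all hypothetical stand-ins I've invented for the sketch, not anything resembling a real implementation.

```python
def run_periods(t0, evolve, judge, periods):
    """Evolve an identity period by period, letting t0 veto each result.

    - evolve(identity): the identity after one period of changes.
    - judge(t0, candidate): t0's verdict; True means acceptable.
    - On rejection, fall back to the most recent archived (accepted) state.
    """
    archive = [t0]              # every accepted identity is archived
    current = t0
    for _ in range(periods):
        candidate = evolve(current)
        if judge(t0, candidate):
            archive.append(candidate)
            current = candidate
        else:
            current = archive[-1]   # revert to last accepted identity
    return current, archive
```

Note that t0 itself never changes here, which is exactly the danger described above: every t(n) configuration, however much improved, remains vulnerable to t0's fixed understanding.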

When I have time, I'll try to develop this whole idea more completely (and more coherently!).

Doug Bailey