Re: Subjective counterfactuals

Darin Sunley (umsunley@cc.umanitoba.ca)
Mon, 06 Apr 1998 02:34:55 -0500

Dan Fabulich wrote:

> no ordinary playback will pass any reasonable Turing test.

It seems to me that this whole debate becomes a little clearer when stated in terms of the ontological levels of the agents involved.

An example of an ontological level is Searle's old saw about a simulation of a thunderstorm making the inside of the computer wet. I wouldn't expect a simulation of a thunderstorm, or playback thereof, to make the inside of my computer wet. But I WOULD expect it to make any virtual cows in the same simulation virtually wet. The virtual cows and the virtual thunderstorm are on the same ontological level. The virtual cows and thunderstorm are all one level DOWN from the level of the hardware running the simulation, and of the programmer who wrote the simulation.
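To make that concrete, here's a toy Python sketch (all the classes and names are mine, purely illustrative): effects propagate freely among entities inside the simulation, but nothing inside can touch the host that runs it.

# Toy sketch (hypothetical, names mine): effects propagate only among
# entities at the same ontological level, never up to the host.

class Cow:
    def __init__(self):
        self.wet = False

class Thunderstorm:
    def rain_on(self, entities):
        for e in entities:
            if isinstance(e, Cow):
                e.wet = True          # same level: the virtual cow gets virtually wet

class Simulation:
    """One ontological level: everything inside interacts only with other things inside."""
    def __init__(self, entities):
        self.entities = entities
    def step(self):
        for e in self.entities:
            if isinstance(e, Thunderstorm):
                e.rain_on(self.entities)

class HostComputer:
    """One level up: runs the simulation, never gets rained on."""
    def __init__(self):
        self.wet = False              # nothing inside the sim can change this
    def run(self, sim, steps=1):
        for _ in range(steps):
            sim.step()

cow, storm = Cow(), Thunderstorm()
host = HostComputer()
host.run(Simulation([cow, storm]))
print(cow.wet, host.wet)              # True False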

Isn't the whole idea of a Turing test that it be done between agents on the same ontological level? When we program a computer to attempt a Turing test, we are equipping it with sensors and knowledge about our ontological level, and the ability to communicate with our ontological level. We are, in short, attempting to keep the computer's processing on the same ontological level as the hardware, instead of doing the processing in a domain one ontological level down.

Just as we view the agents within simulations we create as being one ontological level down, we can imagine agents that are one ontological level UP from us. Some theologians call them Gods.

The framework of ontological levels gives some insight into classical theological arguments. Aquinas's argument from First Cause can be restated to say "Within a given ontological level, there is a first effect, caused by an agent in a higher ontological level." Anselm's argument can be characterized as "There is a highest ontological level." Aquinas and Anselm characterized these statements as proofs, but in fact they are statements which, if both proven, would serve as a proof of the existence of God.

Returning to the point of this thread, the framework of ontological levels sheds some light on the problem of the computability of consciousness, especially as regards the consciousness of playbacks of conscious agents.

Consciousness is a label assigned by one agent to another. Postulate an ontological level containing two agents, each of whom believes the other to be conscious. Let them make a recording of their interactions, to the greatest level of detail their environment allows, down to their analog of the Heisenberg limit. Let one of them program a simulation containing two other agents. Neither of the first two agents believes that either of the agents in the simulation is conscious.
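A toy sketch of that setup (hypothetical Python, names mine): if the agents are deterministic and the recording is complete, a playback yields exactly the transcript a fresh live run would, so from one level up there is nothing to tell them apart by.

# Toy sketch (hypothetical, names mine): from one level up, a complete
# recording of two deterministic agents is indistinguishable from the
# agents themselves -- it yields the very same transcript.

class Agent:
    def __init__(self, name):
        self.name = name
        self.heard = []
    def reply(self, message):
        self.heard.append(message)
        return f"{self.name} responds to '{message}'"

def run_live(a, b, opening, rounds=3):
    transcript, msg = [], opening
    for _ in range(rounds):
        msg = a.reply(msg); transcript.append(msg)
        msg = b.reply(msg); transcript.append(msg)
    return transcript

class Playback:
    """Replays a recorded transcript verbatim; no computation inside."""
    def __init__(self, transcript):
        self.transcript = list(transcript)
    def run(self):
        return list(self.transcript)

recording = run_live(Agent("Alice"), Agent("Bob"), "hello")
assert Playback(recording).run() == run_live(Agent("Alice"), Agent("Bob"), "hello")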

From OUR point of view, one ontological level up from these agents, neither seems conscious. Both are, from our point of view, completely deterministic, and we see no meaningful distinction between them and their recording.

From THEIR point of view, however, the recording seems dramatically less conscious than they are. Both agents in the original level would pass Turing tests administered by the other. Neither of the agents in the recording would pass a Turing test administered by agents from the original level.

Playbacks are all one ontological level down from the original material, from the point of view of agents within the original material. From the point of view of agents one ontological level UP from the original material, a playback and the original would be on the same level. Similarly, could a being on the original level meaningfully distinguish an agent one level up from a recording of an agent one level up?

I may be running in a simulation right now. The beings that created the simulation are one level above me. I may create a simulation. The beings within that simulation are one level below me. From my point of view, a "playback" of a human being is at the same ontological level as a simulation I write.
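As a bit of toy bookkeeping (again hypothetical, names mine), the levels are just offsets relative to whoever does the running, and where the recorded material originally came from doesn't enter into it:

# Toy bookkeeping (hypothetical, names mine): a simulation or playback
# sits one level below whoever runs it, regardless of the origin of the
# recorded material.

MY_LEVEL = 0

def level_when_run_by(runner_level):
    """A simulation or playback lives one level below whoever runs it."""
    return runner_level - 1

creators_level = MY_LEVEL + 1                                  # beings simulating me, if any
my_simulation = level_when_run_by(MY_LEVEL)                    # -1
playback_of_human = level_when_run_by(MY_LEVEL)                # -1: same level as my simulation
playback_run_by_creators = level_when_run_by(creators_level)   #  0: same level as me
print(my_simulation, playback_of_human, playback_run_by_creators)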

Tangent: how about an "Inverse Turing Test", where we try to write a computer program that can distinguish humans from computers? :)

Darin Sunley
umsunley@cc.umanitoba.ca