From: Lee Corbin (lcorbin@tsoft.com)
Date: Wed Jul 02 2003 - 23:51:37 MDT
Emlyn wrote
> Lee wrote:
> > > (much deduction, investigation, leading to negation of
> > > concept of conscious self; self is an illusion, "I"
> > > am just a pattern of information)
> >
> > Well, it depends (of course) on what we ought to
> > properly believe the "self" to be. I think that
> > there exists a proper notion of self completely
> > compatible with materialism.
>
> Care to elucidate? I don't think I know what it is. It certainly isn't "the
> will to survive"; that's just a partial specification for a decision making
> algorithm. It's also not an algorithm modelling the universe including
> itself, including the information that the self model represents it; that's
> just data and process again.
Right, the self is not "the will to survive", but I *would*
say that it is a sort of algorithm modeling the universe
including itself. But then I have an overly broad view of
what *algorithms* encompass 8^D
Indeed, I have never thought to try to define "self". But
it seems to me that it is a species of *person*, which I have
tried to define. Yes, persons are algorithms or programs,
but ones with very special features. Some kinds of programs
map the external rooms in which they are housed, and often
(in the case of animals) do so only from the perspective of
the center; that is, they see the world only through their
own eyes. Yet this is sufficient for personhood, and for a
self.
So we begin to enumerate the characteristics that a program
(ideally housed in a robot body) ought to have to qualify as
a person (i.e., to have a self). Now for each of the following,
I expect that there are exceptions, namely, pathological cases
in which a criterion is voided:
1. the program has a belief in a boundary that separates
itself from "other"
2. the program must have a bias in favor of interpreting
all knowledge in terms of how it itself is affected
3. it must strive for consistency in its maps; it will not
do for it
   (a) to fall into repetitive modes of action, each undoing
       the last
   (b) given a stimulus, to apply at random two entirely
       different behavior routines
4. it must act (at least at the lower levels) to do what is
necessary or traditional for it to survive (e.g., metabolism)
and of course this list is not complete. And I repeat, any
one of these conditions can probably be found unnecessary in
some example; e.g., for the last, a robot might be constructed
out of some indestructible materials and placed in an environment
where it could not possibly come to harm.
Thus trees do not have selves, but most animals do.
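The criteria above can be caricatured in a few lines of code. This is purely my own toy sketch, for illustration only; every name in it is invented, and nothing here is offered as a serious model of a mind:

```python
# A deliberately crude caricature of the four personhood criteria
# above. All names (CandidatePerson, world_map, etc.) are invented
# for illustration.

class CandidatePerson:
    def __init__(self):
        self.self_boundary = True   # criterion 1: believes in a self/other boundary
        self.world_map = {}         # its (self-centered) map of the world

    def interpret(self, fact):
        # criterion 2: a bias toward "how does this affect *me*?"
        return ("effect on self", fact)

    def update_map(self, key, value):
        # criterion 3: strive for consistency in its maps --
        # don't hold two contradictory beliefs at once
        if key in self.world_map and self.world_map[key] != value:
            value = self.reconcile(self.world_map[key], value)
        self.world_map[key] = value

    def reconcile(self, old, new):
        # here we simply keep the newer observation; a real mind
        # would of course do far more
        return new

    def maintain(self):
        # criterion 4: do what is necessary to survive
        return "metabolize"

p = CandidatePerson()
p.update_map("door", "open")
p.update_map("door", "closed")   # reconciled: newer belief wins
```

As the caveat above says, any one of these checks could be voided in a pathological case; the sketch only shows that the criteria are the kind of thing a program could, in principle, satisfy.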
> How can experience arise from that??
I hope that my answers are half as good as your questions! 8^D
It is necessary to proceed in the 3rd person, because 1st
person accounts fail. Therefore, we are asking why the
organism or robot has experiences, and we keep our point
of view OUTSIDE the subject we are investigating.
The entity acts on its programming and its sensory input---
that much is clear. Evolution (or Nature, or the next generation
programmers) has also seen fit to equip the entity with storage
mechanisms for currently received information and two kinds of
processing that attend this "memory". One kind funnels the
incoming sensory data, together with recent memories and current
lusts or drives, into what we should call a state of mind. The
second kind nearly instantaneously matches this state against the
entire memory store by an incredibly efficient algorithm, and
funnels any hits that result into further processing, yielding
actions or (in advanced entities) plans for actions.
To us, looking down upon the entity or animal, this processing
is called "experience", and we say that the entity is undergoing
one.
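The two kinds of processing just described can be sketched in a few lines. Again this is only my own toy illustration, with all of its names and contents invented; it is not meant as a model of any actual brain:

```python
# A toy sketch of the two processing stages described above:
# (1) fuse current input with recent memories and drives into a
#     "state of mind";
# (2) match that state against the whole memory store and funnel
#     any hit into further processing -> an action.
# All data here (memory_store, drives, etc.) is invented.

from collections import deque

memory_store = {"sharp pain": "withdraw hand",
                "loud noise": "turn toward sound"}
recent = deque(maxlen=5)       # short-term buffer of recent inputs
drives = ["avoid harm"]

def form_state(sensory_input):
    # first kind: funnel input, recent memories, and current
    # drives into a single state of mind
    recent.append(sensory_input)
    return (sensory_input, tuple(recent), tuple(drives))

def match_and_act(state):
    # second kind: match the state against the entire memory
    # store; a hit is routed onward as an action
    sensory_input = state[0]
    action = memory_store.get(sensory_input)
    return action if action else "no learned response"

state = form_state("sharp pain")
print(match_and_act(state))    # -> withdraw hand
```

From the outside, the whole loop is just storage, matching, and routing; calling one pass through it an "experience" is the 3rd-person move the paragraph above makes.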
Now all this makes perfect sense, I would think, and hardly comes
as any sort of surprise or revelation---it seems quite clear why
entities would have arisen in nature having these features. The
only problem comes (perhaps) when the old issue is raised about
"what it is like to *be* one of these entities"; and we end up
trying to delve into 1st person accounts.
> Why are they aware? What function does it serve? Algorithms and qualia don't
> seem to mix well (if qualia correctly represents what I am trying to
> describe, which I suspect it doesn't).
Their awareness (continuing the 3rd person account) serves
to ensure their survival in a Darwinian universe. I
have never found any discussion of qualia to advance understanding
an iota (but see below). Keeping the 3rd person perspective,
one can see what is happening in the monkey brain of the subject,
and we can (in principle though not yet in exact detail) account
for its cries, its fears, its cunning, and its hopes.
> > I am finding that emphasizing the difference between
> > Tegmark's "frog perspective" and "bird perspective"
> > helps clarify an old dichotomy.
>
> I don't know this example.
Tegmark tackles Everything (as in Theory Of) in his cosmological
paper http://it.arXiv.org/abs/astro-ph/0302131 . He discusses
(briefly, buried deep within it) whether and when to use the 1st
person frog perspective or the 3rd person bird perspective. An
unabashed platonist, Tegmark says that if you want to have the big
picture, want to know the truth, then go for the bird perspective.
As for me, when I look down at the monkey on the laboratory table,
I can see clearly all the limitations that his brain has for
understanding the world about him, and even where his grasp of
the world is good, how biased and prejudicial are its internal
renderings.
So, then, stepping back a moment, I can readily understand that
I myself must have similar shortcomings, and suffer (not too
strong a word) from a limited perspective. So just as we can
only infer by analogy what four dimensions are "like", so we
can only (IMO) keep looking at that monkey and repeating, "my
own experience must be something like that".
Or I can imagine being strapped down in a large circular operating
room. On the beds next to me are other people, very much like me.
Overhead is a giant view screen that shows in incredible 23rd
century detail the internal workings of the brains of the subjects
(one of which I am) in the room.
I note the essential similarity of everyone's brains, and even the
amazing degree to which the language regions are remarkably the
same (they all speak the King's English, it turns out). So of all
these nearly identical depictions, one is mine.
Now the experiment begins. A needle is jabbed into the finger of
the first person and we all hear a yelp of pain. We note on the
overhead monitor how one of the brains has just undergone a well-
known reaction. Then another subject is poked, and we see the
identical transformation on another screen. Once again, neural
conduction from the finger to the arm to the subcortical and
finally the cortical regions is clearly shown. There are also
the attendant surprise and anger, and escape mechanisms both
voluntary and involuntary. The same glands, etc.
Suddenly my vision turns red with anger and *I* have an overwhelming
sensation of pain; yet another part of my mind verifies (or is able
to a couple of seconds later) that brain #201 has behaved exactly
as all the others. Yet, to me, there was obviously something very
special about #201.
What else can be said? What else needs to be said? I don't see
anything the slightest bit mysterious here, once I have done two
things:
1. FULLY and TOTALLY and UNIMPEACHABLY have renounced any form
of nonmaterialism, and accept that the physics is all that
there is
2. FULLY and TOTALLY and UNIMPEACHABLY have understood and
accepted that I can never escape being a particular one
of the entities.
Lee
This archive was generated by hypermail 2.1.5 : Thu Jul 03 2003 - 00:00:40 MDT