From: Emlyn O'regan (oregan.emlyn@healthsolve.com.au)
Date: Wed Jul 02 2003 - 18:07:57 MDT
> -----Original Message-----
> From: Rafal Smigrodzki [mailto:rafal@smigrodzki.org]
> Sent: Thursday, 3 July 2003 10:20 AM
> To: extropians@extropy.org
> Subject: RE: Cryonics and uploading as leaps of faith?
>
>
> Emlyn wrote:
>
> > No idea. But it is crucial to my argument above that I agree that all
> > functions of intelligence seem to be fulfilled (or at least fulfillable)
> > by non-conscious mundane algorithms, implemented in wetware or other
> > physical substrates. This is my quandary.
>
> ### But are you sure that the algorithms are not conscious? Have you
> asked them? (this is not mocking, just pointing out that our conviction
> about other people's being conscious also depends on analysis of
> behavior).
>
> I would think that a visual algorithm capable of assigning color to
> objects (i.e. analyzing the visual scene to assign putative reflectances
> to objects, not merely measuring the spectral characteristics of patches
> of the visual scene, like a spectrophotometer would) does have a
> subjective experience of color, indistinguishable from the experience of
> some parts of our occipital cortex (separate from the rest of the brain).
Is this part of the Strong AI conjecture? I think so. I guess if you allow
that consciousness is a continuum, indivisible from the algorithm that
conjures it, and scaling with that algorithm's complexity, then you could
propose some kind of consciousness for any algorithm. I wonder what it
feels like to be a bubble sort? Probably not like much at all; no
self-awareness.
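For concreteness, the entire "mental life" of a bubble sort fits in a few
lines. A minimal Python sketch (purely illustrative, nothing here is from
the thread):

def bubble_sort(items):
    # The algorithm's whole inner state: two loop counters and one
    # pairwise comparison at a time. No self-model, no memory across
    # runs -- nothing for awareness to attach to.
    n = len(items)
    for i in range(n):
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]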
That's something to think about while I'm banging out yet another business
app.
>
> ------------------------------
> >
> >>
> >> I rather think that "I" is a side effect of certain information
> >> flows, maybe even atemporal states of mathematical entities.
> >
> > I think Egan postulated something like this in Permutation City... we
> > are arrangements of information, and the permutations on the way you
> > interpret physical entities to be arranged are infinite, therefore
> > everything internally consistent exists somewhere as a complex mapping
> > from some piece of reality to the target pattern (although I think the
> > mapping would often require more information than the target pattern
> > embodies).
> >
> > But if you believe this, then identity really disappears in a puff of
> > logic. Why would we even bother with this reality if we always exist
> > in the greater platonic pattern space?
>
> ### Because "not bothering" results in unpleasant subjective
> experience.
> Also, what do you mean by "existence"? Minds which do not
> bother to exist,
> have a smaller measure in this platonic pattern space, which
> might for some
> other minds be a reason enough to bother.
>
> Rafal
Well, wait up there. An infinite space can't really be partitioned like
that. You either take up none of it, a finite subset, or an infinite
subset of cardinality equal to or less than that of the whole. Is it clear
that making the effort to exist, or ensuring that there are 10 copies of
yourself rather than 1, actually changes your share of the space of
possibilities in any meaningful way?
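Although, a caveat to my own objection: cardinality isn't the only way to
carve up an infinite space. If, as Rafal's "measure" suggests, each
pattern carries a weight and the weights sum to something finite, then
extra copies genuinely increase your measure even though the cardinality
of the space never changes. A toy Python sketch, assuming (purely for
illustration) a geometric weighting over a countably infinite family of
patterns:

# Pattern k carries weight 2**-(k+1), so the weights over the whole
# infinite family sum to 1 even though the family is infinite.
def weight(k):
    return 2.0 ** -(k + 1)

# "You" are pattern 3; your measure with a single instantiation:
one_copy = weight(3)

# If 10 distinct patterns all count as instantiations of you, your
# measure is the sum of their weights -- strictly larger, even though
# the space of possibilities is exactly as infinite as before.
ten_copies = sum(weight(k) for k in range(3, 13))

print(one_copy)    # 0.0625
print(ten_copies)  # ~0.1249, roughly double the single-copy measure

Whether the real pattern space admits any such weighting is, of course,
exactly the question.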
As to unpleasant consequences, that seems like a rather sad
punishment-avoidance motivation, unworthy of a rational being. Exist
because you are scared not to. Yuck.
Arguments about striving to survive because we evolved to (as brought up
by Brett) are explanatory, but not sufficient justification in the light
of a philosophy (transhumanism) that rejects the constraints of naturally
selected mental structures. Wanting life because we evolved to want it
isn't really good enough. Is there a better reason?
Emlyn