"J. R. Molloy" wrote:
>
> Harvey Newstrom wrote,
> > Yes, some of the arguments boil down to definitions and word
> > games. It avoids the questions, but doesn't address the real
> > question. If there is no consciousness, then I merely need to
> > find a better word for whatever it is I am trying to save. The
> > question and goal remains, even if we keep changing the words.
>
> Words mean things. We need to throw out words that refer to false
> notions such as phlogiston, vitalism, and consciousness. It's not
> just a question of semantic convenience. If a word is wrong, get rid
> of it. What you save via uploading is the information stored in your
> brain and body.
I admit I'm a bit superstitious on this topic. John and I have gone back
and forth on this for years. John's concept of a destructive scan
producing a copy indistinguishable from the original sidesteps the
question of what happens when the original is not, in fact, destroyed
during the scan. So while the copy may be as good as the 'real thing',
there is something basically counter-intuitive about the idea that my
uploaded copy is indistinguishable from ME, or is me for all practical
purposes.
The way I get around the squeamishness of blithely zapping the original,
or of having to deal with leftovers, is to embrace the 'soft upload' as
the only true 'transfer of consciousness': the individual augments their
mind with artificial components to such a degree that eventually the
biological component dies but is not missed by the conscious entity.
It is tantamount to taking a '57 Chevy and slowly replacing parts: the
alternator, the points, the carb, the muffler, the head, etc. At some
point in the future, there will be no original parts on it, but its
history, its provenance, makes it a superior vehicle to one that has
been reproduced by some CAD/CAM system from scans of original parts.
A body infested with nanites whose job is the slow replacement of dying
neurons with artificial substitutes is analogous to this.
I can much more easily accept that the resulting entity, eventually
living entirely on an artificial substrate, is an aware 'human' than I
can a mere scanned upload.
Do I think scanned uploads are inferior to soft uploads? Not
necessarily. How would they be any different from an AI grown from
scratch on an artificial substrate? Probably very little, except that
the AI would likely have far less junk code involved. I am just not sure
whether following that route would retain the ME-ness of ME.
This is likely a significant solipsistic conceptual stumbling block, one
that is entirely 'in my head'. I don't deny it.
However, anything that would make me comfortable going through such a
transition ought to be attempted, if only to make the procedure more
acceptable to the population at large, and to make the resulting
entities more likely to be treated as 'human' by the rest of humanity.
That would greatly reduce support for the Luddites who would oppose
granting civil rights to such entities, which can only be to the good.