Robert J. Bradbury <firstname.lastname@example.org>
> (I agree that Brent is rapidly skipping from Robin's "simple"
> uploads to full sensory interleaving without a huge amount of work
> on exactly how this would be accomplished. IMO, that is the
> *really* hard part -- mapping individual neural nets onto a common
> experience message exchange format.)
This is intentional. Sure, complete consciousness sharing is
the *really* hard part, but that isn't what I'm talking about. I'm
talking about something much simpler. I think our visual awareness
system is much simpler than all that. Once we have the tools to
understand how we are visually aware of things, we'll be able to make
more progress at understanding how we know what our thoughts are made
of.
Our primary visual cortex is what holds our conscious
knowledge of what we see. It's what produces the color qualia which
our visual awareness is made of. Once we discover how and why our
visual cortex produces about 100 degrees of visual awareness centered
on the direction our eyes are pointing, it can't be that hard to
augment that system and increase it to, say, 200 degrees of visual
awareness of what is around us, using input from additional
artificial eyes, or wider-field-of-view replacement eyes and
augmented optic nerves...
After things like that, it can't be that much more difficult
to engineer a more independent visual cortex that is integrated into
the same conscious visual awareness space but works from the input of
an entirely different set of eyes, like, say, those of your spouse,
when she chooses to give you access to the data her eyes are
producing.
Sure, this is going to take a lot of advancement and
understanding before we can do any of it. But how could we not be
able to do any of this if we can do "simple" uploads, at least
conscious ones that use the same qualia the original used to represent
its conscious knowledge?
This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:34:36 MDT