[snip]
>It would be meaningless to upload into a system that didn't duplicate the
>limbic system - why upload only *part* of the brain? Uploading will most
>likely encompass everything from the brainstem and upwards (and I'll
>admit that I have been thinking about the spinal cord too). The point of
>the Strong AI hypothesis is that duplicating emotion is only a matter of
>the right software structure; the hardware is irrelevant as long as it
>can run it.
>
I get you. Yes, the types of hardware used are irrelevant - it might
even be found expedient to use some sort of genetically engineered
brain-like substance! I just wanted to clarify (more for myself than
anyone else) that *some* implementation of R-complex and limbic
*functions* would be necessary for fully human experience as an
uploaded entity. And I see what you mean about it making an ideal
first base camp for AI, being less complex than neocortical
functions and therefore easier to implement.
Ramble:- That sets me thinking: 'fully human'. Yes, I suppose one would
have to start off like that, to make it easier to 'get one's bearings',
so to speak. But later on you could turn sub-routines on and off at
will: how would that affect your psychology? Take pain: it's
a warning sign. People who don't feel pain die an early death, it seems.
There would be a strong temptation for the uploaded to switch off pain
circuitry quite a lot of the time. Would that make you more of a
risk-taker, or simply expose you to more risk?
E.g., would your self-monitoring then be like a strategy game such as
Command & Conquer, where all you are presented with is information on
the state of battle, and the condition of your forces, but you don't
'feel' anything?
(Interestingly, some games people are talking about putting more
'feedback' into games - resistance from joysticks, mild shocks when
you're hit, etc.)
Hmm, interesting.
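Just to make that 'felt pain vs. status readout' contrast concrete,
here's a toy sketch in Python - every name in it is made up, and it's
only meant to illustrate the difference between pain as an interrupt
you can't ignore and a C&C-style dashboard you merely consult:

    class DamageMonitor:
        """Tracks damage reports; presents them either as 'felt' pain
        (an interrupt that demands attention) or as a status readout."""

        def __init__(self, felt_mode=True):
            self.felt_mode = felt_mode  # True: pain is 'felt'; False: readout only
            self.reports = []           # accumulated damage reports

        def report_damage(self, location, severity):
            """Record a damage report; in felt mode, interrupt immediately."""
            self.reports.append((location, severity))
            if self.felt_mode:
                # Crude stand-in for pain as an interrupt you cannot ignore.
                raise RuntimeError(f"PAIN: {location} damaged (severity {severity})")

        def status(self):
            """Dashboard view: the same information, but nothing is 'felt'."""
            return [f"{loc}: severity {sev}" for loc, sev in self.reports]

    # Switch the 'pain circuitry' off and you only ever see the readout.
    monitor = DamageMonitor(felt_mode=False)
    monitor.report_damage("left hand", 7)
    print(monitor.status())  # ['left hand: severity 7'] - informative, but painless

The open question is whether a mind that only ever consults the
readout would treat 'severity 7' with the urgency that felt pain
forces on you.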
Guru George