At 05:45 PM 08/07/2001 -0400, Mike wrote:
>...A scanned mind would, with its much more
>logical circuitry of its new substrate, be able to more accurately
>calculate the risks and assign trust ratings based on observed behavior,
>which are generally far less and far higher, respectively, when analysed
Ouch! I gotta argue with that.
A scanned mind should work just like the original. The substrate should
make no difference to its actual operation. That is the whole idea, of
course, of copying yourself into a machine in the first place. The copying
process would carry any tolerant attitudes or psychoses along with it.
Once your mind is in there you can indulge in a little self-betterment:
tweaking your numeracy skills, improving your memory, enlarging the pattern
recognition area in what was your visual cortex... but that happens later,
and only if desired.
> Automatic assumption of Other as Enemy is a sign of a
>diseased mind, which if not healed by the process of upload, would
>prevent upload due to system conflicts.
I doubt that screwy belief systems can ever be guarded against in the
upload process. The brain structures that hold them almost certainly "look"
the same as those that support rational thought patterns. In fact, putting
such safeguards in place would itself probably be a very dangerous thing,
too easily perverted. (No gay guys or lesbians can upload as they are
obviously sick, as are socialists, and blacks and Jews are subhuman so are
not worth the space. Ugh!! You can see how this could go.)
But you shouldn't need to guard against people wanting to upload anyway.
Once there, they have an infinity of infinite virtual universes available
to them. There is little possibility of clashes for resources as the
virtual worlds would enable them to live quite literally as gods. Most
won't need or want to know about the outside world.
Back in the "real" world: does anybody seriously expect that the human race
that instantly puts down any dog that dares to bite a human will ever
tolerate any machine intelligence that even raises its voice to humans?
There is no way that AIs will ever be allowed to become a threat to humans.
They will supersede humanity quietly, even while doing their best to help
us. We need never worry about force. We are much more likely to be
pampered to oblivion.
Q. What is the similarity between an elephant and a grape?
A. They are both purple... except for the elephant.
Virtual Reality Association http://www.vr.org.au
This archive was generated by hypermail 2b30 : Fri Oct 12 2001 - 14:39:42 MDT