> Geoff Smith wrote:
> >-If superior computer power/neural nets appear, and I can integrate them
> >with my brain, I will.
>
> What if you can't?
What if I sign up for cryonics, then tomorrow a meteor falls on me and
annihilates every last neuron in my brain? To me, these are the same
question. The answer: I have lost the game of life... but at least I
tried. Hopefully, I'll get an 'A' for effort. Maybe Tipler is right
about that Omega Point-- that's what I'll be hoping when I look up at that
meteor.
> >
> >-I will do whatever I have to to stay competitive with transhuman and
> >other human-built and extra-terrestrial intelligent agents who intend on
> >making my functions obsolete(by their own personal evolution)
> >
> Or what if people acting in concert with autonomous computers can
> outcompete you with your completely servile ones?
An excellent point... but the solution to this is simply to evolve
faster than my and everyone else's autonomous computers.
Maybe 'simply' is the wrong word.
> >I'm more intelligent than a computer right now (if you were really nasty,
> >you could argue this point), so why exactly will that change, if I adhere
> >diligently to the points above?
>
> Not saying it will, but it might, should either or both of my "what if"s
> come to pass. Also, the computers might take over at some point, or they
> might also see some advantage in cooperation.
I don't buy it.
I cannot see a highly superior population of autonomous computers caring
one bit about collaboration with human beings. This would be like us
collaborating with mosquitoes... why would we do it? They're
infinitely more stupid than we are, and all they do is suck our blood. I
prefer to swat them. Imagine if mosquitoes were our masters -- how long
would that last?
geoff.