From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Thu Sep 04 2003 - 10:26:45 MDT
On Fri, 5 Sep 2003, Brett Paatsch wrote:
> [snip] But if folks will go out of their way to vote just to keep people
> they don't like out of power and if they are already concerned about
> jobs, how is it that the AI with its human proxies (who'd also be
> massive beneficiaries of its wealth creation strategies and ability to
> draw better future maps etc) would not evoke a huge backlash?
Brett, it depends to some extent on whether AI and nanotech coevolve.
AI+Nanotech allows people to live essentially for "free" (perhaps even
advanced biotech can pull that rabbit out of the hat). AIs alone --
if they are "owned" (now we get to start an AI "slavery" discussion) and
are skilled and in demand -- might allow one to live for free as well.
Alternatively, nanotech alone -- if there are a sufficient number of
nanotechnologists and/or computers doing the design work -- might
allow one to live for free. So there need not be a "huge backlash".
I also tend to disagree that one needs human proxies to support most
AI work. One does need the computer resources and interfaces to
reality, but after that I think humans are out of the loop. I need
only interface my AI to my bank and/or broker accounts; after that
I can ignore it except to check up on things from time to time. Of
course one has to trust the AI and the interfaces, but one would
hope we are getting better at developing such things.
However there are significant risks that arise if an amoral AI cracks
a widely deployed peer-to-peer network -- such as the Kazaa network --
and installs itself on millions of machines. The last month has clearly
demonstrated that security holes provide the means for viruses and
worms to capture millions of machines in a brief period of time.
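A back-of-the-envelope model shows why "a brief period of time" is no
exaggeration. The sketch below is my own toy logistic-growth model, not
anything from measured worm data; the host count, seed size, and spread
rate are purely illustrative assumptions.

```python
# Toy logistic model of worm propagation. All parameters are
# illustrative assumptions, not measurements of any real worm.
def infected_over_time(total_hosts, initial, rate, hours):
    """Each hour, every infected host compromises `rate` new hosts on
    average; growth slows as the vulnerable pool is exhausted."""
    counts = [initial]
    infected = initial
    for _ in range(hours):
        new = rate * infected * (1 - infected / total_hosts)
        infected = min(total_hosts, infected + new)
        counts.append(infected)
    return counts

counts = infected_over_time(total_hosts=1_000_000, initial=10,
                            rate=2.0, hours=24)
```

With these assumed numbers -- a million vulnerable hosts, ten initial
infections, each host finding two victims per hour -- the entire pool is
saturated well inside a day, which matches the qualitative behavior of
the fast worms of recent months.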
How long before someone produces an evolving virus/worm, perhaps akin
to the various evolving SPAM messages, that can defeat the anti-virus
filters? Put an amoral AI on top of that and you potentially have a
*real* problem. One way to look at the script-kiddies of today is
to view them as really limited AIs. So look at the problems they cause
and then imagine what happens if they transfer their limited intelligence
into the millions of machines that are vulnerable.
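To make the "evolving SPAM" point concrete: a filter keyed to an exact
signature is defeated by a handful of random edits. This is a benign toy
of my own construction -- the signature string, the mutation scheme, and
the message are all hypothetical -- but it shows why static pattern
matching loses to even the crudest evolutionary pressure.

```python
import random

random.seed(3)  # fixed seed so the toy run is repeatable

SIGNATURE = "buy cheap meds now"  # hypothetical exact-match filter rule

def naive_filter(message):
    """Exact-substring filter of the kind evolving spam easily defeats."""
    return SIGNATURE in message

def mutate(message):
    """Randomly perturb one character -- a crude stand-in for the
    rewriting tricks evolving spam messages use."""
    i = random.randrange(len(message))
    return (message[:i]
            + random.choice("abcdefghijklmnopqrstuvwxyz ")
            + message[i + 1:])

original = "hello friend, buy cheap meds now, limited offer"
variant = original
while naive_filter(variant):   # keep mutating until the filter misses
    variant = mutate(variant)
```

A few blind mutations suffice; an amoral AI steering the mutations
instead of chance would get there far faster, while keeping the message
readable to the human target.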
Robert
This archive was generated by hypermail 2.1.5 : Thu Sep 04 2003 - 10:36:16 MDT