Re: Let's hear Eugene's ideas

From: James Rogers (jamesr@best.com)
Date: Mon Oct 02 2000 - 15:28:54 MDT


On Mon, 02 Oct 2000, Brian Atkins wrote:
>
> Well as I said to Eugene- look around at the reality of the next 20 years
> (max). There are likely to be no Turing Police tracking down and containing
> all these AIs that all the hackers and scientists out there will dream up.
> And secondly, this whole "get into space" idea is also completely unlikely
> to happen within the said timeperiod. Do you have any /realistic/ ideas?

I am not a proponent of Turing police or even regulation; besides being a
fruitless strategy, it is something I disagree with on principle. I don't
even really care what an AI is used for as long as it doesn't bother me.
However, I am not at all convinced that someone won't end up pulling a
fantastically stupid stunt given the opportunity; note that I apply this
to a whole range of technologies, not just AI. I wouldn't bother to guess
what that stunt might be, but there would certainly be plenty of
opportunity to encounter new classes of unpleasant problems with AIs
running around. With AI, though, it would be far easier to inadvertently
make a mess than with many other technologies. Ambitious monkeys like to
push the envelope of questionable usage first, *then* decide whether it
was a good or moral idea in the first place.

The evolution of the situation is a bit more complex than has generally
been presented (Hal Finney touched on this well, IMO), and could very well
follow some difficult trajectories.

For example, one plausible trajectory is that an AI arms race will occur.
I expect AI intelligence levels to follow a (relatively) slow
evolutionary pattern, since I don't foresee a super-intelligent AI being
developed overnight, nor do I see any reason why that should be the case.
The strategic advantage of having an AI that expands at the most
voracious pace a government can manage (yet still theoretically control),
with the goal of bolstering its position to the detriment of other
governments' AIs, would make for an interesting and highly dynamic
situation.

While I am not a big space nut by any means, I find something wrong with a
picture where one has the resources of a good AI (and all the capital that
would go along with that) but still can't find a way to get off the planet
in a sustainable manner.

> For the record, as Eliezer described, SIAI does not plan for the kind of
> half-assed deployment scheme you indirectly attribute.

Good for you. When I look at the history of technological development and
the various evolutionary, social, and political trajectories those
technologies actually took, the landscape isn't as clear-cut as many of
our implicit assumptions would make it seem. I wasn't trying to single
anyone out. I was trying to make the point that, historically, most
scientists have had a very poor track record of seeing a technology
follow their own intent rather than the intent of those with buckets of
money and lots of power (such as governments). Having a comprehensive
deployment strategy and contingency plan well ahead of time goes a long
way toward heading off these types of issues.

-James Rogers
 jamesr@best.com


