Re: Let's hear Eugene's ideas

From: J. R. Molloy (jr@shasta.com)
Date: Tue Oct 03 2000 - 12:16:22 MDT


James Rogers wrote,

> For example, one plausible trajectory is that an AI arms race will occur.
> I expect AI intelligence levels to follow a (relatively) slow
> evolutionary pattern, since I don't foresee the development of a
> super-intelligent AI overnight nor see any reason why this should be the
> case.

An AI arms race would supply the motive for speeding up development of
high-level AI. With genetic programming and evolvable algorithms, the rate at
which AI breeds improved AI tends to accelerate, along the same lines as the
technological singularity at large. That could supply the reason why an SI
emerges overnight.

> The strategic advantage of having an AI that expands at the most
> voracious pace a government can manage (yet still theoretically control),
> with the goal of bolstering its position at the detriment of other
> governments' AIs would be an interesting and highly dynamic situation.

Interesting yes, and paradigm shattering as well. As others have pointed out,
humans rule on Earth because they are the smartest living creatures -- or
because they successfully convince others that they *ought* to rule. When
artificial life breeds greater than human level intelligence, those who believe
in the rule of intelligence will need to switch their allegiance to the SI. No
one wants someone less intelligent than themselves for a boss (despite the fact
that in my opinion -- and that of Scott Adams -- managers are often less
intelligent than the people they supervise).

--J. R.

"Violence is the last resort of incompetence."
--Collie Flower


