On Mon, 02 Oct 2000, Eliezer Yudkowsky wrote:
> Looking over Eugene's posts, I begin to become confused. As far as I can
> tell, Eugene thinks that seed AI (both evolutionary and nonevolutionary),
> nanotechnology, and uploading will all inevitably end in disaster. I could be
> wrong about Eugene's opinion on uploading, but as I recall Eugene said to
> Molloy that the rapid self-enhancement loop means that one-mind-wins all even
> in a multi-AI scenario, and presumably this statement applies to uploading as
> well.

While I don't speak for Eugene, I think I understand what his primary
concern is.

From most of what I have seen thrown about, most singularity/AI
development plans appear to have extremely half-assed deployment schemes,
to the point of being grossly negligent. Most of the researchers seem
largely concerned with development rather than working on the problems of
deployment. The pervasive "we'll just flip the switch and the universe as we
know it will change" attitude is unnecessarily careless, and it is arguably not
even a necessary consequence of such research; but if the attitude persists,
all sorts of untold damage may follow. As with all
potentially nasty new technologies, you have to run it like a military
operation, having a vast number of contingency plans at your disposal in
case things do go wrong.

I personally believe that a controlled and reasonably safe deployment
scheme is possible, and certainly preferable. And contrary to what some
people will argue, nothing I have seen convinces me that controlled growth of
an AI is infeasible; it has to obey the same laws
of physics and mathematics as everyone else. If our contingency
technologies are not adequate at the time AI is created, put hard resource
constraints on the AI until contingencies are in place. A constrained AI
is still extraordinarily useful, even if operating below its potential.
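
As a rough sketch of what a hard resource constraint could mean at the
operating-system level (purely illustrative; the "./ai_process" command and the
specific limits below are my own assumptions, not anyone's actual deployment
plan), one can cap the CPU time and address space of an untrusted child
process in a few lines of Python:

    # Cap CPU time and address space for an untrusted child process.
    # The kernel enforces the limits on the child and anything it execs.
    import resource
    import subprocess

    CPU_SECONDS = 60 * 60          # assumed CPU-time budget: one hour
    MEMORY_BYTES = 2 * 1024 ** 3   # assumed address-space cap: 2 GiB

    def apply_limits():
        # Runs in the child between fork() and exec().
        resource.setrlimit(resource.RLIMIT_CPU, (CPU_SECONDS, CPU_SECONDS))
        resource.setrlimit(resource.RLIMIT_AS, (MEMORY_BYTES, MEMORY_BYTES))

    proc = subprocess.Popen(["./ai_process"], preexec_fn=apply_limits)
    proc.wait()

The point of the illustration is only that the constraining mechanism lives
outside the constrained process, where it cannot be renegotiated from within.
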
The very fact that a demonstrable AI technology exists (coupled with the
early technology development capabilities of said AI) should allow one to
directly access and/or leverage enough financial resources to start
working on a comprehensive program of getting people off of our orbiting
rock and hopefully outside of our local neighborhood. I would prefer to
be observing from a very safe distance before unleashing an unconstrained
AI upon the planet.

-James Rogers
jamesr@best.com