"Robert J. Bradbury" wrote:
>
> Even I, as much as I like nanotech, found myself being
> a little worried about the problems it seemed to raise.
> It points out, however, a potential flaw in the friendly
> AI scenario. Even if humans become this nice benevolent
> race, cared for by this friendly SysOp who makes sure that
> nobody within the system can do harmful things to the system,
> that doesn't prevent someone from outside the system from
> inventing the mindless entity that has no moral system
> and whose sole purpose is to "replicate".
There's a difference between arguing perfection, which is probably but not
certainly impossible, and arguing non-suboptimality. Do you have some
reason for believing that a Friendly SI is suboptimal for dealing with
invading replicators? I don't see why it would be any better or worse than
any other SI that happened to have that subgoal. Friendly does not mean
cute.
I do think that (single SI) >= (system of SIs) when it comes to dealing
with invasions and grey goo, since a single SI can easily emulate a system
of SIs, but perhaps not the reverse. I don't see that Friendliness or
unFriendliness would affect tactical effectiveness either way.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2b30 : Fri Oct 12 2001 - 14:40:27 MDT