Nick Bostrom wrote:
>
> I agree that this can be a greater problem when we are talking about
> ~human-level AIs. For such entities, however, there should be more
> standard safety-measures that would be adequate (confinement, or
> having a group of people closely monitor their actions). The
> potential danger would only arise with seriously superhuman
> malicious intellects.
The goals still have to last into, and beyond, the seriously superhuman stage, then. Which, if you use the "arbitrary" design mindset, they won't. The AIs will keep changing on you and you'll keep shutting them down.
Also, I don't think we're smart enough to understand what a middle-stage <H (but still mostly self-designed) seed AI is doing. Remember, they have a codic cortex and we don't; it would be almost exactly like a blind man trying to understand a painting, pixel by pixel.
> That depends. If selection pressures lead to the evolution of AIs
What selection pressures? Who'd be dumb enough to create an AI wanting
to survive and reproduce, and, above all, *compete* with its children?
> with selfish values that are indifferent to human welfare, and the
> AIs as a result go about annihilating the human species and stealing
> our resources, then I would say emphatically NO, we have a right to
> expect more.
Absolutely. I do not intend to let humanity be wiped out by a bunch of
selfish, badly programmed <Hs; there would still be the probability that
--
sentience@pobox.com    Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.