Re: Preventing AI Breakout [was Genetics, nanotechnology, and ,

Eliezer S. Yudkowsky
Mon, 25 Oct 1999 09:25:36 -0500

Anders Sandberg wrote:
> "Eliezer S. Yudkowsky" <> writes:
> > > (a) whether an AI can discover it is running in a simulation?
> >
> > Almost certainly. If it really is smarter-than-human - say, twice as
> > smart as I am - then just the fact that it's running in a Turing
> > formalism should be enough for it to deduce that it's in a simulation.
> So if the Church-Turing thesis holds for the physical world, it is a
> simulation?

In one sense, yes. But (1) if the world I saw was Turing-computable, I probably wouldn't see anything wrong with it - *I'm* not that smart. Or perhaps I underestimate myself... but nonetheless, the only way I learned how to reason about the subject was by trying to explain phenomena that aren't Turing-computable, i.e. qualia. And (2) if *this* world is Turing-computable, then obviously all my reasoning is wrong and I don't know a damn thing about the subject.

> If the AI runs on a Game of Life automaton, why should it believe the
> world is embedded in another world? The simplest consistent
> explanation involves just the automaton.

But the explanation isn't complete. Where did the automaton come from?
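(For anyone who hasn't seen it: the Game of Life Anders mentions is a two-state cellular automaton whose entire "physics" fits in a few lines - which is what makes it such a clean example of an embedded world whose rules say nothing about where the grid came from. A minimal sketch in Python; the "blinker" pattern at the end is just an illustration, not anything from this thread:)

```python
# Conway's Game of Life: each cell lives or dies based solely on its
# eight neighbors. That rule is the complete physics of the world.
from itertools import product

def neighbors(cell):
    """The eight cells surrounding a given (x, y) cell."""
    x, y = cell
    return {(x + dx, y + dy)
            for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0)}

def step(live):
    """Advance a set of live cells by one generation."""
    # Count live neighbors for every cell adjacent to some live cell.
    counts = {}
    for cell in live:
        for n in neighbors(cell):
            counts[n] = counts.get(n, 0) + 1
    # A cell is alive next generation if it has exactly 3 live
    # neighbors (birth), or 2 and is already alive (survival).
    return {c for c, k in counts.items()
            if k == 3 or (k == 2 and c in live)}

# A "blinker" - three cells in a row - oscillates with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))                       # the vertical phase
print(step(step(blinker)) == blinker)      # True: back where it started
```

(Nothing inside the grid refers to the Python interpreter running it, which is Anders's point; the question above is whether that absence is itself evidence.)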

> > You really can't outwit something that's smarter than you are, no matter
> > how hard you try.
> Ever tried to rear children? Outwitting goes both ways.

Someone tried to rear me. Perhaps I flatter myself, but my experience would tend to indicate that it only goes one way.

           Eliezer S. Yudkowsky
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way