Re: Preventing AI Breakout [was Genetics, nanotechnology, and programming]

Anders Sandberg (asa@nada.kth.se)
25 Oct 1999 13:52:05 +0200

"Eliezer S. Yudkowsky" <sentience@pobox.com> writes:

> > (a) whether an AI can discover it is running in a simulation?
>
> Almost certainly. If it really is smarter-than-human - say, twice as
> smart as I am - then just the fact that it's running in a Turing
> formalism should be enough for it to deduce that it's in a simulation.

So if the Church-Turing thesis holds for the physical world, does it follow that the world is a simulation?

If the AI runs on a Game of Life automaton, why should it believe the world is embedded in another world? The simplest consistent explanation involves just the automaton.
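To make that concrete, here is a minimal sketch (in Python, my own illustration rather than anything from the thread) of the Life update rule. The point is that the rule is entirely self-contained: nothing in it refers to, or requires, an embedding world.

    from collections import Counter

    def life_step(grid):
        """Advance a set of live cells (as (x, y) tuples) by one generation."""
        # Count live neighbours of every cell adjacent to a live cell.
        counts = Counter(
            (x + dx, y + dy)
            for (x, y) in grid
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # A cell is alive next generation if it has exactly 3 live neighbours,
        # or if it has 2 and was already alive (standard Conway rules).
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in grid)}

    # Example: a "blinker" oscillates between horizontal and vertical.
    blinker = {(0, 1), (1, 1), (2, 1)}
    print(life_step(blinker))  # {(1, 0), (1, 1), (1, 2)}

Everything an observer inside the grid could ever measure is generated by that one local rule; positing a surrounding universe adds nothing to its explanatory power.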

> You really can't outwit something that's smarter than you are, no matter
> how hard you try.

Ever tried to rear children? Outwitting goes both ways.

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y