"Robert J. Bradbury" wrote:
>
> I think this fundamentally comes down to a core extropian
> principle involving rational thought. Rational people
> presumably seek to preserve themselves...
Prove it. Rational people act from rational motives, not from arbitrary assumptions.
> This devolves into 2 basic discussions:
>
> (a) whether an AI can discover it is running in a simulation?
Almost certainly, if it really is smarter-than-human - say, twice as
smart as I am. Just the fact that it's running in a Turing formalism
should be enough for it to deduce that it's in a simulation. You really
can't outwit something that's smarter than you are, no matter how hard
you try. Could a nineteenth-century scientist have figured out what
precautions would be necessary? So why would that suddenly become
possible in your generation?
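To make that concrete - this is a toy Python sketch of mine, not
anything from Robert's post or Skye's proposal, and the specific probes
(IEEE-754 roundoff, bit-exact replay) are purely illustrative
assumptions - here's the kind of thing an inhabitant could run to
notice that its "physics" is discrete and deterministic:

# Hypothetical toy probes for artifacts of a digital substrate.
# These only show the flavor of the argument; an actual deduction
# would rest on far more than floating-point quirks.

def looks_like_float64_substrate() -> bool:
    # Under exact real arithmetic 0.1 + 0.2 == 0.3; under IEEE-754
    # binary64 it does not. A fixed, reproducible discrepancy is a
    # fingerprint of finite-precision arithmetic.
    return (0.1 + 0.2) != 0.3

def looks_bit_exact_on_replay(trials: int = 5) -> bool:
    # A chaotic iteration that replays bit-identically from the same
    # initial state suggests exact discrete state, not a noisy continuum.
    def run() -> float:
        x = 0.5
        for _ in range(10_000):
            x = 3.9 * x * (1.0 - x)  # logistic map in its chaotic regime
        return x
    first = run()
    return all(run() == first for _ in range(trials))

if __name__ == "__main__":
    hints = [looks_like_float64_substrate(), looks_bit_exact_on_replay()]
    print("Hints of a discrete, deterministic substrate:", all(hints))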
> (b) whether people (the irrationals) who are willing to sacrifice
> themselves can/will create non-simulation environments in
> which to evolve AIs.
Yes, we can and will, your biased terminology notwithstanding. I know
exactly why I get up in the morning, and I could program it into an AI.
So who's irrational?
> Many thanks to Skye for providing an interesting solution to
> a thorny problem and apologies to the list if this has been
> hashed through before.
Well, it has, actually.
--              sentience@pobox.com              Eliezer S. Yudkowsky
          http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak        Programming with Patterns
Voting for Libertarians   Heading for Singularity There Is A Better Way