From: Dan Fabulich (dfabulich@warpmail.net)
Date: Thu Jun 05 2003 - 14:56:28 MDT
Harvey Newstrom wrote:
> I am equating technological advances and virtual reality toys. We can't
> simulate entire universes in our current existence. But the Simulators
> obviously can. Their level of virtual reality, computing power, and
> technological advancement must be immeasurably superior to ours.
> Whatever you find exciting about our escalating level of technology,
> they've been there, done that, and moved on. That's what I call
> exciting! On any continuum between pre-singularity and
> post-singularity, I would assume that the earlier world is more boring
> than the later world!
Even if I concede to you that the world-as-it-is is inherently less
exciting to posthumanity, I insist that the question is whether our world
is exciting *enough*. Everything I know about the Mesozoic era tells me
that most everything about it was incredibly boring, but there are
obviously some people who spend a lot of time trying to simulate it... it
may not be exciting to me, or even "exciting" in general, but it's
exciting *enough* for some people.
> > As for the argument about suffering, I think this argument is utterly
> > misplaced. I don't need a posthuman theodicy to think that there's a
> > reasonable chance that an interested posthuman with arbitrarily large
> > resources might rehearse the brief history of life on Earth in ver mind,
> > perhaps in great detail, and that's all it takes to get the sim argument
> > off the ground.
>
> But to give us the ability to feel pain, to make us conscious of our
> pain, and then to subject us all to a world of suffering
> needlessly.... That's hard to imagine. I don't think any of us would
> do that. If we are simulations of the Creators' ancestors, then they
> are our descendants in the future. Can we extrapolate a scenario where
> we will become transhuman, immortal, and ultimately powerful, ...and
> then recreate a world of suffering that we have ourselves escaped?
> This does not seem very likely to me.
I think you're reading too much into what it takes to make a "simulation."
Remember, we're talking about (to us) microscopic fractions of the power
of a posthuman brain. It seems to me that you're imagining a posthuman
civilization thinking about doing a simulation, deciding to do it,
architecting it, planning it in detail, fixing bugs in the plan,
implementing the plan, and then watching the results, perhaps in real
time.
But when you're talking about a millionth of the power of, say, a
Matrioshka brain, I think this picture is misleading. For example, I
invite you to think back to a time when you did or said something that was
really embarrassing. Remember how awful that felt? OK, now imagine how
things might have gone a bit differently: if you had caught yourself a few
seconds beforehand, or if you had done something praiseworthy at that
point instead. Much better, yes?
OK, now note: did you architect/plan your recollection? Did you spend
much time fixing bugs in your plan? Did you watch the results, or did you
even consider the results as part of a separate simulation mechanism?
I'd argue that, if you're like most people, you simply rehearsed the
memory in your mind, changing bits and employing your imagination as need
be, without separately *planning* a complex simulation. You *could* plan
something more elaborate, but it's so trivially easy to just "throw
something together" mentally that you probably didn't even bother.
Indeed, if you're like me, you often find yourself mentally rehearsing
emotional events (both happy and sad) by *accident*: that's how easy they
are.
Now, there's not much that I can show on the basis of this introspective
"experiment." Certainly I'd be a fool to argue that posthumans would
simulate Earth's natural history accidentally, or even that they probably
would. But the point is that OUR simulations come so quickly, easily, and
naturally to us that, given the trivial expenditure of effort required of
a posthuman, it's not inconceivable that one might find vimself doing so
*accidentally*. THAT's how little effort/complexity
we're talking about here.
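(To put rough numbers on "a millionth of the power": borrowing the usual
ballpark figures — Bostrom puts simulating the entire mental history of
humankind at somewhere around 10^33 to 10^36 operations total, and a
Matrioshka brain is commonly credited with roughly 10^42 operations per
second — even the pessimistic end works out to about one second of
compute. Here's the back-of-envelope sketch in Python, with every figure
an assumption rather than a measurement:

    # Back-of-envelope, all figures assumed (not measured):
    #   ~1e42 ops/sec for a full Matrioshka brain (Bradbury-style estimate)
    #   ~1e36 ops total to simulate all human mental history
    #         (Bostrom's high-end figure)
    MATRIOSHKA_OPS_PER_SEC = 1e42
    FRACTION_USED = 1e-6            # "a millionth of the power"
    HISTORY_OPS = 1e36

    budget = MATRIOSHKA_OPS_PER_SEC * FRACTION_USED   # 1e36 ops/sec
    print(HISTORY_OPS / budget, "seconds")            # -> 1.0 seconds

One second, on a millionth of the machine, using the most expensive
estimate. That's the scale of "bother" in question.)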
You ask: "why would they bother?" as if it would be difficult or costly.
Similarly, you suggest that it would be morally wrong to imagine/rehearse
our history in detail, on account of the suffering of the sims. I don't
see that a posthuman would so obviously forbear from these kinds of
detailed simulations merely on account of the suffering of the sims: when
it gets to be *that* easy, why bother stopping? Would you simulate an ant
farm that included ants that reacted to simulated pain?
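(If you doubt that "ants that react to simulated pain" can be that cheap,
here's a deliberately crude toy in Python — every detail invented purely
for illustration, and nothing remotely like a mind — whose ants register a
pain stimulus and flee it in about a dozen lines:

    import random

    class Ant:
        """Toy agent: a position on a ring and a trivial 'pain' reflex."""
        def __init__(self):
            self.x = random.randint(0, 9)
            self.in_pain = False

        def step(self, hazard_x):
            self.in_pain = (self.x == hazard_x)       # "stimulus"
            if self.in_pain:                          # flee the hazard
                self.x = (self.x + random.choice([-1, 1])) % 10
            else:                                     # otherwise wander
                self.x = (self.x + random.choice([-1, 0, 1])) % 10

    ants = [Ant() for _ in range(20)]
    for tick in range(100):
        for ant in ants:
            ant.step(hazard_x=random.randint(0, 9))

Nobody loses sleep over those ants, of course; the question is just where,
on the smooth slope from this toy up to a detailed rehearsal, the
forbearance is supposed to kick in.)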
> > This
> > argument amounts to saying that in most sim universes, movies like The
> > Matrix would be prevented by physics itself. A cool superhero movie would
> > be possible, but considering the brain-in-a-vat case wouldn't...? This
> > seems absurd. Why wouldn't they just let us consider the possibility
> > without giving us a practical use for this information?
>
> But this would break the simulation, because in the real world, people
> might ponder such things uselessly. They aren't in a simulation, so
> none of their attempts to detect the simulation, prove it, and break it
> would work. But in a simulation, these very same actions could cause
> problems. We might discover proof and redirect the history of the
> simulation as we demonstrate this to everybody. We might find a way to
> hack the simulation and start reprogramming it or try to communicate to
> the real world. If we couldn't get that far, we could all just commit
> suicide to end the simulation, or all act inappropriately to ruin the
> historical recreation, or overload the system with a denial of service
> attack by doing a lot of things that are much harder to calculate and
> simulate. These actions must be possible, or they must be prevented by
> security features in the simulation. If these security features are
> there, we should be able to detect them as our attempts fail. But it
> seems much simpler and more direct in design if we were merely prevented
> from having the idea of hacking our way out of the simulation in the
> first place.
No, no, no. All that's needed here is an incapacity to *detect* that
we're in a simulation. You don't need any more anti-hacking mechanisms
than that, if your obfuscation mechanism is in order. THAT'S by far the
most "direct" design, especially if human behavior is, itself, what you're
trying to simulate. Again, what the hell kind of crappy simulation are we
talking about here if movies like The Matrix could never get made, or even
thought about?
> I find that argument to be unconvincing. If we can't affect how the
> simulation is being run by getting the interest or attention of our
> Simulators, then it would be futile to alter our behavior. On the other
> hand, if they did tweak the simulation based on our actions, we would
> have examples of miracles or non-sequiturs all the time. Maybe they
> could carefully modify the game without it being noticed. But this
> would change the argument from a simulation to a directly-controlled
> scenario or game. I think such control would make our universe appear
> less arbitrary or more directed or would be detectable in some way.
Not that I think that we should take this seriously, but if the wrong kind
of theist heard this argument, I think they'd draw precisely the opposite
conclusion from it. ;)
Still, I agree that it's almost certainly futile to follow Hanson's advice
in a detailed long-term posthuman simulation.
> You need a professional hacker! They are great for figuring out how to
> detect, probe, evaluate, and ultimately manipulate remote unseen
> computing forces by using ordinary communications and interactions in
> such a way as to get unexpected results. If we are in a simulation, a
> hacker should be able to figure it out.
>
> (Hmmm... unless the hackers capable of doing this are programmed to
> disbelieve that we are in a simulation so they never try! Nah...!)
I think you put too much confidence in mortal hackers. ;) Don't get me
wrong, some of our hackers are quite clever, but posthumanity is expected
to be, you know, much cleverer.
Think of it this way: could we build a simulation so good that a dog
couldn't tell the difference? What about the brightest of dogs, who like
to dig holes under fences and find lost objects? Does it matter if we
switch to cats here? ;)
-Dan
-unless you love someone-
-nothing else makes any sense-
e.e. cummings