Re: Minds, Machines, and the Multiverse

From: Michael S. Lorrey (retroman@turbont.net)
Date: Wed May 03 2000 - 21:51:46 MDT


Matt Gingell wrote:
>
> On the one hand, you'd think there might be situations where a system
> emulating its own substrate could actually yield a speed increase. For
> instance, take a chip running an instruction-level simulation of
> itself. To elaborate on your example, let's say multiplication on this
> chip is very inefficient: it takes longer to multiply 5 by 6 than it
> does to do 5 adds. If I simulate multiplication by doing additions,
> then I'd get a speed increase.
>
> But this is a one time optimization: If the emulator runs another
> layer of simulation, I won't get the same increase. And I only get a
> speed up on programs using the multiplication instruction - so I don't
> have a faster chip (that is, a chip that runs every program faster
> than the substrate); I only have a chip that runs a subset of programs
> faster. You always have to bottom out somewhere.
>
> There's an analogy here to data compression: There exists no algorithm
> which can be guaranteed to shave a bit off any arbitrary bit string. If
> there were, applying the algorithm enough times would compress
> anything down to zero. All you can do is map frequently occurring
> patterns down to short encodings at the expense of making less
> frequent patterns more expensive to express.

The problem with compression, and the reason simulations cannot
accurately reflect a real reality, is that past a certain point
compression loses detail: it becomes lossy. Granted, some compression
is simply a more efficient way of organizing information, or, as in
the example above, uses one fast method to simulate another method
that actually runs slower. But once you've optimized the organization
and the methodology to their most concise form, anything beyond that
has to destroy actual data, as JPEG images do.
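
A quick sketch of where that line falls, assuming zlib as the lossless
reorganizer and a crude 4-bit quantizer as a stand-in for what JPEG
does:

    import zlib

    data = bytes(range(256)) * 4

    # Lossless reorganization: every bit is recoverable, but the
    # achievable ratio is bounded by the information in the data.
    packed = zlib.compress(data)
    assert zlib.decompress(packed) == data    # exact round trip

    # Lossy compression: quantizing throws detail away, and no
    # decompressor can bring it back.
    quantized = bytes((b // 16) * 16 for b in data)  # keep high 4 bits
    assert quantized != data                  # detail is gone for good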


