On Sat, 25 Nov 2000 email@example.com wrote:
> Eliezer writes:
> > Turing's diagonalization theorem can be expressed as follows: A mind
> > cannot perfectly describe itself because there is always, inescapably,
> > some part of the mind which at that moment is observing and is not itself
> > being observed. The observer is always smaller than the observed, and
> > thus cannot perfectly describe it.
> I'm not convinced of this argument. It sounds like it could equally well
> "prove" that self-reproducing automata are impossible, because they have
> to have a model of themselves, and the model is always smaller than
> the total automaton, hence there must be part that can't be modelled.
> When people first try to write a self-reproducing program they run into
> this, and in some cases they may conclude that it is impossible if
> they don't stumble onto the trick. Yet von Neumann showed, and biology
> confirms, that self-reproduction is entirely possible.
> I'd say the reason is that the model is isomorphic (via some mapping)
> to the system minus the model, and so adding the model does not increase
> the information in the system.
> Couldn't a mind's mental model share the same property?
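The "trick" alluded to above is essentially a quine. A minimal Python sketch (not from the original thread, just an illustration): the program's "model" of itself is a string used twice, once as data and once as code, so the description need not be larger than the thing described.

```python
# A self-reproducing program (quine). The string s is the program's
# "model" of itself: printed once as data (via %r, which emits its own
# repr) and once as the code that does the printing (%% becomes a
# literal %). Running this prints the program's source exactly.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

This is the isomorphism the poster describes: the model (the string) maps onto the system minus the model (the print statement), so including the model adds no information.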
well, in the case of self-reproducing automata, "there's no one
home." _our_ minds (external to the automaton) can distinguish that the
automaton is performing perfect self-replication, but it is far from
obvious that the automaton itself enjoys the same degree of
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:50:31 MDT