Re: The mathematics of effective perfection

From: hal@finney.org
Date: Wed Nov 29 2000 - 10:49:19 MST


John Clark writes, quoting Eliezer:
> >Nonetheless, if a transhuman can have "effectively perfect" self-knowledge
>
> I don't see how. Tomorrow you might find a proof of the Goldbach Conjecture
> and establish that it's true, or you might find a counterexample and establish
> that it's false, or it might be Turing unprovable: true, so you'll never find a
> counterexample to show it's wrong, but with no finite proof in existence, so
> you'll never find a way to show it's correct either. You might not find a proof
> or a counterexample, not in a year, not in a million years, not in 10^9^9^9
> years, not ever. You won't even know your task is hopeless, so you might just
> keep plugging away at the problem for eternity and make absolutely zero
> progress. We don't know even approximately how this might turn out, because we
> can't assign meaningful probabilities to the various possible outcomes; we
> don't know and can't know what, if anything, our minds will come up with. I
> just don't know what I'm going to do tomorrow because I don't understand
> myself very well, and the same would be true if I were a chimp or a human or a
> Transhuman.
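
As a concrete illustration of the predicament John describes, here is a minimal
Python sketch (mine, purely illustrative): the conjecture can be checked
mechanically for one even number after another, but no finite amount of such
checking ever adds up to a proof.

    # Check the Goldbach Conjecture instance by instance: every even number
    # >= 4 should be the sum of two primes. Confirming instances proves
    # nothing in general; a missing witness would be a counterexample.

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def goldbach_witness(n):
        """Return a pair of primes summing to the even number n, or None."""
        for p in range(2, n // 2 + 1):
            if is_prime(p) and is_prime(n - p):
                return (p, n - p)
        return None

    for n in range(4, 101, 2):
        assert goldbach_witness(n) is not None
    print("Goldbach holds for every even n up to 100 -- which proves nothing.")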

So in your view, for an entity to have perfect self-knowledge, it must be able
to answer any question about its future behavior, even a question that would
require infinite computing power to answer?

I don't think this is a reasonable requirement. We wouldn't expect an entity
to be able to answer questions on other topics that require infinite
computation, would we? I don't see why you view this as an imperfection.
An entity could have perfect knowledge of the rules of the Game of Life,
say, without being able to predict whether an arbitrary pattern will ever
stabilize.
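
To make the contrast concrete, here is another minimal, purely illustrative
Python sketch: the complete rules of Life fit in a few lines, so "perfect
knowledge of the rules" is cheap to have, yet no general procedure decides
whether an arbitrary pattern eventually stabilizes; all the code can do is
simulate, one generation at a time.

    # Conway's Game of Life: one generation step on a set of live (x, y) cells.
    # Knowing these rules exactly does not let you predict the long-run fate
    # of every starting pattern; that question is undecidable in general.

    from itertools import product

    def step(live):
        counts = {}
        for (x, y) in live:
            for dx, dy in product((-1, 0, 1), repeat=2):
                if (dx, dy) != (0, 0):
                    cell = (x + dx, y + dy)
                    counts[cell] = counts.get(cell, 0) + 1
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # A blinker oscillates with period 2; we can watch that happen by
    # simulating, but simulation is all the rules themselves give us.
    blinker = {(0, 0), (1, 0), (2, 0)}
    print(step(step(blinker)) == blinker)  # True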

Maybe it's just semantics, and for you, perfect knowledge means
omega-point levels of omniscience?

To me, the interesting question is whether an entity can have perfect
self-knowledge in the sense that it has an accurate and complete model of its
own mind: it can answer any question about the present state of its mind
accurately (modulo trickery along the lines of "John Clark cannot consistently
assert this sentence," which sheds little light on the fundamental questions,
IMO). We don't have perfect (or probably even good) knowledge of ourselves in
this sense. Will future intelligences be similarly restricted? I don't see why
they should be.

Hal


