Re: AI: Relative difficulty

Anders Sandberg (nv91-asa@nada.kth.se)
Wed, 22 Jan 1997 18:29:59 +0100 (MET)


On Tue, 21 Jan 1997, Eliezer Yudkowsky wrote:

> [Anders Sandberg:]
> > How would you build a world where 1+1=3? It would be rather inconsistent,
> > and thus unlikely to work (although the experiment could work just as
> > well with a child in an inconsistent VR, of course). A better example
> > would be a world with hyperbolic geometry or objects that are not persistent.
>
> Of course it would be inconsistent! One and one *don't* *make*
> *three*! Or at least WE think so.

"How would this sentence look if Pi wasn't 3?" (printed with hexagonal 'o's)
:-)

> This would actually be best by an
> iterative process [deleted for brevity]

Yes, although it seems somewhat pointless to me: 1+1=3 isn't a worthy
goal, just an arbitrary one. A more interesting series of worlds would
be ones of higher and higher dimension.

> The question is: Can [1+1=3] *ever* appear *completely* consistent,
> cognitively, thanks to adaptation by the child... or will our built-in
> processes of visualization interfere?

Others have answered this, but you raise an interesting question: how
far can we adapt our basic mind-template? Are there narrow limits to what
human brains can think about or perceive in certain "conceptual directions"?
Let's ignore obvious limits of complexity (like concepts that require you
to hold 100 sub-concepts in mind at the same time or structures that are
larger than human memory storage).

Of course, if you are a Strong AI adherent like me, then you have to admit
that Gödel's theorem in principle applies to the human mind, and that we
have our own Gödel strings...
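(To make that concrete, a minimal sketch of the standard construction,
not anything specific to this thread: for any consistent formal system F
strong enough to encode arithmetic, the diagonal lemma yields a sentence
G_F with

    F \vdash G_F \leftrightarrow \neg\mathrm{Prov}_F(\ulcorner G_F \urcorner)

i.e. G_F asserts its own unprovability in F, so F can neither prove nor
refute it. If a human mind really is equivalent to some such F, the same
construction gives it a Gödel string it cannot settle.)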

-----------------------------------------------------------------------
Anders Sandberg Towards Ascension!
nv91-asa@nada.kth.se http://www.nada.kth.se/~nv91-asa/main.html
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y