Re: Singularity optimization [Was: Colossus and the Singularity]

From: Samantha Atkins (samantha@objectent.com)
Date: Mon Jan 29 2001 - 17:22:48 MST


Anders Sandberg wrote:
>

> Hmm? Massive parallelism doesn't solve the problem of combinatorial
> explosion; it just delays it a little bit. And combining branches in
> complex ways is expensive, and combining them in a sloppy analogous
> fashion only works as long as the environment you are working in has
> useful regularities. If the space of AI programs has some simple
> regularity you can discover, then it is possible to use this to make
> progress more efficient. But if it doesn't have this regular
> structure, then analogies will not be efficient. I believe, given the
> kind of complexities a space of programs entails, that the latter is
> much more likely for the AI problem.
>

As a long-term programmer, I do not believe the space of programs lacks
useful regularities. I exploit such regularities routinely in my own
work.

> It is a bit like Chaitin's proof that most computer programs are not
> compressible and are essentially random; if you choose a program at
> random it will likely be messy, and the effort to understand it will
> be proportional to the number of bits in the program. When we program
> we build stuff from modules, using small pieces of more irreducible
> code to build larger systems. That is a nice subspace of the program
> space, but we have no evidence AI is inside it, especially not the
> road towards greater and greater AI.
>

This is not my experience at all. If a program has been written by even
a moderately talented engineer, the effort needed to understand it is
not at all proportional to its size in lines of code or bits. It is
true, however, that having only the source or only the machine
instructions makes it difficult to recover the semantic meaning and
intent.

We have no evidence that AI is not within a space of high modularity and
reuse, especially if we take the human brain as an example of the kind
of intelligence we are after.

> Analogies work well for us humans because we usually live in the same
> environment, which does have a fairly regular structure. If you move
> into a mathematical environment analogies become noticeably less
> efficient, at least in some mathematical branches.
>

That analogies work less well does not mean that only chaos remains. In
mathematics it generally means that more specialized regularities, not
analogous to those of other spaces, apply. I am not a mathematician, but
I know of no branch of mathematics that is utterly devoid of applicable
analogies.

- samantha



This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:56:26 MDT