Re: SITE: Coding a Transhuman AI 2.0a

From: Matt Gingell (mjg223@is7.nyu.edu)
Date: Sun May 21 2000 - 10:46:49 MDT


Dan Fabulich wrote:

> Another kind of argument you might be making goes like this: some
> conceptual schemes are just Truly Right, independent of any purpose we
> might have for them today or ever. (This sounds more like the
> argument you're actually making.) I don't think making a statement
> like this matters. Certainly, there are some conceptual schemes which
> are just right for the purposes which we have now, and we largely have
> to assume that we're mostly Right about our beliefs and purposes. So
> we're going to have the beliefs we've got, whether we are in touch with
> the One Truth or whether they just suit our purposes.

Well, take something like geometry: is it true that the interior
angles of a triangle always add up to 180 degrees, or is that just an
arbitrary decision we made because it happens to suit our particular
purposes?

Consider this message: you're looking at a bunch of phosphors lit up
on a screen. Does it have any information content outside our
idiosyncratic conventions? Independent of our desire to communicate,
is any way of perceiving it as good as any other? I would say it has
structure: it is built from symbols drawn from a finite alphabet, and
that is true regardless of the perceiver's goal.

This is where the criterion of minimum description length comes in: if
I generalize a little bit and allow the pixels some fuzziness, then I
can re-represent this message at 7 bits per symbol - a much smaller
encoding than a bitmap. This is a nice evaluation function for a
hypothesis because it doesn't require feedback from the outside world.
With a big enough sample I can get further space savings by
classifying common strings into words, and then lists of words into
structured instances of a grammar.
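
To make that concrete, here's a toy comparison (the glyph size and the
word-lexicon scheme are made-up illustrations, not a real codec):

    import math

    def bitmap_bits(n_chars, glyph_w=8, glyph_h=12):
        # Cost of the message stored as raw black-and-white pixels
        # (the 8x12 glyph is an assumed size, just for scale).
        return n_chars * glyph_w * glyph_h

    def symbol_bits(n_chars, alphabet_size=128):
        # Cost once you notice the message is symbols drawn from a
        # finite alphabet: ceil(log2(128)) = 7 bits per symbol.
        return n_chars * math.ceil(math.log2(alphabet_size))

    def word_bits(msg):
        # One step further: a crude two-part code - a lexicon of the
        # distinct words, plus an index into it for each word used.
        words = msg.split()
        lexicon = sorted(set(words))
        index_bits = len(words) * math.ceil(math.log2(max(2, len(lexicon))))
        lexicon_bits = sum(7 * len(w) for w in lexicon)
        return lexicon_bits + index_bits

    msg = "the quick brown fox jumps over the lazy dog"
    print(bitmap_bits(len(msg)))   # 4128 bits as a bitmap
    print(symbol_bits(len(msg)))   # 301 bits at 7 bits/symbol
    print(word_bits(msg))          # 251 bits - the lexicon pays off

The lexicon only starts paying for itself once strings repeat, which
is the "big enough sample" point: each layer of structure you find
buys a shorter description.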

If we are to understand what intelligence is, we must construct a
definition which is not particular to a design process, a set of
goals, or a set of sensors and limbs. Implicit in this statement is
the notion that the word 'intelligent' actually means something, that
there's a meaningful line somewhere between intelligence and clever
algorithms which fake universality by virtue of sheer vastness.

> With that having been said, however, I'd say you're on the wrong track
> to think that a "pure mind" abstracted from any goals would share our
> beliefs. Better to say that we've got the right intentions, we've got
> the right purpose, and that any machine built to that purpose would
> also stumble across the same means of fulfilling it as we do.

Minds are imperfect and heuristic; they only approximate a truth which
is, as you point out, uncomputable. A machine might outdo us, as
Newton was outdone by Einstein, by finding a better model than
ours. But any intelligent machine would have a concept of, say,
integer, at least as a special case of something (perhaps vastly)
broader.

> Visions of elegance, simplicity, etc. are excellent. I share them
> with you. However, we got OUR beliefs about elegance through
> evolution; maybe that got us in touch with the one true Platonic
> Beauty; maybe it didn't. Either way, there's no reason to think that
> an AI will stumble across Ockham's Razor and find it right for its own
> purposes (which it may or may not think of as 'objective') unless it
> shares ours, (in which case, they'll have to be hand coded in, at
> least at first) because being right is no explanation for how a mind
> comes to know something. If you asked me "how did you know that she
> was a brunette?" and I replied "because I was right," I'd have missed
> your point completely, wouldn't I?

Ockham's razor would be one of the core principles of the general
purpose learning system I'm interested in - hand coded rather than
acquired, though not necessarily explicitly. Something has to be wired
in; obviously I don't think you can just take a blank Turing machine
tape and expect it to do something useful.
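
"Not necessarily explicitly" might cash out as something like a
Solomonoff-style prior: weight every hypothesis by two to the minus
its description length, and the razor falls out of the prior rather
than being a separate rule. A one-liner, just to illustrate:

    def occam_prior(code_length_bits):
        # Shorter theories start out exponentially more probable -
        # no explicit "prefer simplicity" rule is ever consulted.
        return 2.0 ** (-code_length_bits)

    # A 10-bit theory is favored 2^20-fold over a 30-bit one a priori.
    print(occam_prior(10) / occam_prior(30))  # 1048576.0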

> If it's an algorithm, it's incomplete. There will be some undecidable
> questions, which are decidable on another stronger algorithm. This
> may not bother you, but it should tell you that Truth is not an
> algorithm, that it cannot be reached, or even defined,
> algorithmically.

Sure - but this isn't a practical problem, any more than the
incompleteness of number theory makes math useless. It's a good
objection though. Saying intelligence is an algorithm was sloppy of
me; I should say rather that intelligence is that which approximates
some particular uncomputable function in a tractable way. This opens
up the possibility of multiple viable solutions.
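
For instance - my example, not a proposal for an architecture -
Kolmogorov complexity, the length of the shortest program that prints
a string, is uncomputable, but any real compressor computes a
tractable upper bound on it:

    import os
    import zlib

    def description_length(data):
        # Bits used by zlib at maximum effort: a computable upper bound
        # on the uncomputable "shortest description" of the data.
        return 8 * len(zlib.compress(data, 9))

    structured = b"abab" * 512  # 1024 bytes of obvious pattern
    noise = os.urandom(1024)    # 1024 bytes of incompressible junk

    print(description_length(structured))  # tiny - the pattern is found
    print(description_length(noise))       # ~8300 - nothing to exploit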

> Epistemologically speaking, how would we know if we had stumbled upon
> the general algorithm, or whether we were just pursuing our own
> purposes again? For that matter, why would we care? Why not call our
> own beliefs Right out of elegance and get on with Coding a Transhuman
> AI?

We couldn't know, but if we got good results then we'd be pretty sure
we were at least close. Whether you care depends on your motivation:
I'm interested in intelligence because I like general solutions to
problems more than I like special-case ones, and you don't get a more
general solution than AI. I futz around with this stuff out of
intellectual curiosity; if the mind turns out to be necessarily ugly,
I'll go do something else. I don't really care about saving the world
from grey goo or the future of the human race or whatever.

> > > Consider a search space in which you're trying to find local maximums.
> > > Now imagine trying to do it without any idea of the height of any
> > > point in the space. Now try throwing 10^100 ops at the project.
> > > Doesn't help, does it?
> >
> > You do have a criterion: The representation of a theory should be as
> > small as possible, and it should generalize as little as possible while
> > describing as many examples as possible. It's Occam's Razor. I'll read
> > up on seed AI if you agree to read up on unsupervised learning
> > (learning without feedback or tagged examples.)
>
> Ahem. And WHY do we have Ockham's Razor? I've got my story. What's
> yours? Surely not "because we're right about it"? That's missing the
> point.

We have it because it works, as a result of evolutionary feedback.
That doesn't mean it can't be captured in a simple way though - like
I said, that's one of the things I'd expect to hardwire.
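
It can also serve as exactly the missing "height" in the search-space
objection above: total description length scores every hypothesis
without any feedback from the outside world. A toy version, with
made-up hypothesis classes:

    import math

    def literal_cost(bits):
        # Null theory: no structure, spell the data out verbatim.
        return len(bits)

    def repeat_cost(bits, period):
        # Theory: the data is one block of `period` bits, repeated.
        if bits != (bits[:period] * len(bits))[:len(bits)]:
            return float("inf")  # doesn't describe the data at all
        return period + math.log2(period + 1)  # block + stating the period

    def best_theory(bits):
        # Occam's razor as the evaluation function: lowest total
        # description length wins - no tagged examples required.
        scores = {("literal", 0): literal_cost(bits)}
        for p in range(1, len(bits)):
            scores[("repeat", p)] = repeat_cost(bits, p)
        return min(scores, key=scores.get)

    print(best_theory("010101010101010101"))  # ('repeat', 2)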

> What's a raw pump? What's a pure camera? I think I don't see your point.

We can build a simple artificial heart that can serve as a reasonable
replacement for the real thing - it doesn't have to be constructed out
of self-replicating machines, it doesn't require millions of years of
design work, etc. Its important property is that it moves blood around;
its other features are incidental. My point is that the heart is a very
complicated instance of a fairly simple idea, and so might the brain be.

> I didn't intend to "call you a Nazi." One can share some beliefs with
> the Nazis without sharing all of them, and without suffering from any
> moral problems as a result. I share lots of beliefs with the Nazis,
> but I also disagree with them on a variety of substantial issues. I'm
> sure you do too. The interesting thing to note here is not that it's
> the fascists who said it, but that the distinction exists and is
> interesting.

Ah - so I sound a bit like Hitler, but that's OK since Hitler said
lots of reasonable things in addition to the not-quite-so-reasonable
things he's more commonly associated with? Fair enough. I'll spare you
my righteous indignation.

(I also like beer and sausages.)

-matt


