Re: SITE: Coding a Transhuman AI 2.0a

From: Matt Gingell (mjg223@is7.nyu.edu)
Date: Mon May 22 2000 - 18:39:05 MDT


Dan, who at a genetic level is little different from Stalin, wrote:

> Matt Gingell, stinking Nazi [;)] wrote:
>
> > Well, take something like geometry: is it true that the interior
> > angles of a triangle always add up to 180, or is that just an
> > arbitrary decision we made because it happens to suit our particular
> > purposes?
>
> Arbitrary decisions! In non-Euclidean geometries, the angles of a
> triangle may NOT add up to 180. And HOW do we decide which geometry
> to use in a given situation? Those darn *goals* again! If only we
> could rid them from our thought; then we'd see the Forms at last.

You seem to think a model is right because it's useful. That's
backwards: A model is useful because it's right. Concepts reflect
regularities in the world - their accuracy is a function of their
correlation to fact. That our aims are well served by an accurate
conception of the world is true but beside the essential point.

Certainly there's an issue of perspective - if I lived near a black
hole or moved around near the speed of light, my model of reality would
be different. I would never claim otherwise - I'd only expect a
machine to develop concepts similar to my own if its experience were
also similar. If I'm a picometer tall and you're a Jupiter brain,
neither of our world views is wrong - we are just each the other's
special case. The same holds for hyperbolic vs classical geometries -
neither is arbitrary: they're just describing different things. (Or rather
the first is a generalization of the second.)
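
To put that parenthetical concretely (this is just the standard
constant-curvature fact, not something you claimed): for a geodesic
triangle with area A on a surface of constant curvature K,

    \alpha + \beta + \gamma = \pi + K A

Hyperbolic geometry is the K < 0 case, so the angles sum to less than
pi; Euclid's pi (180 degrees) is exactly the K = 0 case, recovered in
the limit of vanishing curvature.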

Goals do affect attention. (While I think they're essentially arbitrary
and uninteresting, I admit we have them.) Goals affect what you choose
to spend your resources contemplating - there being more things in
heaven and earth than are dreamable at our speed C. A learning machine
without a goal is like a Zen master, sitting still, hallucinating,
sucking up fact and building purposeless towers of abstraction. The
mind is a means and not an end. Our drive to perpetuate ourselves and
our species is an artifact of our evolutionary history; our will to
survive is vestigial and as random as an appendix. Intelligence is an
engine perched on an animal - the forebrain being a survival subsystem
for an idiot limbic blob. Plug in some other root set of desires and
it'll as usefully tell you how to castrate yourself as how to spread
your genes. It'll identify cliffs you can jump off as faithfully as it
does wolves to run away from.

If motivation is made up but reality isn't, then it seems better to
describe the mind as something that parses the real world than as a hack
that keeps you alive.

> > Consider this message: you're looking at a bunch of phosphors lit up
> > on a screen, does it have any information content outside our
> > idiosyncratic conventions?
>
> Language is the *quintessential* example of a set of idiosyncratic
> conventions. If this weren't true, then our language would have to
> be just as it is, necessarily. I don't think you'd want to make that
> strong a claim.

I wouldn't claim that the semantics - the ideas I'm writing - would be
understandable, even in principle, by anyone but an English
speaker. Yet there is still information content - it isn't random
noise (even if it occasionally sounds a bit like it). The structure -
that is, the characters, words, simple syntax, etc. - is
extractable. Whether anybody would bother to investigate it is one
question, but that the structure is real and determinable, and is
independent of goals or survivability or whatnot, is unambiguously
true.
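
A crude way to see that (a toy sketch of my own, nothing rigorous):
even without knowing English, simple statistics pull structure out of
the character stream. Something like the following, in Python,
comparing the entropy of an English sentence to that of uniform noise:

    import math
    import random
    import string

    def char_entropy(s):
        # Shannon entropy, in bits per character, of s's character counts.
        counts = {}
        for c in s:
            counts[c] = counts.get(c, 0) + 1
        n = len(s)
        return -sum((k / n) * math.log2(k / n) for k in counts.values())

    english = ("you are looking at a bunch of phosphors lit up on a "
               "screen does it have any information content outside "
               "our idiosyncratic conventions")
    noise = "".join(random.choice(string.ascii_lowercase + " ")
                    for _ in range(len(english)))

    print("english:", round(char_entropy(english), 2), "bits/char")
    print("noise:  ", round(char_entropy(noise), 2), "bits/char")

The English text comes out measurably more regular than the noise -
a few characters hog the distribution, the same short words keep
recurring - and none of that depends on anyone decoding the semantics.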

I've done a little bit of work on computer models of language
acquisition - the problem for a child is turning examples of speech
into general rules, inferring grammars from instances. It's a bit like
trying to turn object code back into source, figuring out structures
like for-loops from an untagged stream of machine instructions. Not
entirely unlike trying to unscramble an egg... That we are able to do
it at all, even to the controversial extent that language really is
learned, amazes me.
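
To give a feel for how far a naive inducer falls short (a deliberate
caricature of my own, not a claim about how children do it), here's
what you get by just memorizing which word may follow which in the
examples:

    from collections import defaultdict

    def induce(examples):
        # Record every word-to-word transition seen in the examples.
        allowed = defaultdict(set)
        for sentence in examples:
            words = sentence.split()
            for a, b in zip(words, words[1:]):
                allowed[a].add(b)
        return allowed

    def accepts(allowed, sentence):
        # Accept only if every adjacent pair was observed in training.
        words = sentence.split()
        return all(b in allowed[a] for a, b in zip(words, words[1:]))

    grammar = induce(["the dog runs", "the cat runs", "the dog sleeps"])
    print(accepts(grammar, "the dog runs"))    # True - seen verbatim
    print(accepts(grammar, "the cat sleeps"))  # False - never observed

A child generalizes to "the cat sleeps" effortlessly; the memorizer
can't. Closing that gap without hard-wiring the answer is the whole
problem.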

Out of curiosity, how would you explain Kasparov's ability to play a
decent game of chess against a computer analyzing 200 million
positions a second? Certainly not a regular occurrence on the plains of
Africa - what more general facility does it demonstrate?

> ...

> > If we are to understand what intelligence is, we must construct a
> > definition which is not particular to a design process, a set of
> > goals, or a set of sensors and limbs. Implicit in this statement is
> > the notion that the word 'intelligent' actually means something, that
> > there's a meaningful line somewhere between intelligence and clever
> algorithms which fake universality by virtue of sheer vastness.
>
> I reject the notion that arguing against you would require me to
> conclude that the word "intelligent" is meaningless. On the contrary,
> I argue that the word "intelligent" does have meaning *to us*, in the
> language *we* speak, *today* at the beginning of the 21st century.
> Your assertion requires me to believe that this word somehow has
> meaning beyond our language, beyond us. It requires "intelligence" to
> be something transcendent, rather than simply meaningful.

This is a very anthropomorphic view - I'm looking for a definition
that transcends humans and evolution, the essential properties shared
by all possible intelligences. You seem to be saying there isn't such
a thing - or it's the null set.

> It is in coming to terms with the fact that intelligence is not
> transcendent that AI will finally get off the ground. We'll finally
> start coding in the stuff that we'd hoped the AI would find simply by
> virtue of it being true. ("...and, since it's preloaded with the
> general truth finding algorithm, it'll SURELY find it eventually,
> given enough computing power, time, and most of all GRANT MONEY...").

You can go ahead and start coding, bang out behaviors for all the
situations you want - write vision systems, theorem provers, cunningly
indexable databases - but without an understanding of the principles
at work all you'll end up with is an undebuggable heap of brain-damaged
cruft.

(There should probably be an IMHO in there somewhere... Pinch me, I'm
pontificating.)

> ...

> > > Epistemologically speaking, how would we know if we had stumbled upon
> > > the general algorithm, or whether we were just pursuing our own
> > > purposes again? For that matter, why would we care? Why not call our
> > > own beliefs Right out of elegance and get on with Coding a Transhuman
> > > AI?
> >
> > We couldn't know, but if we got good results then we'd be pretty sure
> > we were at least close. Whether you care depends on your motivation:
> > I'm interested in intelligence because I like general solutions to
> > problems more than I like special case ones, and you don't get a more
> > general solution than AI. I futz around with this stuff out of
> > intellectual curiosity, if the mind turns out to be necessarily ugly
> > I'll go do something else. I don't really care about saving the world
> > from grey goo or the future of the human race or whatever.
>
> That's a little cavalier, considering that you're one of us, isn't it?
> ;) Sure, sure, let the rest of us do the HARD work... ;)

Well, we acknowledge that trenches have to be dug, but I don't suppose
either of us would find doing it for a living very rewarding. Better
dead than ugly, I always say.
 
> Anyway. I think you're forgetting that this is the general truth
> finding algorithm which WE supposedly use in finding truth. So how
> would we know if we'd found it? We'd "check" to see if our results
> were good, and if so, we're close, you say. But how would we "check"
> on this? Well, we'd run our truth finding algorithm again, of course,
> since that's all we've got in the search for truth. According to it,
> our results are "good." Have we found the general truth finding
> algorithm? Well, the general truth finding algorithm seems to say so!

The answer to that question is 42. I could give you a reasonable,
logical argument that logic and reason are a good way of looking at
the world, but that would be circular. (Though if we're not assuming
logic, maybe there's nothing wrong with a circular argument...)

There's a quote that comes to mind, Jack Handey or someone, along the
lines of:

"I used to think my brain was the most important part of my body. But
then I was like, hey, consider the source."

-matt


