**Next message:** Emlyn: "Re: the "not to be born" right" **Previous message:** Jason Joel Thompson: "Re: Conscious Machines" **In reply to:** Eliezer S. Yudkowsky: "Re: The mathematics of effective perfection" **Next in thread:** John Clark: "Re: The mathematics of effective perfection" **Messages sorted by:** [ date ] [ thread ] [ subject ] [ author ]

[Non-member submission]

> > The door is open for a real mathematical analysis of
> > the lossiness and error-prone-ness of knowledge and inference in minds of
> > various sizes.
>
> I don't see how you can possibly do this. Given a specific physical
> system, you can estimate the error rate of the underlying processes and
> demonstrate that, e.g., there is a 10^-50 chance of a single error
> occurring in ten thousand years of operation. I don't see how you could
> possibly get specific numbers for software errors in a program as complex
> as Webmind, much less a human, much less a totally unspecified transhuman.

You would need a branch of mathematics that does not currently exist --
i.e., a real "complex systems science."

> There will never be a real mathematical theory of mind that is less
> complex than a mind itself; no useful part of the mind will ever be
> describable except by all the individual equations governing
> transistor-equivalents or neurons.

That is entirely ridiculous, in my view. I have a mathematical theory of
parts of Webmind, and I know mathematical theories that explain how parts of
the human mind work (so far not the most interesting parts) ... see the
Journal of Mathematical Psychology (at least 10% of the papers in there are
of real value ;)

You can say "There will never be a mathematical theory explaining the exact
dynamics of a particular mind that is less complex than that mind itself" --
but so what?

Even physics doesn't explain the exact trajectory of a pendulum. One is
seeking approximate explanations, or more technically "probably
approximately correct" (PAC) explanations...
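
To make the pendulum point concrete, here is a minimal illustrative Python sketch (mine, not from the original post; the function names are made up): the small-angle harmonic solution never matches the exact dynamics, but it is approximately right, which is the kind of explanation being advocated.

```python
import math

def pendulum_angle(theta0, t, g=9.81, L=1.0, dt=1e-4):
    """Numerically integrate the exact pendulum equation
    theta'' = -(g/L) * sin(theta), using semi-implicit Euler steps."""
    theta, omega = theta0, 0.0
    for _ in range(int(t / dt)):
        omega += -(g / L) * math.sin(theta) * dt
        theta += omega * dt
    return theta

def small_angle(theta0, t, g=9.81, L=1.0):
    """The approximate (linearized) solution: simple harmonic motion."""
    return theta0 * math.cos(math.sqrt(g / L) * t)

theta0 = 0.2  # radians -- a small release angle
exact = pendulum_angle(theta0, 1.0)
approx = small_angle(theta0, 1.0)
print(abs(exact - approx))  # small, but never exactly zero
```

The residual grows with the release angle and with time, so the approximation is "probably approximately correct" rather than exact.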

We have theories of mind that we use all the time in practical dealings with
other beings. Autistic people lack these; chimps have only simple ones. I
see no reason why a mathematical theory of mind can't bring together the
kind of practical heuristics we use in everyday mind-modeling with the
empirical data analysis we have in mathematical and statistical psychology.

I don't have such a theory right now -- though I've tried to sketch the
outlines in my books Chaotic Logic and From Complexity to Creativity -- but
you certainly haven't given me a reasoned argument as to why such a theory
can't be achieved.

As for getting precise numbers, why do you assume these are the only
possible outcome of a mathematical theory? Much of mathematics is
non-numerical, in fact -- algebra, topology, logic....

Certainly, one would expect to be able to prove things about the probability
that a system's error rate lies in a certain interval, much more so than to
be able to derive exact numerical values describing complex system states.
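
As a minimal sketch of what such an interval claim can look like (my illustration, not from the post; the function name and numbers are invented), the Hoeffding inequality bounds a system's true error rate from observed trials without any assumptions about its internal structure:

```python
import math

def error_rate_interval(failures, trials, delta=0.05):
    """Hoeffding bound: the true error rate lies within +/- eps of the
    observed rate with probability at least 1 - delta, regardless of
    how complex the system's internals are."""
    p_hat = failures / trials
    eps = math.sqrt(math.log(2.0 / delta) / (2.0 * trials))
    return max(0.0, p_hat - eps), min(1.0, p_hat + eps)

# 3 observed failures in 10,000 trials:
lo, hi = error_rate_interval(failures=3, trials=10_000)
print(f"error rate in [{lo:.5f}, {hi:.5f}] with 95% confidence")
```

The interval is a provable probabilistic statement, not an exact numerical description of the system's state -- exactly the distinction drawn above.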

> The wish for such a theory is simply physics envy.

It seems to me that, rather, your view of what a mathematical theory ~is~ is
overly constrained by what physics theories are.

> We live in a world where physical systems turn out to
> exhibit all sorts of interesting, mathematically describable high-level
> behaviors; but neither the biology of evolved organisms, nor the behavior
> of evolved minds, nor the computer programs we design, exhibit any such
> tendency. If you took the sum of all the numbers in a computer's RAM and
> plotted it over time, you might find that it danced an airy Gaussian
> minuet around a mean, but I don't think you will ever find any behavior
> more interesting than that - there is no reason why such a behavior would
> exist and no precedent for expecting one. Mathematics is an
> extraordinarily powerful tool which will never be useful to cognitive
> scientists. We'll just have to live with that.

I'm finding it hard to resist using unpleasant language in response to this
very silly statement. How can you make such an emphatic,
authoritative-sounding declaration regarding a topic that you obviously know
very little about!!

Why the hell would you assume that statistical analysis is the only
mathematical analysis one can do of digital or biological brains? You can do
more interesting stuff than that with EEGs, for Christ's sake. Ever study
Walter Freeman's work on the chaotic dynamics of the rabbit brain? This is a
long way from statistics.
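
A toy illustration in Python (mine, not Freeman's data) of why chaotic dynamics is "a long way from statistics": the logistic map at r = 4 produces a series whose mean and variance look unremarkable, yet every point is an exact function of the previous one -- structure that summary statistics of individual values would never reveal.

```python
# The logistic map x[t+1] = r * x[t] * (1 - x[t]) at r = 4 is
# deterministic chaos: noise-like to a mean/variance summary,
# perfectly lawful in its return map.

def logistic_series(x0=0.3, r=4.0, n=10_000):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

xs = logistic_series()

# Summary statistic: nothing obviously special about the mean.
mean = sum(xs) / len(xs)

# But plotting x[t+1] against x[t] gives a perfect parabola --
# the residual from the deterministic rule is zero.
worst = max(abs(xs[t + 1] - 4.0 * xs[t] * (1.0 - xs[t]))
            for t in range(len(xs) - 1))
print(mean, worst)
```

The same series would pass for random noise under a naive statistical analysis of individual numerical parameters.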

We now lack the data to study the dynamics of human brains/minds in detail,
but in 20 years, when we have PET or fMRI scans (which give spatial maps of
the active regions in the brain) with the temporal resolution of EEGs, we'll
have the ability to make 3D movies of brain function, and to apply
intelligent data analysis techniques to these. I'm willing to bet you
anything you want that this will yield a lot more interesting things than
are revealed by simple statistical analysis of individual numerical
parameters.

In digital brains, the situation is much better, because we don't need fancy
quantum resonance scans to measure a system's internal state. We can
generate vast tables of numbers by logging the states of the components of
our AI systems. These tables of numbers, I can tell you from practical
experience, DO reveal interesting patterns going far beyond simple
statistics, from which it is possible to infer interesting approximative
models of mind structure/dynamics. See Jim Crutchfield's work on
"Computational Mechanics" for a review of one approach to automatically
inferring computational models of complex systems from time-series data --
my own approach to this problem is subtler and more practical than his, but
along similar lines.
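
To give the flavor of inferring a model from logged states (a toy sketch of my own, far simpler than Crutchfield's epsilon-machines; the data and names are invented), one can estimate transition probabilities directly from a recorded symbol sequence:

```python
from collections import Counter

def markov_transitions(seq, order=1):
    """Estimate P(next symbol | previous `order` symbols) from a logged
    sequence -- a toy version of inferring a computational model of a
    system's dynamics from time-series data."""
    counts, totals = Counter(), Counter()
    for t in range(len(seq) - order):
        context = tuple(seq[t:t + order])
        counts[(context, seq[t + order])] += 1
        totals[context] += 1
    return {k: v / totals[k[0]] for k, v in counts.items()}

# A hypothetical logged component state: mostly alternating, with repeats.
log = [0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1]
model = markov_transitions(log)
for (ctx, nxt), p in sorted(model.items()):
    print(ctx, "->", nxt, round(p, 2))
```

Raising `order` trades data efficiency for model richness; real approaches also decide which contexts to merge into causal states, which is where the subtlety lies.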

> > The point is: We have sought out tasks that strain our cognitive
> > architecture.
>
> Yes. And, at our current level of intelligence, staying sane - or rather,
> developing a personal philosophy and learning self-knowledge - strains
> these abilities to the limit.

I think this is a confusion. Self-knowledge, in my view, will always stress
a system to the limit, because of its peculiar self-referential nature. True
self-knowledge is never possible... and the more complex you become in order
to solve the problem of modeling yourself and others, the harder you become
to know, due to your increasing complexity...

> We have no ability to observe individual neurons.

Actually, we can track the state of an individual neuron pretty well; the
problem is monitoring more than a couple thousand at once using current
technology -- which will surely be superseded in a few decades...

> Now, you can make up all kinds of reasons why AIs or transhumans might run
> into minor difficulties decoding neural nets, failing to achieve complete
> perfection, and so on. But it looks to me like simple common sense says
> that, if we humans had all these abilities, we would have achieved vastly
> more than we have now. No, we still might not be perfect. But we would,
> at the very least, be vastly better. And, given enough time, or given
> transhuman intelligence, we might well achieve effective perfection in the
> domain of sanity.

To me, your leap from "vastly better" to "effective perfection" is a huge
and unjustified one. We are vastly better than ants in many senses, but
nowhere near perfect in any objective sense...

> I never wanted to ignore "cognitive science" issues. The whole point of
> the post was to take the ball out of the mathematical court and drop-kick
> it back into the cognitive one.

Yes, where we differ, then, is in our intuitions about the relationship
between mathematics and cognitive science. I think that a new kind of math
is needed to transform cognitive science into a real science. Perhaps AIs
will help us create this new kind of math.

> Of course. But, though perhaps I am mistaken, it looks to me like your
> beliefs about transhumanity have effects on what you do in the
> here-and-now. Certainly your projections about transhumanity, and your
> actions in the present, spring from the same set of underlying
> assumptions.

Well, the fact that I believe transhumanity is possible has a huge impact on
my life, since I spend a lot of my time actively trying to create it. The
fact that I believe transhumanity will probably be a good thing also has an
impact... otherwise I wouldn't try to build it, even though I think I know
how.

Beyond that simple level, my particular intuitions about transhumanity don't
affect my work very much, and certainly not my personal life...

ben

This archive was generated by hypermail 2b30: Mon May 28 2001 - 09:50:31 MDT