Re: Jaron Lanier Got Up My Shnoz on AI

From: Steve Nichols
Date: Sat Jan 19 2002 - 16:53:51 MST

Date: Fri, 18 Jan 2002 08:58:44 -0800
From: James Rogers <>
Subject: Re: Jaron Lanier Got Up My Shnoz on AI

On 1/17/02 8:05 AM, "Steve Nichols" <> wrote:
> But the brain processes are nothing like von Neumann/ finite state computers!
> For a start the synapses can rewire (in hardware, infinite-state!) in response
> to situations & problems. Do you deny this? And what structures in the brain
> resemble anything like CPU or RAM chips? Not to be found, because brains and
> brain processes are nothing like ordinary computers (doh).

The human brain doesn't have to operate like modern silicon, only be
mappable to it. There are some basic mathematical criteria for what can and
can't be mapped to silicon, and the brain definitely looks like it can be
mapped. It is utterly irrelevant if it uses very different hardware because
the results are all that matter.
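James's mappability claim can be sketched in a few lines of Python (a toy example, all names hypothetical): two wildly different "architectures" that realise the same input-to-output mapping are, by his criterion, the same computation.

```python
# Two very different realisations of the same function: parity of a bit list.
# If only the input->output mapping matters, these are equivalent computations.

def parity_arithmetic(bits):
    """'Serial' realisation: a running sum, one bit at a time."""
    total = 0
    for b in bits:
        total += b
    return total % 2

def parity_table(bits):
    """'Hardware-ish' realisation: a wired-in XOR lookup table."""
    xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
    state = 0
    for b in bits:
        state = xor_table[(state, b)]
    return state

# Identical outputs on every input => equivalent by the input/output criterion.
inputs = [[0], [1], [1, 1, 0], [1, 0, 1, 1], [0, 0, 0, 0]]
assert all(parity_arithmetic(x) == parity_table(x) for x in inputs)
```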

*new* That is like saying that a drawing of the pyramids is the same
thing as the actual pyramids! Remember, this discussion is about sentience,
not computation. It seems fairly obvious that glial cells &c in brains
play some role in felt experience, but these have no discernible
role in computation & information processing.

>> Input and output results are all you need to duplicate ANY finite state
>> machinery. Universal FSM copying algorithms create structures that look very
>> similar to neural networks, not like the more conventional algorithms that
>> most people who work with software are familiar with. With "weight states"
>> and everything.
> What about hidden units????????

Hidden units are again irrelevant. If they don't modify the input at the
output, then they are computationally null. If they ARE modifying the
output, then they will be modelled automatically just like everything else.
What is the value of a hidden unit if it has no measurable impact on the
output?

*new* The hidden units are a good example of how serial emulation of
massively parallel processing FAILS to replicate all the processes.
Algorithmically you can just look at outputs, inputs and architecture,
but you cannot actually duplicate the processing. It is wrong to say
hidden units have no measurable impact, because if they are absent the
neural net will not function the same, or as well.
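Both halves of this dispute can be made concrete with a minimal fixed-weight XOR net (the weights here are hand-picked and hypothetical): the hidden units demonstrably change the output, and yet the net's behaviour is also fully captured by its input-to-output table.

```python
# Minimal two-hidden-unit XOR network with hand-picked weights.
# Zeroing out the hidden layer measurably changes the output, so hidden
# units are not "computationally null" -- their effect shows up at the output.

def step(v):
    return 1 if v > 0 else 0

def xor_net(x1, x2, use_hidden=True):
    if use_hidden:
        h1 = step(x1 + x2 - 0.5)   # fires if at least one input is on
        h2 = step(x1 + x2 - 1.5)   # fires only if both inputs are on
    else:
        h1 = h2 = 0                # "remove" the hidden units
    return step(h1 - h2 - 0.5)

table = {(a, b): xor_net(a, b) for a in (0, 1) for b in (0, 1)}
# With hidden units the net computes XOR; without them it outputs 0 everywhere.
assert table == {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
assert xor_net(1, 0, use_hidden=False) == 0   # a measurable difference
```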

> All serial computers can do is model or simulate mpd function ... the
> lock-step mechanism cannot be replicated exactly in serial because in parallel
> all nodes are updated AT ONCE, this is why it is called parallel, whereas
> lock-step simulation in serial just does one process at a time, linearly. And,
> again, what about hidden units?????

There are an infinite number of algorithms that can implement any process.
Inputs mapping identically to outputs is the definition of an equivalent
algorithmic form. Hardware and software architecture are irrelevant.

*New* This is your big problem. I might agree with you if we are
talking about "computation" but we are discussing potential for
sentience, so you are completely off the mark! The whole point I am
making is that a parallel/ wetware system can compute more flexibly,
and switch between what information is handled as neural information
(software if you must) ... virtual ... and what problems are tackled by
more fundamental resetting of logic, and more permanent changes to the
system, by reorganising hardware/ physical connections.

Serial computers simply cannot do this: they just run algorithms,
keeping their hardware architecture stable and distinct. Sure, both
systems can solve the same problems, but finite-state, digital
systems don't have any 'awareness' or self-referentialness. Self-
referential systems can modify THEMSELVES .... *end New*
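Steve's "systems that modify THEMSELVES" point can at least be gestured at in software (a toy sketch, all names hypothetical, and it says nothing about sentience either way): here the update rule is itself data that the running system rewrites in response to feedback, rather than a fixed algorithm running over changing inputs.

```python
# Crude sketch of a self-modifying rule system: the rule table is data
# that the system rewrites in place when the environment contradicts it.

def run(rules, inputs, feedback):
    """Apply the rule table; rewrite any rule that disagrees with feedback."""
    for x in inputs:
        y = rules.get(x, 0)          # default behaviour for unknown inputs
        if y != feedback(x):
            rules[x] = feedback(x)   # the "wiring" (rule table) is changed
    return rules

def wanted(x):
    return x % 3                     # the behaviour the environment rewards

rules = run({}, range(6), wanted)    # starts knowing nothing
# Only the cases where the default (0) was wrong got rewired.
assert rules == {1: 1, 2: 2, 4: 1, 5: 2}
```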

Incidentally, a parallel process modelled serially checkpoints at exactly
the same state as the truly parallel model. The only difference between
parallel and serial in this case is that the state transition steps are
computed differently. Therefore, the results are computationally identical.
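James's checkpoint claim is the standard double-buffering trick, sketched here with a hypothetical toy rule: every node's new state is read from the OLD state vector, so a serial loop lands on exactly the states a truly synchronous parallel update would.

```python
# Serial simulation of a parallel lock-step update via double buffering:
# all reads come from the old buffer, so nodes are effectively updated
# "AT ONCE" even though the loop computes them one at a time.

def lockstep(state, rule):
    """One synchronous step over every node, built from the old state only."""
    return [rule(state, i) for i in range(len(state))]

# Toy rule: each node becomes the XOR of its two ring neighbours.
def xor_neighbours(state, i):
    return state[i - 1] ^ state[(i + 1) % len(state)]

s = [1, 0, 0, 0]
s = lockstep(s, xor_neighbours)
assert s == [0, 1, 0, 1]   # identical to a truly parallel synchronous update
```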

*New* So what ... computational equivalence isn't the issue.
You seem to be saying that a robot-actor in a play, whose
behaviour is entirely conditioned as responses to programming,
is in fact identical to the human actor in the same role, because they
both deliver the same lines! Is this really what you think? Are you
a behaviourist? *end new*

It may go against whatever intuition you have, but if
you actually study the problem you'll find that parallel and serial
computation are computationally equivalent and interchangeable.

*New* So you are only arguing the limited case for COMPUTATIONAL
equivalence (which I dispute as well, neural nets can interpolate,
learn and extrapolate from new information not met previously).
You make no case for "experiential" equivalence, which is what
we are talking about ... so my point is proved. *end New*

> Not only just very different, but nothing like! You can't go the other way and
> extract the "algorithm" being used by brains because the brain changes its
> wiring, not just weight states. Also, how can u reproduce graceful degradation
> in serial? If the bit stream is disrupted u r f**c**d!

Again, the computational process is irrelevant. Equality of outcome
demonstrates equivalence.

*New* Obviously you don't understand graceful degradation then, because
it has nothing to do with computation, you fool! It has to do with the
resilience of hardware functioning: parallel systems can re-route to avoid
damaged pathways, yet still retain optimum computation despite the damage.
If a serial CPU is damaged it stops working or goes bonkers ...... *end new*
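The re-routing point can be sketched with a hypothetical redundant code (toy numbers throughout): a value spread across many noisy units survives losing half of them, where a single serial word would not survive losing a bit.

```python
import random

# Graceful degradation sketch: a value is encoded redundantly across many
# units and read out by averaging. Knocking out units degrades the answer
# gradually instead of destroying it.

def encode(value, n_units=100):
    random.seed(0)                 # deterministic toy noise
    return [value + random.gauss(0, 0.1) for _ in range(n_units)]

def decode(units):
    return sum(units) / len(units)

units = encode(5.0)
assert abs(decode(units) - 5.0) < 0.1        # intact: accurate readout
damaged = units[:50]                          # destroy half the units
assert abs(decode(damaged) - 5.0) < 0.2      # degraded, but still close
```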

> All you can get from improved serial is more speed ... but speed of
> calculation has absolutely zilch to do with sentience/ felt experience. My old
> z80 is slower by a magnitude than a P4, but is not a jot more or less
> sentient!

While I'm not necessarily disagreeing with you, can you prove that your P4
isn't more sentient than your Z80? What are your criteria?

*New* The onus is on you to demonstrate that either has the slightest glimmer
of consciousness. I argue that neither have any, and won't in the future,
however fast you clock them. *end new*

Silicon is just the computational machinery; the software is way more
important. The human brain is what would be considered software implemented
in hardware. It doesn't have to be that way but nature apparently has
difficulty evolving proper von Neumann machines.

*New* von Neumann machines are not evolvable systems. Software
requires outside helpers, programmers &c, whereas MPS (massively
parallel systems) can LEARN.

Hope this makes it clearer for you James.
Best *end new*

>All you can get from improved serial is more speed

  Besides that Mrs. Lincoln how did you like the play?

>My old z80 is slower by a magnitude than a P4, but is not a jot more
>or less sentient!

  You seem very sure, how did you find out?

      John K Clark

*new* Given long enough the z80 could probably do anything the faster
P4 does ... both are baby Turing engines, aren't they? And if you are
claiming that toasters, calculators, ordinary PCs and so on have
self-referential mentation in the very same way that you do, then you
must be a very limited sort of being! I don't consider that they do, but
I accept that my cat IS sentient .... I have no parts of the brain she does
not have (just a larger prefrontal cortex) ... the cat has REM, so probably
has dreams ... this counts for me as sentience. The P4 is just a lump
of sand & metal with no more feeling than sand on the beach. *end new*


This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 13:37:35 MST