Re: Jaron Lanier Got Up My Shnoz on AI

From: James Rogers (jamesr@best.com)
Date: Mon Jan 14 2002 - 10:13:16 MST


On 1/14/02 7:35 AM, "Steve Nichols" <steve@multisell.com> wrote:
> J.R. Molloy wrote:
>> It looks as though Lanier confuses intelligence with sentience. We already
>> have AI, as reported by John Koza almost two years ago in _Genetic
>> Programming and Evolvable Machines_, Volume 1, Number 1/2 (ISSN: 1389-2576).
>> Self-awareness, or sentience, is an epiphenomenon that emerges from massively
>> parallel computational complexity such as the human brain engenders.

While sentience may be emergent, massive parallelism will have nothing to do
with it. The brain is massively parallel because that was convenient in the
evolutionary scheme of things, not because parallelism is of intrinsic
importance to sentience. Anything doable with massive parallelism is doable on
a serial (i.e. "less massively parallel") processor, given enough time.
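As a rough illustration of that equivalence (a minimal sketch in Python, with
a made-up toy network rather than anything from the cited papers): one
synchronous "parallel" update of a set of threshold units can be reproduced
exactly by a plain sequential loop, as long as the new states are written into
a separate buffer so later units still read the old values. The only thing the
serialization costs is wall-clock time, not the result.

# Toy sketch: a synchronous "parallel" update of N threshold units, carried
# out one unit at a time on a serial processor. Writing into a separate
# buffer makes the outcome identical to updating every unit simultaneously.
# The units, weights, and threshold rule are all invented for this example.

import random

N = 8
random.seed(0)
weights = [[random.uniform(-1.0, 1.0) for _ in range(N)] for _ in range(N)]
state = [random.choice([0, 1]) for _ in range(N)]

def serial_parallel_step(state):
    new_state = [0] * N                     # buffer for the "simultaneous" result
    for i in range(N):                      # strictly sequential loop
        activation = sum(w * s for w, s in zip(weights[i], state))
        new_state[i] = 1 if activation > 0 else 0
    return new_state

state = serial_parallel_step(state)
print(state)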

 
> There is no evidence for emergentism, and the philosophical case for
> epiphenomenalism is weak at best. Complexity does not equate to infinite-state
> (self organising circuitry) since finite-state, hard wired systems can be
> equally complex. Sentience, abstract thought, is only possible once a circuit
> has lost its external clock (primal eye) and become analog(ous to
> infinite-state).

I agree that complexity does not equate to infinite-state, but I don't see
where J.R. was saying that it does. However, saying that abstract thought is
only possible on clockless logic seems to be just plain wrong. Even with
clockless logic, computation doesn't happen by magic. What I really don't
understand is where the mathematics even requires clockless logic, or treats
it as different from clocked logic. Some elucidation would be useful.
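For what it's worth, here is a small sketch (all gates, delays, and wiring
invented for the example) of why removing the clock doesn't put a circuit
outside ordinary computation: an "unclocked" network, where gates fire
whenever an input changes after some propagation delay, can be simulated
deterministically with a serial event queue on a perfectly ordinary machine.

# Toy sketch: event-driven simulation of a clockless circuit on a serial,
# deterministic processor. Gates re-evaluate whenever an input net changes;
# there is no global clock anywhere, yet the behaviour is still ordinary,
# well-defined computation.

import heapq

# net -> (gate function, input nets, propagation delay)
netlist = {
    "c": (lambda a, b: 1 - (a & b), ("a", "b"), 2),   # NAND, delay 2
    "d": (lambda c, a: c | a,        ("c", "a"), 1),   # OR, delay 1
}
values = {"a": 0, "b": 1, "c": 1, "d": 1}             # consistent starting state

def simulate(external_events):
    """external_events: iterable of (time, net, new_value) input changes."""
    queue = list(external_events)
    heapq.heapify(queue)
    while queue:
        t, net, val = heapq.heappop(queue)            # strictly serial event loop
        if values[net] == val:
            continue                                  # no change, nothing propagates
        values[net] = val
        for out, (fn, ins, delay) in netlist.items():
            if net in ins:                            # this gate sees the changed net
                new = fn(*(values[i] for i in ins))
                heapq.heappush(queue, (t + delay, out, new))
    return dict(values)

print(simulate([(0, "a", 1)]))   # drive input "a" high at time 0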

BTW, the link following that paragraph didn't seem to work.

Cheers,

-James Rogers
 jamesr@best.com


