Re: Jaron Lanier Got Up My Shnoz on AI

From: James Rogers (jamesr@best.com)
Date: Wed Jan 16 2002 - 00:12:00 MST


On 1/15/02 8:50 AM, "Steve Nichols" <steve@multisell.com> wrote:
> Date: Mon, 14 Jan 2002 09:13:16 -0800 From: James Rogers <jamesr@best.com>
> Stuff can be SIMULATED in serial (cos the mathematics seems to boil down to
> statistical mathematics) but this doesn't mean quality of massively parallel
> and serial are IDENTICAL, so maybe for real-time speed serial won't work.

Parallel and serial processing are algorithmically equivalent, with parallel
processing being the more restricted of the two. The only qualitative
difference is that massively parallel processing has transaction-theory
issues that make it less capable than serial processing for some types of
work. In the best-case scenario, you get an n-fold speed-up of the serial
algorithm. (For a couple of reasons having to do with how real hardware
works, you can actually get super-linear scalability for a very minuscule
set of problems, but that is neither here nor there.) What is an example of
an algorithm that is qualitatively different on a parallel system versus a
serial one?
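
To make the equivalence concrete, here is a minimal sketch in Python (my
own illustration; the language, names, and worker count are my choices,
not anything from this discussion): the same reduction computed serially
and with a process pool. The parallel run can only ever match the serial
answer; at best it finishes faster.

    # Minimal sketch: a serial sum-of-squares and its parallel
    # equivalent. Both compute the identical result; parallelism
    # buys at most an n-fold wall-clock speedup, never a
    # different answer.
    from multiprocessing import Pool

    def square(x):
        return x * x

    if __name__ == "__main__":
        data = range(100000)

        # Serial version: one processor walks the whole input.
        serial_result = sum(square(x) for x in data)

        # Parallel version: the same map, split across 4 workers.
        with Pool(4) as pool:
            parallel_result = sum(pool.map(square, data))

        assert serial_result == parallel_result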

 
> No, JR wasn't saying that, but BRAINS are self-organising in hardware, not
> just soft-programmable. Computers aren't sentient, E1-brains are! I am not
> saying absolutely that sentience/ abstract thought is only theoretically
> possible on clockless logic (lost clock! Not even designed clockless) ... but
> you can't argue that it has only ever been observed on this type of circuit!!!!
> Mathematics is just analogy, a fiction ... I am more interested in the biology
> and evolutionary history of how the E1-brain actually works ... nets don't
> model very neatly into math anyway cos never reach perfect rest states.

In this case, it is essentially an argument from ignorance, as both of us
are missing fundamental information. There is literally no practical
difference between hardware and software. Obviously, though, I don't see it
the way you do either. Consider the following bits of information:

1) Mathematics describes extremely sensitive tests for "finite state
machine-ness". With high probability, they positively determine whether a
process is describable on finite state machinery, though some extremely
complex FSM processes can escape detection.

2) Any finite state machine is expressible on an ordinary computer, though
devising a practical expression may be very difficult. General solutions do
exist, however. (A toy illustration follows this list.)

3) The human mind never fails to test positive as an extremely complex
piece of finite state machinery, as described in #1. Many such tests have
been done on different aspects of the mind.
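
As the toy illustration of #2: on an ordinary computer, any finite state
machine reduces to a transition table plus a loop. This Python sketch is my
own (the machine, which accepts binary strings containing an even number of
1s, is a textbook example, not anything specific to the mind):

    # A whole finite state machine: a transition table and a loop.
    TRANSITIONS = {
        ("even", "0"): "even",
        ("even", "1"): "odd",
        ("odd", "0"): "odd",
        ("odd", "1"): "even",
    }

    def run_fsm(symbols, start="even", accepting=("even",)):
        state = start
        for s in symbols:
            state = TRANSITIONS[(state, s)]
        return state in accepting

    assert run_fsm("1001")      # two 1s -> accepted
    assert not run_fsm("10")    # one 1  -> rejected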

Assuming that these three facts are true, it is necessarily possible to
express all tested aspects of the mind on an ordinary computer. With
sufficient software capability for adaptive algorithm construction (a hard
software problem, but a solvable one), you could clone every aspect of the
human mind that behaves like an FSM. At what point does the software
running on an ordinary computer fail to be sentient, once it has reached
the point of being indistinguishable from the human? The only real out is
the possibility that some part of the mind is not a finite state machine,
but that is looking pretty unlikely at this point.
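
For what it's worth, the adaptive-construction step can be sketched in
miniature (again my own illustration; it assumes fully observable states
and noiseless traces, which is exactly what makes the real problem hard):
record the observed transitions of some FSM process, rebuild its table, and
run the clone.

    # Miniature sketch of cloning an FSM from observed behavior.
    def learn_transitions(trace):
        table = {}
        for state, symbol, next_state in trace:
            table[(state, symbol)] = next_state
        return table

    # Observed behavior of some unknown two-state machine.
    trace = [
        ("A", 0, "A"), ("A", 1, "B"),
        ("B", 0, "B"), ("B", 1, "A"),
    ]
    clone = learn_transitions(trace)

    # The clone now reproduces the observed process step for step.
    state = "A"
    for symbol in (1, 1, 0):
        state = clone[(state, symbol)]
    assert state == "A"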

Cheers,

-James Rogers
 jamesr@best.com


