Re: Jaron Lanier Got Up My Shnoz on AI

From: Steve Nichols (steve@multisell.com)
Date: Wed Jan 16 2002 - 12:13:01 MST


Date: Tue, 15 Jan 2002 23:12:00 -0800
From: James Rogers <jamesr@best.com>
Subject: Re: Jaron Lanier Got Up My Shnoz on AI

>Parallel and serial processing are algorithmically equivalent, with
>parallel processing being the more restricted of the two. The only
>qualitative difference is that massively parallel processing has
>transaction theory issues that make it less capable than serial
>processing for some types of work. In the best case scenario, you get
>an n-fold speed-up of the serial algorithm. (For a couple of reasons
>having to do with how real hardware works, you can actually have
>super-linear scalability for a very minuscule set of problems, but
>that is neither here nor there.) What is an example of an algorithm
>that is qualitatively different on a parallel system versus a serial
>one?
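
(For concreteness, the equivalence claimed above can be sketched in a
few lines of Python; the chunked-sum decomposition and the names here
are illustrative assumptions, not anything from the original post. The
parallel version partitions the input, reduces each chunk with the same
serial routine, and merges -- in the best case the n-fold speed-up,
with identical results.)

    # Sketch: a serial reduction and its data-parallel decomposition
    # compute identical results; parallelism buys speed, not new semantics.
    from concurrent.futures import ThreadPoolExecutor

    def serial_sum(xs):
        total = 0
        for x in xs:
            total += x
        return total

    def parallel_sum(xs, n_workers=4):
        # Partition the input into roughly equal chunks, one per worker.
        size = max(1, len(xs) // n_workers)
        chunks = [xs[i:i + size] for i in range(0, len(xs), size)]
        with ThreadPoolExecutor(max_workers=n_workers) as pool:
            partials = pool.map(serial_sum, chunks)  # each chunk reduced independently
        return sum(partials)                         # merge step

    data = list(range(100000))
    assert serial_sum(data) == parallel_sum(data)    # same algorithm, same answer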

The point is that there ARE NO ALGORITHMS in the brain. This
"symbolisation" doesn't exist in nature, and in massively parallel
DISTRIBUTED systems (not transputers or just multiple von Neumann
processors) no algorithms or programs are fed into the system.

>In this case, it is essentially an argument from ignorance, as both of us
>are missing fundamental information.

Speak for yourself!

>There is literally no practical difference between hardware and
>software. Obviously, though, I don't see it like you do either.
>Consider the following bits of information:

Not only aren't there any 'programs' in MPD systems (just weight
states!), but it is also the case that internal representations (of
back-propagation machines, especially combined with vector quantisation
& reinforcement learning or IAC) cannot be meaningfully analysed. We
just have input and output results (plus math models of the
ARCHITECTURE).
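
(To make the "just weight states" point concrete: a minimal
back-propagation sketch, assuming numpy and a toy XOR task standing in
for any real architecture. No program is fed in -- the trained
behaviour lives entirely in the weight matrices, and the hidden
representation can only be probed through input/output pairs.)

    # Sketch: a trained net is weight states, not a stored program.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy task: XOR, learned purely from input/output examples.
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])

    W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)   # input -> hidden
    W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)   # hidden -> output

    for _ in range(20000):                   # plain gradient-descent backprop
        h = sigmoid(X @ W1 + b1)             # hidden activations
        out = sigmoid(h @ W2 + b2)
        g_out = (out - y) * out * (1 - out)  # output-layer error signal
        g_h = (g_out @ W2.T) * h * (1 - h)   # error backpropagated to hidden layer
        W2 -= h.T @ g_out
        b2 -= g_out.sum(axis=0)
        W1 -= X.T @ g_h
        b1 -= g_h.sum(axis=0)

    print(out.round().ravel())  # behaviour after training; should match y
    print(W1)                   # the "program" is just these numbers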

Points 1) to 3) are irrelevant!

>Assuming that these three facts are true, then it is necessarily
>possible to express all tested aspects of the mind on an ordinary
>computer. With sufficient software capability for adaptive algorithm
>construction (a hard software problem, but a solvable one), you could
>clone every aspect of the human mind that behaves like an FSM. At what
>point does the software running on an ordinary computer fail to be
>sentient, when it has reached the point of being indistinguishable
>from the human? The only real out provided is the possibility that
>some part of the mind is not a piece of FSM, but that is looking
>pretty unlikely at this point.
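
(For reference, the FSM claim being made reduces to a transition
table. A minimal sketch in Python -- the states and inputs are invented
placeholders: any behaviour that genuinely is finite-state is fully
specified by such a table, and so runs on an ordinary computer.)

    # Sketch: a finite-state machine is nothing but a transition table.
    transitions = {
        ("idle", "stimulus"):      "attending",
        ("attending", "stimulus"): "responding",
        ("attending", "silence"):  "idle",
        ("responding", "silence"): "idle",
    }

    def run_fsm(start, inputs):
        state = start
        for symbol in inputs:
            # Undefined (state, input) pairs leave the state unchanged.
            state = transitions.get((state, symbol), state)
        return state

    print(run_fsm("idle", ["stimulus", "stimulus", "silence"]))  # -> idle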

Sorry James, but 'ordinary' von Neumann machines will never be sentient.

Forget finite-state hardware ... the distinction between virtual (phantom
in MVT parlance) and "physical" (either E1-brain or silicon) is crucial
in this debate since we are discussing 'felt' experience, not computation.

First you have to have solved the mind-body problem in more general
terms so as to overcome Leibniz's objections to Dualism. MVT solves
these theoretical questions ... no other approach comes close, nor
ever will! Sorry if you "don't see it" like me ... I look at things from
my post-human perspective ... let's see which of us comes up with the
goods first! My main concern is with mental health & diagnosis; I think
there is plenty of under-utilised humanoid intelligence, and we should
use MVT mainly to boost our own levels of intelligence and sentience.

The Posthuman.Org welcomes genuine neural net AI research associates
to assist with our program for MVT-based machines, but please contact
me off-list; there really isn't much point of contact between MVT and
old-era theoretical world views, and I don't want to clutter this list
with arguments. Go ahead with your finite-state efforts for sure,
though ... prove me wrong (ha).

www.steve-nichols.com Posthuman since the 1980s
******************
Shogi -- the only board game recognised as a martial art!
www.shogi.co.uk
******************


