[Robot-for-President] Re: Behaviourism / Cognitivism (fwd)

From: Party of Citizens (citizens@vcn.bc.ca)
Date: Fri Oct 19 2001 - 20:58:24 MDT

||||| The capacity exists now to construct a machine with superhuman "g",
superhuman knowledge and superhuman learning ability. By the 2012 US
Election the real power in the White House will be a robot |||||

---------- Forwarded message ----------
Date: Fri, 19 Oct 2001 16:24:45 -0700 (PDT)
From: Party of Citizens <citizens@vcn.bc.ca>
Reply-To: Robot-for-President@yahoogroups.com
To: John Penner <johnrpenner@earthlink.net>
Cc: robot-for-president@yahoogroups.com
Subject: [Robot-for-President] Re: Behaviourism / Cognitivism

On Wed, 10 Oct 2001, John Penner wrote:

> wayne, could i summarise your thesis in the following points?
> 1) THIS approach isn't concerned with the 'how', or the
> PROCESS which arrives at routines which define behaviour.


> (even though we have the ability to know HOW - through
> computer science - we won't worry about it - we'll use
> Behaviouristic principles anyways).

You can know all kinds of things about how computers or robots work, but
that is irrelevant to the "black box" approach of behaviouristics. What is
relevant is whether the behaviouristics of the human can be matched or
simulated by the behaviouristics of the machine. Thus I used the example
of 2 and 3 D mine mapping because I used to be a mine mapper at Discovery
Mine, NWT, and because I can extract every kind of test for 2 and 3 D
spatial ability from the mine maps. What goes on inside a human mine
mapper? Maybe we won't know all of it for 1,000 years. But right now we
know that a host of questions/problems in this domain can lead to a host
of answers/solutions when presented to a competent human. When I see those
outline sketches from CAD programs on educational TV, they look just like
the mine maps at Discovery. I then reasonably conclude that the
behaviouristics of a computer or robot with CAD and plenty of Q-A sets
built in has a pretty good chance of simulating what the human mapper
does. I don't have
to know any more about computers than any layman to draw that conclusion.
If there are hardware or software obstacles to that I'd like to hear about
them. Thus you can go across the profile of human mental abilities,
category by category (choose your own if you don't like mine) and draw
your own conclusions. I would hope others interested in AI would do this
and let me know. At each point in the profile, do you expect that SI>HI or
HI>SI? Why?
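The category-by-category comparison proposed above could be tabulated mechanically. The sketch below is only an illustration of that bookkeeping; the category names and the SI/HI verdicts are invented placeholders, not claims from this discussion:

```python
# Hypothetical ability profile: for each category of human mental
# ability, record whether one expects simulated intelligence (SI) or
# human intelligence (HI) to produce the better observable output.
# All entries here are illustrative placeholders.

profile = {
    "2-D spatial (mine mapping)": "SI",
    "3-D spatial (mine mapping)": "SI",
    "arithmetic speed": "SI",
    "open-ended conversation": "HI",
}

def tally(profile):
    """Count the categories in which each side is expected to lead."""
    si = sum(1 for v in profile.values() if v == "SI")
    hi = sum(1 for v in profile.values() if v == "HI")
    return si, hi

si, hi = tally(profile)
print(f"SI expected to lead in {si} categories, HI in {hi}")
```

Filling in one's own categories and verdicts, as the post invites, amounts to editing the dictionary and re-running the tally.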

> 2) IF humans have intelligent thinking 'process', and then codify
> (i.e. 'programme') a machine with the PRODUCTS of their
> intelligence, and the machine can recapitulate that behaviour
> in response to a given set of input conditions, then
> it matters not if machines have PROCESS, since for a
> 'pragmatic' approach, it is PRODUCT that matters; and
> machines are superb at reproducing PRODUCT once the process
> has been worked out for them (by humans - i.e. 'programmers').

That's about it. Process can be whatever it is for mice, men and machines.
Behaviouristics gives you an algorithm for everything manifested in this
world. At the very least everything manifested in this world is recordable
on a nominal scale. Those "behaviouristic algorithms" spell out the S-R
connections for us and THAT is all we need to deal with, not the process.
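The "behaviouristic algorithm" idea sketched above, where every observable stimulus-response connection is recorded on a nominal scale and the black box's internals are ignored, can be pictured as a plain lookup table. This is a toy sketch only; the stimulus and response labels are made up for the example:

```python
# Toy "behaviouristic algorithm": observed stimulus-response (S-R)
# pairs recorded on a nominal scale and replayed as a lookup table.
# Whatever process produced the responses is deliberately ignored --
# only the recorded S-R connections matter.

sr_table = {
    "map_request_level_3": "draw_level_3_plan",
    "ore_grade_query_A7": "report_grade_A7",
    "survey_point_update": "redraw_affected_section",
}

def respond(stimulus):
    """Return the recorded response for a stimulus, if one was observed."""
    return sr_table.get(stimulus, "no_recorded_response")

print(respond("map_request_level_3"))
print(respond("unseen_stimulus"))
```

On this view, "matching" human behaviour just means the machine's table (however it is realised internally) covers the same S-R pairs as the human's.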

> 3) Based on Behaviouristic Principles (i.e. observable, measurable
> PRODUCTS of behaviour), you wish to demonstrate that human
> products of intelligence codified into machine behaviours
> (i.e. 'programming') can exhibit higher speeds of execution
> in the PRODUCTS of Intelligence than humans can implement
> themselves. Therefore, it could be said from the standpoint
> of Behaviourism that within a given range of observable
> behaviour, and certain narrow measurable attributes, SI > HI.

As far as I can tell SI>HI overall, yielding higher "g", general
intelligence, for machines. That's quite broad IMO, not narrow. If you
disagree with me please select your set of HI categories, and then tell me
how you think machine products will compare in each case. Can the
correctly programmed machine give us the Q-A, Problem-Solution sets better
(faster and more accurately) than the human?

> 4) These products of human intelligence that are exhibited in
> the programmed behaviour of machines COULD be mistaken for
> human behaviour (since the effort of the humans creating them
> is actually to mimic the human behaviours),

Yes, depending on how they are presented to the user (eg anthropomorphic
machine or not).

> and thus to the
> unwitting public - stir up their sympathies, and cause people
> to attribute anthropomorphic qualities to the robots. And
> because of this, people will be clamouring for machine rights,
> since they won't be able to distinguish the robot behaviour
> from the human behaviour, because in 'all the observed
> categories of behaviour, it would act and respond in
> the same manner as a human (i.e. 'Robo-President').

Sure. Eventually this is likely. I'm not alone in saying so. "If it looks
like a duck...."

> 5) the measurable attributes in their totality somehow WILL
> constitute a greater essence, SINCE 'Robo-President' WILL
> outperform humans in ALL aspects (enough to govern).

Essence is not very behaviouristic. It is the SI behaviour which I expect
is greater than HI behaviour in totality, ie higher "g". And yes, I think
the knowledge exists now which could lead to an R4P (Robot-for-President)
if ten years and a few billion dollars go with it. But hey, maybe private
and secret projects already
have one.

> 6) Since people will naturally ONLY follow the Behaviouristic
> standpoint,

They will see what they see.

> and since SI > HI in observed measurable behaviours.
> The 'natural sympathy evoked' in people for machines that
> respond with the same 'output' in response to a certain set
> of 'inputs' (i.e. 'just like a human') -> from this, it
> follows that Machines should be granted Human Rights.

I don't say they should. But I expect others will say that. "If it looks
like a duck...."

> 7) WHEN they ever get Quantum Computers to work, then maybe we'll
> *really* have a 'conscious sentient' machine.

IF they get quantum computers to work well, what I've read about quantum
effects causes me to think this will push humankind further in the
direction of PERCEIVING machines as conscious or sentient.

> if these do not correctly encapsulate your views in an accurate
> fashion, please give us the nutshell yourself. :-)

They are close enough to be very helpful for discussion. Thank you for
taking the time to spell them out.



This archive was generated by hypermail 2b30 : Sat May 11 2002 - 17:44:14 MDT