Fwd: Re: Sincere Questions on Identity

From: Bob Arctor (curious@hyperreal.pl)
Date: Sun Dec 16 2001 - 16:47:35 MST

---------- Forwarded Message ----------

Subject: Re: Sincere Questions on Identity
Date: Sun, 16 Dec 2001 12:13:04 +0100
From: root <root@grzybnia>
To: extropians@extropy.org

> Michael - Do you believe that computer programs can be
> conscious? This is not clear from your message.

Computers are only capable of metabolising information, so of course they
can be conscious, but because they are not relative to anything they cannot
handle 'identity' or any feeling of self - they can only be conscious of
personality data. They do not distinguish between the two.

I think a set of autosophers and time-comparing machines would be more
suitable for carrying consciousness, and would be more aware; computers
are a dead-end technology.

The perception of self is based on the perception of unique molecules.
I can't say anything more without further research, but this is rather
a matter of the model:

linear perception (sensors) > classifying autosopher (including

The output of the autosopher is written to $perception.

Then it goes like this:

$perception > autosopher trained to recognize A > comparator
$perception > autosopher trained to recognize B > comparator

comparator > cognitive feedback
comparator > imaginatory feedback
comparator > predicting feedback (imagination in time)
comparator > computing feedback (logic - exclusion)
comparator > subtraction feedback

Each feedback changes $perception on its own defined (parallel) channel.

The comparator is then sure that $perception is A or B. The will appears
when an external signal forces the comparator to detect A (i.e. to be
sensitive to A).
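The pipeline above might be sketched as a toy program. This is only my
interpretation, under loose assumptions: an "autosopher" is modeled as a
simple pattern recognizer, the comparator picks whichever recognizer
responds most strongly, and the "will" is an external bias signal. All
names, scores, and the bias mechanism are hypothetical.

```python
class Autosopher:
    """A recognizer 'trained' on one pattern (here: a set of features)."""
    def __init__(self, label, pattern):
        self.label = label
        self.pattern = set(pattern)

    def recognize(self, perception):
        # Score = fraction of the trained pattern present in $perception.
        if not self.pattern:
            return 0.0
        return len(self.pattern & perception) / len(self.pattern)

def comparator(perception, autosophers, bias=None):
    """Pick the label the machine is 'sure' of; an external signal
    (bias) forces sensitivity to one label -- the 'will'."""
    scores = {a.label: a.recognize(perception) for a in autosophers}
    if bias in scores:
        scores[bias] += 2.0  # external signal forces detection of `bias`
    return max(scores, key=scores.get)

perception = {"red", "round", "edible"}
A = Autosopher("A", {"red", "round"})
B = Autosopher("B", {"green", "square"})
print(comparator(perception, [A, B]))            # no external signal
print(comparator(perception, [A, B], bias="B"))  # 'will' forced toward B
```

The feedback channels would, in this sketch, be functions that rewrite
$perception in parallel before the comparator runs again.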

Such a machine should be fully conscious of SELF if any autosopher
contains a definition of SELF.

If a new fact is perceived (from the subtraction feedback), a new
autosopher must be created and connected to the $perception and
comparator nodes.
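The growth rule might be sketched as follows. This is a self-contained,
hypothetical reading: autosophers are modeled as label-to-pattern-set
entries, and any feature of $perception that no existing entry covers is
treated as a new fact for which a new autosopher is created and wired in.

```python
def subtraction_feedback(perception, autosophers):
    """Features of $perception not covered by any known pattern."""
    known = set().union(set(), *autosophers.values())
    return perception - known

def grow_network(perception, autosophers):
    new_fact = subtraction_feedback(perception, autosophers)
    if new_fact:
        # The new recognizer is connected to $perception and the
        # comparator node (here: simply added to the table).
        autosophers[f"auto_{len(autosophers)}"] = new_fact
    return autosophers

autosophers = {"A": {"red", "round"}, "B": {"green", "square"}}
perception = {"red", "loud"}  # "loud" is an unexplained new fact
grow_network(perception, autosophers)
print(sorted(autosophers))
```

After the call, a third autosopher covering {"loud"} exists alongside
A and B.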

BUT this machine is only conscious; it does not feel anything.



This archive was generated by hypermail 2b30 : Sat May 11 2002 - 17:44:27 MDT