Re: Contextualizing seed-AI proposals

From: Damien Broderick (d.broderick@english.unimelb.edu.au)
Date: Thu Apr 12 2001 - 19:31:36 MDT


At 07:06 PM 4/12/01 -0400, Jim Fehlinger wrote another fine summary, which
helps crystallise for me a central problem I've felt with Eliezer's posted
outlines (and much of the discourse on this list).

>Thus, in the view of neurobiologist Walter Freeman...
>"the brain [brings] the full weight of a lifetime's experience to
>bear on each moment as it [is] being lived. ...
>Consciousness is about bringing your entire history to bear on
>your next step, your next breath, your next moment." (p. 268).
>A linguistic exchange between two such "memory landscapes" relies
>for its power on the fact that the receiver can **recreate** a
>conscious state similar (enough) to that of the sender, rather
>than on the unlikely interpretation that the message somehow
>encodes all the complexity of the conscious state of the sender:

Just so, although the impact of the entire lifetime probably shrinks to an
endlessly revised penumbra. In my doctoral dissertation, which I wrote some
12 years ago, I took the view that Hopfield attractor models of
neural/mental function were the most plausible then available; the
cognitive theorists closest to this approach at the time were Eleanor
Rosch, Lakoff and Johnson, Jackendoff, Bruner, and later Calvin and Edelman
and Damasio and Walter Freeman. I feel a slight frisson of horror to recall
that this model eerily resembles the melting-wax-mould-like `memory
surface' suggested by that global posturer Edward de Bono in THE MECHANISM
OF MIND in 1969.

How to emulate this in a computer program using off-the-shelf components?
Well, back then people talked a bit about analogies with spin glasses and
annealing. Maybe they still do.
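For the flavour of it, here is a minimal toy sketch (mine, not anything from the dissertation) of the Hopfield attractor idea in NumPy: a few patterns are stored with the Hebbian outer-product rule, and a corrupted input "rolls downhill" into the nearest stored memory, much like the melting-wax memory-surface picture. The sizes and seed are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store a few random bipolar (+1/-1) patterns via the Hebbian outer-product rule.
N = 64                                  # number of binary "neurons"
patterns = rng.choice([-1, 1], size=(3, N))
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)                  # no self-connections

def recall(state, sweeps=10):
    """Asynchronous threshold updates: the state settles into an attractor."""
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt a stored pattern, then let the network relax back toward it.
noisy = patterns[0].copy()
flipped = rng.choice(N, size=10, replace=False)
noisy[flipped] *= -1

recovered = recall(noisy)
print("overlap with original:", recovered @ patterns[0], "of", N)
```

With only three patterns in a 64-unit net (well under the ~0.14N capacity of such models), the corrupted cue will almost always relax back to the stored pattern. Simulated annealing enters the same family of models when the hard threshold is replaced by a temperature-dependent stochastic update, which is where the spin-glass analogy comes from.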

Damien Broderick



This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:59:45 MDT