Samantha Atkins wrote:
> "Eliezer S. Yudkowsky" wrote:
> > Actually, I've lately been wondering whether thoughts really need to be
> > implemented as conceptual structures, or if that's just a human thing
> > derived from the coevolution of consciousness and communication. If the
> > latter, it won't be easy to make the AI's stream of consciousness
> > human-readable.
> If thoughts are not implemented as conceptual structures then how are
> they implemented?
I don't know; that's what scares me. <grin>
Okay, that wasn't the response you were looking for... what got me onto
that track was thinking about our mental stream-of-consciousness, and
where the sentences come from in the first place, and what the difference
is between thinking up the sentence and actually enunciating it
internally. Plus at least one person on SL4 saying that they had learned
to think without enunciating a stream of consciousness, which is something
that I once tried to do but hadn't even begun to make progress on. So I
started to wonder whether the full complexity was really necessary.
> Also please give your working definition of
In humans, the running dialogue that forms our stream of consciousness.
Formed by taking symbols - nouns and adjectives; content creators and content
modifiers - mutually targeted in ways that form coherent structures, which in
turn load coherent visualizations into working memory.
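A toy sketch of that scheme, purely for illustration - the class names, the
creator/modifier split, and the "visualize" step are my own assumptions, not
anything from CaTAI:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: symbols are content creators (noun-like) or
# content modifiers (adjective-like); a thought is modifiers targeting a
# creator, and "visualizing" it loads a crude rendering into working memory.

@dataclass
class Symbol:
    name: str
    kind: str  # "creator" or "modifier" (hypothetical labels)

@dataclass
class Thought:
    """A coherent structure: one creator plus the modifiers targeting it."""
    creator: Symbol
    modifiers: list[Symbol] = field(default_factory=list)

    def visualize(self) -> str:
        """Stand-in for loading a visualization into working memory."""
        mods = " ".join(m.name for m in self.modifiers)
        return f"{mods} {self.creator.name}".strip()

# Usage: the modifier "red" targeting the creator "ball".
ball = Symbol("ball", "creator")
red = Symbol("red", "modifier")
thought = Thought(ball, [red])
print(thought.visualize())  # -> "red ball"
```

This says nothing about whether thoughts really work this way - it just makes
the "mutually targeted symbols" phrasing concrete enough to argue about.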
In AIs, the highest level of description of cognitive content - the one
that implements the business of designing and analyzing and planning and
so on, as described in CaTAI section 2.4, "Thoughts". I initially thought this
was going to work the same way as in humans; now I'm no longer sure this
is the simplest possible implementation, but I'm still planning to do it
the human way unless something concretely simpler comes along.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:56:39 MDT