> > "Eliezer S. Yudkowsky" wrote:
> > >
> > > Actually, I've lately been wondering whether thoughts really need to
> > > be implemented as conceptual structures, or if that's just a human
> > > thing derived from the coevolution of consciousness and communication.
> > > If the latter, it won't be easy to make the AI's stream of
> > > consciousness human-readable.
I'll wager it's the latter. Since when has human-readability bothered you? :)
(Actually, that wasn't a poke at your writing style (although I could do with
a dedicated cortex for that), more a poke at your preference for a codic
cortex over a visual cortex, which I once argued was easier for humans to
debug.)
> Samantha wrote:
> >
> > If thoughts are not implemented as conceptual structures then how are
> > they implemented?
>
> I don't know; that's what scares me. <grin>
>
> Okay, that wasn't the response you were looking for... what got me onto
> that track was thinking about our mental stream-of-consciousness, and
> where the sentences come from in the first place, and what the difference
> is between thinking up the sentence and actually enunciating it
> internally. Plus at least one person on SL4 saying that they had learned
> to think without enunciating a stream of consciousness, which is something
> that I once tried to do but hadn't even begun to make progress on. So I
> started to wonder whether the full complexity was really necessary.
I'll add myself as another data point. I sometimes think (and code) without
an internal conversation too. Perhaps my language centers are completely out
to lunch - I admit I'm not a terribly good verbal communicator.
Now, if I were to throw a ball to you unexpectedly, you'd probably catch it
without any internal dialogue. Language isn't necessary for intelligent
behaviour. It certainly helps, but it's only one way of thinking. Visual
thinking is another.
I remember seeing my little nephew solve a problem before he could talk. He
wanted to play with the water in the kitchen sink but couldn't reach, so he
remembered where he'd seen a footstool and brought it into the kitchen. At
no point did he use language to solve that problem, yet his behaviour was
intelligent.
Chimps do relatively well without language too.
>
> > Also please give your working definition of
> > "thought".
>
> In humans, the running dialogue that forms our stream of consciousness.
> Formed by taking symbols - nouns, adjectives; content creators and content
> modifiers - mutually targeted in such ways as to form coherent structures
> that load coherent visualizations into working memory.
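Taking that definition literally, here's roughly how I picture it - a
throwaway Python sketch of my own, where the names and the "working memory"
list are entirely made up, not anything out of CaTAI:

    # Rough sketch only - my reading of the definition above, nobody's real design.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass(eq=False)
    class Symbol:
        """A content creator (noun-like) or content modifier (adjective-like)."""
        name: str
        kind: str                                  # "creator" or "modifier"
        targets: List["Symbol"] = field(default_factory=list)

    @dataclass
    class Thought:
        """Symbols mutually targeted so they form one coherent structure."""
        symbols: List[Symbol]

        def load_into_working_memory(self, working_memory: List[dict]) -> None:
            # Stand-in for 'loading a coherent visualization': each creator
            # contributes an image, shaped by the modifiers that target it.
            for s in self.symbols:
                if s.kind == "creator":
                    modifiers = [m.name for m in self.symbols
                                 if m.kind == "modifier"
                                 and any(t is s for t in m.targets)]
                    working_memory.append({"image": s.name,
                                           "modifiers": modifiers})

That's how I read your description, anyway.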
Don't assume language primitives are the only objects to play with in
conscious thought.
For instance, think about how you open various items of food packaging.
You're constantly solving lots of tiny puzzles: how to hold the object, how
to grip it, where to pull or tear, and when to get some scissors or use your
teeth - all without language.
These are conscious thoughts too.
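If I had to write one of those little packaging-puzzle thoughts down, it
wouldn't contain a single linguistic symbol. Something more like this,
perhaps (again only a sketch; every name here is made up):

    # Sketch of a non-verbal 'thought': a bundle of candidate manipulations.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Manipulation:
        """One candidate physical action, e.g. pull the tab or fetch scissors."""
        feasible: Callable[[], bool]      # quick perceptual check
        execute: Callable[[], bool]       # returns True if the packet opens

    @dataclass
    class NonVerbalThought:
        candidates: List[Manipulation] = field(default_factory=list)

        def act(self) -> bool:
            # Try each manipulation in turn; no words involved anywhere.
            for m in self.candidates:
                if m.feasible() and m.execute():
                    return True
            return False

No nouns, no adjectives, no sentences - just perceptions and actions wired to
a goal.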
> In AIs, the highest level of description of cognitive content - the one
> that implements the business of designing and analyzing and planning and
> so on, as seen on CaTAI section 2.4, "Thoughts". I initially thought this
> was going to work the same way as in humans; now I'm no longer sure this
> is the simplest possible implementation, but I'm still planning to do it
> the human way unless something concretely simpler comes along.
I still think you're taking too high-level an approach. I believe there *is*
a much simpler implementation, although I wouldn't dare use such a word as
'concrete' until I have the code running.
BTW I urge you to begin coding and resist calls to write a book. Working
code says far more. In my experience, sometimes it's best to just dive in
rather than try to plan it all perfectly. Failure is fine; it's part of
progress. You'll go into the second attempt knowing far more about why the
first one didn't work.
Cheers,
Dale Johnstone.