Billy Brown wrote:
>... Humans have more than one system for "what do I do next?" -
>you have various instinctive drives, a complex mass of conscious and
>unconscious desires, and a conscious moral system. When you are trying to
>decide what to do about something, you will usually get responses from
>several of these goal systems. ...
>In an AI, there is only one goal system. ... There is no 'struggle to
>do the right thing', because there are no conflicting motivations.
How can you possibly know this about AIs? I know of a great many programs written by AI researchers that use conflicting goal systems, where conflicts are resolved by some executive module. Maybe those approaches will win out in the end.
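The sort of architecture described above can be sketched roughly as follows. This is a hypothetical illustration, not any particular research system: several goal modules each propose an action with a strength, and an executive module resolves the conflict (here, simply by picking the strongest proposal; real systems use far more elaborate arbitration).

```python
class GoalModule:
    """One drive or motivation; proposes an action with a strength."""

    def __init__(self, name, strength, action):
        self.name = name
        self.strength = strength
        self.action = action

    def propose(self):
        return self.name, self.action, self.strength


def executive(modules):
    """Resolve conflicting proposals by choosing the strongest one."""
    proposals = [m.propose() for m in modules]
    name, action, _ = max(proposals, key=lambda p: p[2])
    return name, action


# Illustrative drives with made-up strengths:
modules = [
    GoalModule("self-preservation", 0.9, "retreat"),
    GoalModule("curiosity", 0.4, "explore"),
    GoalModule("moral system", 0.7, "ask first"),
]
print(executive(modules))  # -> ('self-preservation', 'retreat')
```

Even in this toy form, the "struggle" between motivations is explicit in the program: the conflict exists among the modules, and only the executive's policy decides which one wins.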
email@example.com  http://hanson.berkeley.edu/
RWJF Health Policy Scholar             FAX: 510-643-8614
140 Warren Hall, UC Berkeley, CA 94720-7360      510-643-1884