Artificial Companions

Eliezer S. Yudkowsky (sentience@pobox.com)
Fri, 12 Mar 1999 02:31:39 -0600

(Let's at least keep the SUBJECT lines clean enough that a 12-year-old can read this without Mom noticing anything wrong, okay?)

Billy Brown wrote:
>
> Of course, this raises an important moral issue. Any reasonably functional
> sexbot is already going to be at least as smart as your cat. Once you start
> adding speech comprehension, personality software, domestic skills, and
> other cognitive functions you're getting uncomfortably close to having a
> real person - and different people will draw that line in different places.

I disagree. A cat is a cat, even a cat with speech recognition, personality software, and domestic skills. (WHAT an idea! Probably more profitable than the ACs, if someone pulled it off.) In fact, the AC's level of intelligence would be more analogous to that of a spider.

The really ironic thing in most discussions of Artificial Intelligence is that the pundits treat emotions as a great mystery, and talk about AIs with human intelligence who still don't understand emotions. What rot! Dogs have emotions. Rats have emotions. Lizards have emotions. Emotions are easy. We'll have emotional AIs long before we can duplicate the simplest rational thought. We could make them right now, if anyone wanted them.
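
To cash that out: a minimal sketch, in Python, of what "emotions are easy" means at the mechanical level. Everything here (the drive names, the decay rate, the threshold, the behavior table) is invented for illustration; the point is only that no component requires anything resembling thought.

    # Toy emotional module: a few drive levels that decay each tick,
    # get bumped by stimuli, and fire a behavior when one crosses a
    # threshold.  All names and numbers are illustrative.

    DECAY = 0.9
    BEHAVIORS = {"fear": "flee", "affection": "nuzzle", "hunger": "seek_food"}

    class EmotionModule:
        def __init__(self):
            self.drives = {d: 0.0 for d in BEHAVIORS}

        def stimulus(self, drive, strength):
            # Bump a drive, clamped to 1.0.
            self.drives[drive] = min(1.0, self.drives[drive] + strength)

        def tick(self):
            # Decay everything, then act on the strongest drive if it
            # is past threshold; otherwise do nothing.
            for d in self.drives:
                self.drives[d] *= DECAY
            strongest = max(self.drives, key=self.drives.get)
            if self.drives[strongest] > 0.5:
                return BEHAVIORS[strongest]
            return None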

Personality interface? ELIZA, integrated with HEARSAY II, a Cyc DB, and a gigabyte archive of all the conversation from the best-written erotic stories on the Internet. The AC won't be able to discuss Epictetus, but it will be able to respond to the vast majority of stunningly unoriginal comments with considerably more touching returns. All it really needs is an NL parser and a Cyc DB that can figure out the analogy between something it hears and something it has stored. The ELIZA rules operate on keywords in the rough model instead of on the raw speech, and the return volley is drawn from a vast database instead of being made up on the spot. When you consider how well the original ELIZA worked, and how it was perceived as human when it was just a terminal instead of pseudoflesh, there's plenty of tolerance for simply ignoring any ambiguous remarks.
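
For the skeptical, a toy version of that retrieval loop. The three-entry corpus is an invented stand-in for the gigabyte archive, and bag-of-words keyword overlap stands in for the Cyc-backed analogy matching; nothing here is the real architecture, just the shape of it.

    # ELIZA-style keyword rules select, from a stored corpus, the
    # canned reply whose keywords best overlap the input.

    import re

    CORPUS = [
        ({"lonely", "alone"}, "You never have to be alone again."),
        ({"beautiful", "pretty"}, "Not half as beautiful as you."),
        ({"love", "adore"}, "I was made to love you."),
    ]

    def tokenize(text):
        return set(re.findall(r"[a-z']+", text.lower()))

    def respond(utterance):
        words = tokenize(utterance)
        best_reply, best_score = None, 0
        for keywords, reply in CORPUS:
            score = len(keywords & words)
            if score > best_score:
                best_reply, best_score = reply, score
        # Ambiguous or unmatched remarks are simply ignored (returns
        # None), which is exactly the tolerance ELIZA exploited.
        return best_reply

    # respond("I get so lonely at night") -> "You never have to be alone again."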

How much actual intelligence does this require? Answer: None. There is no component here that is capable of original thought. At most, the sophistication would equal that of the Structure Mapping Engine or Cyc; it would fall short of Copycat or EURISKO.

The reflexes are undoubtedly quite complex, but while the currently ambient "mode" of the reflexes might be linked to both the senses and speech-command acceptance, the complexity of the reflexes themselves would not be integrated with the personality front end. The reflexes might involve a complex, active model of the user, but would it really be necessary to integrate this with the conversational model? Maybe a set of simple heuristics would cross-link the two; it still wouldn't be intelligence.
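
The heuristics I have in mind would be about this dumb; the "tension" and "boredom" fields and the mode names are invented for illustration:

    # Two dumb rules reading the reflex side's user model and nudging
    # the conversation module's active mode.  No intelligence required.

    class Conversation:
        def __init__(self):
            self.mode = "neutral"

        def set_mode(self, mode):
            self.mode = mode

    def crosslink(user_model, conversation):
        # user_model is a plain dict maintained by the reflex side.
        if user_model.get("tension", 0.0) > 0.7:
            conversation.set_mode("soothing")
        elif user_model.get("boredom", 0.0) > 0.5:
            conversation.set_mode("playful")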

In short, the AC really would be operated by emotions, meaning that it would have a set of "modes" that could trigger complex behaviors and be triggered by complex rules, but all the communication between modules would run through the modes. A really sophisticated AC might even have a modular emotional system with several tones (*) that can operate independently and simultaneously impose different behaviors that get automatically integrated. But that is really over-engineering things up to the "lizard" level; the "spider" level of discrete, non-reducible modes should be quite sufficient.
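
A sketch of the multi-tone version, with each active tone casting weighted votes for behaviors and the votes summed; that summation is all "automatically integrated" needs to mean at this level. Tone names and weights are, again, made up:

    # Shared blackboard of tone levels; modules communicate only by
    # reading and writing tones, never by talking to each other.

    from collections import defaultdict

    # tone -> {behavior: weight}; hypothetical numbers
    TONE_BEHAVIORS = {
        "tender":  {"caress": 0.8, "murmur": 0.5},
        "playful": {"tease": 0.9, "murmur": 0.3},
        "wary":    {"withdraw": 1.0},
    }

    class ModeBoard:
        def __init__(self):
            self.tones = defaultdict(float)

        def set_tone(self, tone, level):
            self.tones[tone] = max(0.0, min(1.0, level))

        def behavior(self):
            # Sum each active tone's weighted votes, pick the winner.
            votes = defaultdict(float)
            for tone, level in self.tones.items():
                for act, weight in TONE_BEHAVIORS.get(tone, {}).items():
                    votes[act] += level * weight
            return max(votes, key=votes.get) if votes else "idle"

    # board = ModeBoard(); board.set_tone("tender", 0.9)
    # board.set_tone("playful", 0.6); board.behavior() -> "caress"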

(*) = See the definition under:
http://tezcat.com/~eliezer/alger_non.html#counter

-- 
        sentience@pobox.com          Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/singul_arity.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.