RE: Singularity: AI Morality

Billy Brown
Tue, 8 Dec 1998 10:04:09 -0600

On the topic of non-sentient AI:

I'm not suggesting that we never build a sentient AI (eventually someone will), but rather that we take some care regarding the order in which we build its components. Start with a few low-level components, and generalize from there.

Even a very limited code-writing program would be a useful thing to have (look at the wizards in MS Access, for example). A few steps beyond that you get a coding domdule, which would be the most powerful programmer's tool ever written. Not only would this make the rest of the programming easier, but selling it would bring in enough money to finance a real research project.
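To make the first step concrete: a wizard-style code writer need be nothing more than a template filled in from a declarative description, the way an Access form wizard emits boilerplate from a table schema. The sketch below is purely illustrative; every name in it is my own invention, not anything from a real system.

```python
# Minimal sketch of a wizard-style code generator: it fills a fixed
# template from a list of field names, then compiles the result.
# All names here are illustrative assumptions.

TEMPLATE = '''class {name}:
    def __init__(self, {args}):
{assigns}
'''

def generate_class(name, fields):
    """Emit source for a simple record class from a list of field names."""
    args = ", ".join(fields)
    assigns = "\n".join(f"        self.{f} = {f}" for f in fields)
    return TEMPLATE.format(name=name, args=args, assigns=assigns)

source = generate_class("Point", ["x", "y"])
namespace = {}
exec(source, namespace)           # compile the generated code
p = namespace["Point"](3, 4)
print(p.x, p.y)                   # -> 3 4
```

A domdule would differ from this in degree, not kind: richer descriptions in, larger and more varied code out.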

At this point we really don't have anything above Notice-level functions, but we have a narrow enough field of competence that we can actually code everything by hand. The next step is to start coding Understand-level functions, which will require a more robust symbolic architecture and a lot more data. It will probably take several major revisions (and several years of effort) before the program is good enough that we could even contemplate turning it into a seed AI.

This is the point where I intended my original comments to be applied. Instead of trying to start an open-ended self-enhancement process, we continue to rely on a controlled process: we decide on a particular improvement, tell the AI to write it, then add the resulting code to the AI. Most of our design effort would be focused on expanding the AI's fields of competence to as many scientific and engineering disciplines as possible, rather than on trying to take those final steps to full sentience.
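The controlled process above can be sketched as a simple loop: humans choose each improvement, the AI drafts the code, and nothing is merged without explicit human approval. This is a toy sketch under my own assumptions; the class and method names (StubAI, propose_code, merge) are hypothetical stand-ins, not part of any real system.

```python
class StubAI:
    """Toy stand-in so the loop can be exercised; not a real AI."""
    def __init__(self):
        self.patches = []
    def propose_code(self, task):
        return f"code for {task}"
    def merge(self, patch):
        self.patches.append(patch)

def controlled_enhancement(ai, improvement_queue, human_approves):
    """Apply improvements one at a time, each gated by human review."""
    for task in improvement_queue:        # humans decide what to improve
        patch = ai.propose_code(task)     # the AI writes the code
        if human_approves(task, patch):   # humans review before merging
            ai.merge(patch)               # only then does it enter the AI
    return ai

ai = controlled_enhancement(StubAI(),
                            ["faster parser", "better planner"],
                            lambda task, patch: True)
print(ai.patches)   # both approved patches were merged
```

The point of the structure is that the approval gate sits between code generation and incorporation, so the loop never closes on itself without a human in it.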

Now, we can't pursue this course indefinitely, because eventually we'll make a sentient version without meaning to. However, by then we'll be in a pretty good position to evaluate that risk. Meanwhile, we have a period of (probably) a decade or so in which anyone with a copy of the AI enjoys a weak form of the AI advantage.

One final note - if the timing works out, and it looks like it might, we may be able to use neural interface technology to integrate selected parts of the AI with a human mind. That would give us a path to powerful self-enhancement well before uploading becomes possible, and maybe even before we could have gotten a sentient AI working. That is actually the scenario I am hoping for, because it means our fate doesn't end up in the hands of one individual.

Billy Brown