RE: Singularity: AI Morality

Billy Brown (bbrown@conemsco.com)
Mon, 7 Dec 1998 10:26:53 -0600

Eliezer Yudkowsky wrote:
> We should program the AI to seek out *correct* answers, not a particular
> set of answers.

> > but let's not go into that
> > now.

> Let's. Please. Now.

If we are going to write a seed AI, then I agree with you that it is absolutely critical that its goal system function on a purely rational basis. There is no way a group of humans is going to impose durable artificial constraints on a self-modifying system of that complexity. However, this raises a more basic question: why build a seed AI at all?

More specifically, why attempt to create a sentient, self-enhancing entity? Not only is this an *extremely* dangerous undertaking, but it requires that we solve the Hard Problem of Sentience using merely human mental faculties.

Creating a non-sentient AI with similar capabilities would be both less complex and less hazardous. We could use the same approach you outlined in 'Coding a Transhuman AI', with the following changes (a rough sketch of the resulting control loop follows the list):

  1. Don't implement a complete goal system. Instead, the AI is instantiated with a single arbitrary top-level goal, and it stops running when that goal is completed.
  2. Don't try to implement full self-awareness. The various domdules need to be able to interface with each other, but we don't need to create one for 'thinking about thought'.
  3. Don't make it self-enhancing. We want an AI that can write and modify other programs, but can't re-code itself while it is running.
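
To make the first and third points concrete, here is a rough sketch of what such a control loop might look like. None of this comes from 'Coding a Transhuman AI'; the Goal and DomainModule classes, run_tool_ai, and the toy usage at the bottom are hypothetical names of my own, meant only to illustrate the halt-on-completion behavior and the absence of self-modification:

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Goal:
    """A single, externally supplied top-level goal."""
    description: str
    is_complete: Callable[[], bool]   # completion predicate supplied by the operator


@dataclass
class DomainModule:
    """A domain-specific solver; modules interface with each other,
    but there is no module for 'thinking about thought'."""
    name: str
    step: Callable[[], None]          # perform one unit of work toward the goal


def run_tool_ai(goal: Goal, modules: List[DomainModule], max_steps: int = 10_000) -> bool:
    """Run the non-sentient 'tool' AI until its one goal is completed or the
    step budget runs out.  It may write other programs as output, but it never
    rewrites its own code while running."""
    for _ in range(max_steps):
        if goal.is_complete():
            return True               # goal done: the AI simply stops running
        for module in modules:
            module.step()             # modules cooperate on the single goal
    return False                      # budget exhausted; goal not completed


if __name__ == "__main__":
    # Toy usage: the 'goal' is just to accumulate five squares.
    results: List[int] = []
    goal = Goal("collect five squares", lambda: len(results) >= 5)
    worker = DomainModule("squares", lambda: results.append(len(results) ** 2))
    print(run_tool_ai(goal, [worker]), results)

The point is simply that the top-level loop has nowhere to go once its single goal reports completion, and nothing in it ever touches its own source.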

The result of this project would be a very powerful tool, rather than a sentient being. It could be used to solve a wide variety of problems, including writing better AIs, so it would offer most of the same benefits as a sentient AI. It would have a flatter enhancement trajectory, but it could be implemented much sooner. As a result, we might be able to get human enhancement off the ground fast enough to avoid an 'AI takes over the world' scenario.

Billy Brown
bbrown@conemsco.com