Eliezer Yudkowsky wrote:
> We should program the AI to seek out *correct* answers, not a particular
> type of answers.
> > but let's not go into that now.
> Let's. Please. Now.
If we are going to write a seed AI, then I agree with you that it is absolutely critical that its goal system function on a purely rational basis. There is no way a group of humans is going to impose durable artificial constraints on a self-modifying system of that complexity. However, this only raises the question: why build a seed AI at all?
More specifically, why attempt to create a sentient, self-enhancing entity? Not only is this an *extremely* dangerous undertaking, but it also requires that we solve the Hard Problem of Sentience using merely human mental faculties.
Creating a non-sentient AI with similar capabilities would be both less
complex and less hazardous. We could use the same approach you outlined in
'Coding a Transhuman AI', with the following changes:
The result of this project would be a very powerful tool rather than a sentient being. It could be used to solve a wide variety of problems, including writing better AIs, so it would offer most of the same benefits as a sentient AI. It would have a flatter enhancement trajectory, but it could be implemented much sooner. As a result, we might be able to get human enhancement off the ground fast enough to avoid an 'AI takes over the world' scenario.