Eugene Leitl writes,
> How can you own something which is significantly smarter than you?
To help explain this, let me cite a few statistics:
25% of all married men kiss their wife goodbye when they leave the
house.
Of these same men, 90% will kiss their house goodbye when their wife
leaves.
Smart but poor people go to work for rich dumb people all the time. Henry Ford
(or was it Edison?) reportedly said that the key to his success was hiring
people who were smarter than him.
> I fail to see the logic. You're describing a fantasy creature,
> incredibly powerful yet docile. Even djinns are not that.
Yes, AIs are fantasy creatures, because they don't exist (yet). My fantasy
creature differs from yours in that mine has intelligence which makes it docile
and cooperative, whereas yours wants to use my atoms to increase its memory. So
I'm imagining a Buddha in the robot, and you're imagining an ogre in the robot.
Are we both projecting inner feelings onto our fantasy creatures?
> Unfortunately, the method which makes the AI powerful automatically
> makes it less than friendly. If it's friendly, it's less than useful.
I don't see that at all, perhaps because I have no use for unfriendly genius.
Then again, the perfect intelligent robot would be friendly to its owner and
unfriendly toward others, rather like a guard dog.
> Because I can't think of any. And I'm trying rather hard. Because so
> much is at stake I'd rather have lots of iron-clad reasons why AIs
> will wind up being friendly instead of the other way round.
So, what is your position? You think roboticists should be forbidden to make
robots that are too smart?
--J. R.