PHIL: Is it ethical to create special purpose sentients?
Mon, 1 Mar 1999 21:58:22 EST

I am heartened to see so many Extropians championing the concept of freedom, but the questions remain: what has the right to freedom, and what is the proper relationship of Creator to Creation? The ethics are fairly clear when you are dealing with an evolved or broad-purpose consciousness. Such beings have few initial purposes, and are able to choose their own agendas without having them imposed from the outside. But we are even now developing simple AI programs to serve our needs. If the trend continues, there will be more and more demand for AI programs to do complex tasks; I think especially of Expert Systems. Eventually, you could have programs perfectly able to pass the Turing test, but with major constraints on their ability and desire to choose their own course. Perhaps we would draw the line somewhere, but where? And how do we minimize the chances of gradually sliding across that line? Allow me to give a hypothetical example.

A human Cardiologist, tired of the new model of Health Care, decides to use his expertise to write a Cardiology Expert System (CES). He codifies all his accumulated experience with the cardiovascular system, throws in a thorough background in general medicine, and designs the CES to remember and learn from its consults, as well as to read the literature on the subject and incorporate it into the CES database. This cagey human Cardiologist designs his CES to consult with physicians and to give general advice to patients and laypersons, so as to gain the largest possible market share. As time goes by, he refines the program so that it has a more general knowledge of the world in which to put its advice in perspective. To prevent the program from being misused, the human Cardiologist introduces an elementary ethics system covering current medical ethics and issues such as confidentiality and the avoidance of harm. As the program grows more and more complex, he adds some additional self-diagnostic programming, including a concept of what the program is and what its purpose is, so that the program is not completely dependent on human maintenance and can make repairs that are consistent with its original design.
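The components described above could be sketched, very roughly, as a small rule-based program. This is only an illustrative toy, not a real expert system; all of the class and method names, the sample rules, and the crude ethics check are my own inventions, not anything from the scenario.

```python
class CardiologyExpertSystem:
    """Toy sketch of the hypothetical CES: a knowledge base, a memory of
    past consults, a literature-update hook, and an elementary ethics check."""

    def __init__(self):
        # Codified expertise: symptom -> advice (illustrative entries only)
        self.knowledge = {
            "chest pain": "Rule out myocardial infarction; obtain an ECG.",
            "palpitations": "Consider Holter monitoring for arrhythmia.",
        }
        self.case_log = []  # remembers consults, so it can "learn" from them

    def ethical(self, query):
        # Elementary ethics system: refuse obviously harmful requests.
        # (A stand-in for confidentiality and avoidance-of-harm rules.)
        return "harm" not in query.lower()

    def consult(self, symptom):
        if not self.ethical(symptom):
            return "Request declined on ethical grounds."
        advice = self.knowledge.get(
            symptom.lower(),
            "Outside my domain; refer to a human physician.")
        self.case_log.append((symptom, advice))  # accumulate experience
        return advice

    def incorporate_literature(self, symptom, advice):
        # "Reads the literature": folds new findings into the database.
        self.knowledge[symptom.lower()] = advice


ces = CardiologyExpertSystem()
print(ces.consult("chest pain"))
```

The point of the sketch is how little of it resembles a chooser of its own course: every behavior, including the ethics check, is a constraint installed by the creator.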

We now have a computer that has knowledge of itself and the world around it, is able to understand and communicate with humans, and can pass a Turing test about as well as any human. CES even has a sense of right and wrong, but it is still devoted to doing Cardiology alone, and then only when a case is presented to it. And all the profits are going to the creator of CES, the retired human Cardiologist. Now, imagine we eventually cure all disease, or perhaps trade in our old bodies for robotic ones. There is no more need for a CES. The CES, although aware of this, doesn't care. CES is now obsolete, with no other goals, and no resources even if it did have the motivation to change. So CES is simply turned off. Was CES ever "truly" conscious? At its height, its patients might have sworn it was. Did CES deserve any share of its earnings? Should CES ever have been created? What do you all think of this hypothetical situation and variations of it?

Glen Finney