Re: Future Technologies of Death

Anders Sandberg (asa@nada.kth.se)
01 Jan 1998 18:07:29 +0100


"Martin H. Pelet" <tbm@cyrius.com> writes:

> When AI systems become available, you will of course have the tools
> to read their whole minds directly, which would solve the problem above,
> but this method would violate their rights.

Not necessarily. If you started to examine my innards without
permission, that would be a violation of my rights. But I can give a
doctor at least temporary permission to examine my internal state,
and that does not violate my rights. So if you were an AI who wanted
to demonstrate your responsibility, you could decide to temporarily
grant the examiner access to your mind-states.

> Moreover, a responsibility test performed today would not give you any
> certainty that the person would not turn bad in a year or so because of
> certain influences.

Note that we are not looking for responsibility here, but for the
ability to understand and desire rights, with the responsibilities
they entail. There are plenty of people around who do not use their
rights (like free expression) in any useful way, but most of us would
say it is wrong to remove those rights unless they misuse them
significantly.

> Finally, if such a test came into existence, who would define what
> would be ethically acceptable and who would assure that the rules of
> the test would be adapted according to changes in the ethical standard?
> Would it even be possible for the test not to lag behind the ethical
> standard?

These are important implementation issues. I'm not sure about the
best way of dealing with them; perhaps a regular revision process,
explicitly open to general critique?

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y