Re: Submolecular nanotech [WAS: Goals]

Raymond G. Van De Walker (rgvandewalker@juno.com)
Mon, 24 May 1999 18:40:29 PDT

Anders Sandberg (asa@nada.kth.se) said:
>The problem seems to be that it is impossible to test very complex
>systems for all possible contingencies, and this will likely cause
>trouble when designing ufog. How do you convince the customers it is
>perfectly safe?
I program medical and avionic systems, and the general criteria are pretty
straightforward. You test the thing for its designed behavior, and then you
test it for environmental benignity (e.g. operating-room equipment has saline
solution poured on it, steel rods poked into open orifices, and various shorts
applied to the power plug).
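To make that concrete, here is a minimal sketch of that kind of single-fault
environmental test in Python; the Device class and the fault list are
illustrative stand-ins for a real hardware-in-the-loop rig, not anything out
of an actual standard:

    class Device:
        """Toy model of a piece of operating-room equipment."""
        def __init__(self):
            self.harmful_output = False

        def inject(self, fault):
            # In a real lab the fault is physical: saline poured on the
            # case, a steel rod in an orifice, a shorted power plug.
            # Here we just check that the unit stays benign.
            assert fault in ("saline_spill", "steel_probe", "power_short")
            return not self.harmful_output

    def test_environmental_benignity():
        for fault in ("saline_spill", "steel_probe", "power_short"):
            unit = Device()  # fresh unit per fault: one fault at a time
            assert unit.inject(fault), fault + " produced a harmful output"

    test_environmental_benignity()
    print("all single-fault injections benign")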

>You get the same problem with AI: what testing would be required
>before an AI program was allowed to completely run a nuclear power
>plant?
Well, speaking as a professional safety engineer, I think this would be an easy
argument. Just test the program with the same simulation used to train the
human operators. If it does OK, then it's OK.
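In code, that certification argument is just a loop over the operator-training
scenarios; the simulator and controller interfaces below are hypothetical,
sketched only to show the pass criterion:

    def certify(controller, simulator, scenarios, passing_score):
        """Pass the AI through exactly the exam a licensed human
        operator must pass: every training scenario, same scoring."""
        for scenario in scenarios:
            state = simulator.reset(scenario)
            while not simulator.done():
                state = simulator.step(controller.act(state))
            if simulator.score() < passing_score:
                return False  # failed a scenario a human must handle
        return True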

However, most regulatory environments require that no single fault be able to
induce a harmful failure (this is simple common sense, really). Therefore, one
might have a much easier time with certification if there were a second AI,
with a different design, to second-guess or cooperate with the first. This
makes the system far less prone to fail the first time an AI makes a mistake.
And it _will_ make one eventually, right?
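A sketch of the two-channel arrangement I mean, assuming both controllers emit
a numeric setpoint (the names and the scram command are illustrative, not any
real plant API):

    SAFE_SHUTDOWN = "scram"

    def command(primary, secondary, state, tolerance=0.0):
        # Two independently designed channels must agree. On a
        # disagreement at least one of them is wrong, and since no
        # single fault may cause a harmful output, we fail safe.
        a = primary.act(state)
        b = secondary.act(state)
        if abs(a - b) > tolerance:
            return SAFE_SHUTDOWN
        return a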
