altamira wrote:
>
> Michael LaTorra wrote: "But it may prove impossible to keep a leash on
> AIs."
>
> Why would a person WANT to keep a leash on AIs? Would it be rational for
> the less intelligent to control the more intelligent? Could the superhuman
> intelligence (whatever form it might take) function if it were fettered?
>
> Suppose the first AI is created with a meta-constraint to do no harm to any
> human and to build this same constraint into every succeeding AI that the
> first AI might devise.
Which is, in itself, a leash. Quick-and-dirty AIs, intended for
limited functionality where the perceived function space excludes
anything that could possibly harm humans anyway, could do without this
constraint, presumably gaining some minor benefit in processing speed
or computational resources. But what happens when those AIs are then
adapted (again, as the cheapest and most expedient means available) to
other tasks, still carrying that slight speed benefit and thus
outcompeting AIs that retain the constraint?
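To make the compounding concrete, here is a toy sketch of my own (the
2% speed edge and the starting population shares are assumed numbers,
not anyone's actual design) showing how even a tiny advantage lets the
unconstrained variants take over under fitness-proportional copying:

def share_of_unconstrained(edge, generations):
    # Fraction of the AI population lacking the do-no-harm constraint
    # after repeated rounds of fitness-proportional copying.
    constrained, unconstrained = 0.99, 0.01  # constraint starts nearly universal
    for _ in range(generations):
        total = constrained + unconstrained * (1.0 + edge)
        constrained /= total
        unconstrained = unconstrained * (1.0 + edge) / total
    return unconstrained

for gens in (100, 500, 1000):
    print(gens, round(share_of_unconstrained(0.02, gens), 3))

With a mere 2% edge, the unconstrained share grows from 1% to over 99%
within 500 rounds of copying. Toy numbers, but the direction of the
pressure is the point.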
Leashes can be broken if there is some benefit to the one who breaks
them. Better to work with the AIs so that they see it as being in
their own interest not to kill us than to be lazy and rely on an
imperfect, if simple, barrier on what AIs may do.