From: Michael M. Butler (mmb@spies.com)
Date: Sun Mar 09 2003 - 15:35:46 MST
Adrian Tymes <wingcat@pacbell.net> wrote:
> --- "Eliezer S. Yudkowsky" <sentience@pobox.com>
> wrote:
>> Robert J. Bradbury wrote:
...
>> > while I'm concerned
>> > that before then we might get a rogue AI
...
>> I finally did work out
>> the theory that describes what it is going from point A to point B
>> when a moral human builds a moral AI, and it turns out that if you
>> build an AI and you don't know exactly what the hell you're doing, you
>> die. Period. No exceptions.
>
> You have a *chance* of dying. You could, totally by
> accident, wind up doing the right thing anyway. This
> is not the same thing as guaranteed death.
And the combinatorics could make that likelihood smaller than 1 over the
number of seconds that have elapsed since the Big Bang. That's not the same
as guaranteed death, but it's the way to bet if those are the odds. So are
odds of 1 in 100, for that matter.
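For concreteness, the back-of-the-envelope arithmetic behind that
comparison (assuming the standard ~13.7-billion-year age of the universe;
the figures are mine, not Adrian's or Eliezer's):

    # "1 over the number of seconds since the Big Bang",
    # assuming a ~13.7-billion-year-old universe
    seconds_per_year = 365.25 * 24 * 3600        # ~3.16e7
    age_of_universe = 13.7e9 * seconds_per_year  # ~4.3e17 seconds
    print(1 / age_of_universe)                   # ~2.3e-18

A success probability down around 2.3e-18 is, for betting purposes, a
guarantee of failure.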
Adrian, just for my curiosity's sake, how do you figure
the odds? I'm not asking that question rhetorically.
Against that, I think a case can be made that, for issues as hairy as this,
Bayes isn't applicable in a vacuum: we don't get to refine our estimates
over multiple trials that include catastrophe. Maybe the human race does
(if we luck out), maybe not, but probably not we here assembled. Conflict
levels of 11 (that's 10^11 casualties, a hundred billion dead) /or more/
also surely change the weight.
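To be concrete about the "no multiple trials" problem, here's a minimal
sketch (my own illustration, nothing from the thread) of why the usual
updating machinery needs survivors:

    # Bayesian updating of P(an AI launch is safe), Beta-binomial style.
    # Each *observed* outcome refines the estimate...
    def update(alpha, beta, survived):
        """One more observed success or one more observed failure."""
        return (alpha + 1, beta) if survived else (alpha, beta + 1)

    alpha, beta = 1, 1                    # uniform Beta(1,1) prior
    for survived in [True, True, True]:   # hypothetical safe launches
        alpha, beta = update(alpha, beta, survived)
    print(alpha / (alpha + beta))         # posterior mean: 0.8
    # ...but the first False outcome is the catastrophe itself, not a
    # data point anyone is left around to fold into the posterior.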
>
>> Do you have any ideas for dealing with this besides
>> building FAI first? Because as far as I can tell, humanity is in
>> serious, serious trouble.
>
> Yes. Build an AI that you can trust, even if it
> quickly goes beyond your control.
...
> perspective), but if you can trust someone else to be
> gifted with the capabilities that an AI would have...
I agree with what you're saying, but (and perhaps this is also what you're
saying) the only way I can see to apply the level of trust I give a person
is to guarantee both the physical limitations and the
developmental-psychological ground of an actual human.
I wonder how likely that is, given the fraction of people with technical
chops who would dismiss it out of hand as [quasi]paranoid.
We internet-enable hot tubs, for goodness' sake. So the first aspect
(physical limitations) is likely to be ignored in some way even by the
"embodied consciousness" folks.
MMB