From: Adrian Tymes (wingcat@pacbell.net)
Date: Sun Mar 09 2003 - 14:03:59 MST
--- "Eliezer S. Yudkowsky" <sentience@pobox.com>
wrote:
> Robert J. Bradbury wrote:
> > While Eliezer's position is I believe partially correct -- in that the
> > sooner we get the Singularity the better -- at least from some positions
> > in that I think he assumes you get a friendly AI, while I'm concerned
> > that before then we might get a rogue AI
>
> That was the old Eliezer. I finally did work out the theory that
> describes what it is going from point A to point B when a moral human
> builds a moral AI, and it turns out that if you build an AI and you don't
> know exactly what the hell you're doing, you die. Period. No exceptions.
You have a *chance* of dying. You could, totally by accident, wind up
doing the right thing anyway. This is not the same thing as guaranteed
death.
> Do you have any ideas for dealing with this besides building FAI first?
> Because as far as I can tell, humanity is in serious, serious trouble.
Yes. Build an AI that you can trust, even if it quickly goes beyond your
control. This is not exactly the same thing as FAI, since it allows for
things like uploads of human beings: the original human was never
completely of your design (unless, of course, it's you yourself, though
that leaves everybody else in the same bind from their perspective), but
if you can trust someone else to be gifted with the capabilities that an
AI would have...