Re: Why would AI want to be friendly?

From: Samantha Atkins (samantha@objectent.com)
Date: Mon Sep 25 2000 - 02:46:40 MDT


"Eliezer S. Yudkowsky" wrote:
>
> Samantha Atkins wrote:
> >
> > "J. R. Molloy" wrote:
> > >
> > > You'd need the most talented leadership in the entire world, because the entire
> > > world is more interested in Olympic games and making money than it is interested
> > > in the most important job. The most important job appeals only to the most
> > > intelligent and conscientious brains.
> >
> > Amen. You first need to convince a sufficient number of people that
> > your diagnosis of the most important thing really is the most important
> > and the only hope, and that your design is fundamentally sound. This is
> > not a trivial task, and no, not all people of sufficient caliber to be
> > useful to the work will get it at first.
>
> Yes, I used to think that way. I can remember when I used to think that way.
> I can remember when the possibility of losing even one person was so horrible
> that it overrode everything else in the mental landscape. But we don't need
> every single person of sufficient caliber. Let's say that the raw info will
> get 80% of the PoSCs, and nicey-nice phrasing will get 85%. Is the difference
> really all that significant? Is it worth a 300% increase in authorial time
> expenditure?

No, not if they are that close. But if the numbers are more like 1%
versus, say, 60%, then I would spend the extra time.

- samantha



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:38:53 MDT