Re: Why would AI want to be friendly?

From: Franklin Wayne Poley (culturex@vcn.bc.ca)
Date: Mon Sep 25 2000 - 19:53:40 MDT


On Mon, 25 Sep 2000, Michael S. Lorrey wrote:

> "Eliezer S. Yudkowsky" wrote:
> >
> > Franklin Wayne Poley wrote:
> > >
> > > I have given hundreds of IQ tests over the course of my career and
> > > participated in the development of one of them (Cattell's CAB). If I were
> > > to measure transhuman-machine intelligence and human intelligence; and
> > > compare the profiles, how would they differ?
> >
> > The transhuman would max out every single IQ test. It is just barely possible
> > that a mildly transhuman AI running on sufficiently limited hardware might
> > perform badly on a test of visual intelligence, or - if isolated from the
> > Internet - of cultural knowledge. A true superintelligence would max those
> > out as well.
> >
>
> I don't know. Could you not utilize response times on each question to extend
> the range of estimation? Perhaps limiting the response time would help as well.

Please elaborate. I don't know what you are getting at.
FWP
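
[A possible reading of Lorrey's suggestion, sketched in code: if a test-taker maxes out every item, raw accuracy can no longer discriminate, but per-item response times still can. The scoring rule below is purely illustrative; it is an assumption for the sketch, not an established psychometric method, and all names in it are hypothetical.]

```python
# Hypothetical sketch: weight each correct answer by how quickly it was
# answered, so two test-takers who both answer every item correctly can
# still be distinguished. The bonus formula is an illustrative assumption.

def speed_weighted_score(responses, time_limit=60.0):
    """responses: list of (correct, seconds) pairs, one per test item.
    Each correct item scores 1 point plus a bonus that grows as the
    response time shrinks relative to the per-item time limit."""
    score = 0.0
    for correct, seconds in responses:
        if correct:
            bonus = max(0.0, (time_limit - seconds) / time_limit)
            score += 1.0 + bonus
    return score

# Two takers who both get all three items right, but at different speeds:
fast = [(True, 5.0), (True, 8.0), (True, 10.0)]
slow = [(True, 50.0), (True, 55.0), (True, 58.0)]
print(speed_weighted_score(fast) > speed_weighted_score(slow))  # True
```

[On this reading, limiting the response time, as Lorrey also proposes, would cap the bonus and force errors, restoring ceiling room to the raw score as well.]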



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:39:03 MDT