Re: Why would AI want to be friendly?

From: Michael S. Lorrey (retroman@turbont.net)
Date: Tue Sep 26 2000 - 13:03:30 MDT


Franklin Wayne Poley wrote:
>
> On Mon, 25 Sep 2000, Michael S. Lorrey wrote:
>
> > "Eliezer S. Yudkowsky" wrote:
> > >
> > > Franklin Wayne Poley wrote:
> > > >
> > > > I have given hundreds of IQ tests over the course of my career and
> > > > participated in the development of one of them (Cattell's CAB). If I were
> > > > to measure transhuman-machine intelligence and human intelligence; and
> > > > compare the profiles, how would they differ?
> > >
> > > The transhuman would max out every single IQ test. It is just barely possible
> > > that a mildly transhuman AI running on sufficiently limited hardware might
> > > perform badly on a test of visual intelligence, or - if isolated from the
> > > Internet - of cultural knowledge. A true superintelligence would max those
> > > out as well.
> > >
> >
> > I don't know. Could you not utilize response times on each question to extend
> > the range of estimation? Perhaps limiting the response time would help as well.
>
> Please elaborate. I don't know what you are getting at.

Well, I've taken many IQ tests, and I've often played chess against the
computer, where I found I won more often when I limited the amount of time the
computer could think about its next move. So either counting the amount of time
a subject takes to answer each question, or else limiting the amount of time
the subject can take per question, should let us measure the IQ of someone who
could ace any IQ test given an indefinite amount of time to finish it. Each
progressively harder question takes more time for a subject of a given
intelligence to answer, so response time still carries information after raw
accuracy hits the ceiling.
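
For illustration, here's a minimal sketch of what time-weighted scoring could
look like (the function name, the 30-second reference time, and the credit
rule are all made up for the example, not taken from any real test battery):

    def time_weighted_score(responses, reference_time=30.0):
        # responses: list of (correct, seconds) pairs, one per item.
        # Correct answers earn more credit the faster they arrive,
        # so two subjects who both ace the test can still be ranked.
        score = 0.0
        for correct, seconds in responses:
            if correct:
                # Credit shrinks as response time grows; the 1-second
                # floor keeps an instant answer from blowing up the score.
                score += reference_time / max(seconds, 1.0)
        return score

    # Subject A answers everything correctly but slowly; subject B
    # answers everything correctly and quickly. Both max out raw
    # accuracy, but the timed score still separates them.
    a = time_weighted_score([(True, 60.0), (True, 90.0), (True, 120.0)])
    b = time_weighted_score([(True, 5.0), (True, 8.0), (True, 15.0)])
    assert b > a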


