RE: Hofstadter Symposium [was Re: it was all a gag]

From: Billy Brown (bbrown@transcient.com)
Date: Tue Apr 04 2000 - 19:29:25 MDT


Eliezer S. Yudkowsky wrote:
> Billy Brown wrote:
> > In theory it could be, if you had the storage, but the result would be
> > useless. What good is a human-level AI that runs 10^6 times slower than a
> > real human?
>
> A great deal of good. It can rewrite its own source code, however
> slowly. It can use the conscious thoughts as a brief adjunct to mostly
> autonomic processes. It can run a lot faster on next year's hardware,
> or convince IBM or some funder to buy you a lot more of this year's
> hardware. You can be sure that at least you've solved the problem of
> intelligence. And you can write up a Sysop seed and put it on a CD and
> hand it to Eric Drexler and ask him to run it on the first nanocomputer,
> thus reducing the window of vulnerability.

If we were talking about 1 or 2 orders of magnitude I would agree, but six?
A ten-minute conversation with the AI would take almost 20 years! It would
take many decades just to figure out whether it was actually
human-equivalent, and even granting the AI's other advantages, it would be
far slower than any human worker.
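
For the record, here is the arithmetic behind that figure, as a quick
Python sketch. Only the 10-minute conversation and the slowdown factors
come from the thread; the wall_clock helper and the output format are my
own:

    import math

    def wall_clock(subjective_minutes, slowdown):
        """Real-world time elapsed for a given amount of subjective AI time."""
        minutes = subjective_minutes * slowdown
        hours = minutes / 60.0
        days = hours / 24.0
        years = days / 365.25
        if years >= 1:
            return "%.1f years" % years
        if days >= 1:
            return "%.1f days" % days
        return "%.1f hours" % hours

    for slowdown in (1e2, 1e3, 1e6):
        exp = int(round(math.log10(slowdown)))
        print("10-minute chat at 10^%d slowdown: %s"
              % (exp, wall_clock(10, slowdown)))

    # Prints: 10^2 -> 16.7 hours, 10^3 -> 6.9 days, 10^6 -> 19.0 years

Note that the same table shows why 2-3 orders of magnitude is a very
different regime: hours or days per conversation, not decades.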

I think a slowdown of 2-3 orders of magnitude is the limit of usefulness,
even for a research tool. If it runs slower than that, you're better off
ignoring it in favor of more tractable systems while you wait for faster
hardware to come along.

Billy Brown
bbrown@transcient.com


