RE: Hofstadter Symposium [was Re: it was all a gag]

From: Billy Brown (bbrown@transcient.com)
Date: Tue Apr 04 2000 - 15:57:00 MDT


Mike Linksvayer wrote:
> I hadn't heard of the broadcast architecture before (I don't attempt to
> keep current with nanotech research, though hardly anyone in the
<snip>
> My intuition (and that's all I have on this point) doesn't
> find this one-sentence version of the broadcast architecture very
> compelling in terms of cost or danger.
<snip>
> On controllability, it
> seems that if nanobots can be broadcast instructions, then they, having
> security bugs, can be broadcast bad instructions.

There are two different applications of this idea that I am familiar with.
One is a strategy for implementing early assemblers: instead of trying to
build a full-fledged assembler with onboard control, you build a much
simpler device that can be controlled externally by an ordinary macro-scale
computer. This makes the problem of building the very first assemblers a
bit easier, because the target machine is less complex.

The other use is a form of specialization in more mature systems. Instead
of trying to make completely universal nanobots with lots of onboard
intelligence, you build remote-controlled robots with a modest level of
built-in smarts, and let a much more capable central system (either
microscopic or macro-scale) direct their activities through one of several
broadcast communication methods.
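To make that division of labor concrete, here is a toy sketch in Python
(purely illustrative; every name in it is hypothetical, and a real system
would signal through something like pressure pulses rather than method
calls):

    class SimpleNanobot:
        """Minimal device: executes one primitive per received command,
        with no onboard program store of its own."""
        PRIMITIVES = {"move", "bind", "release", "deposit"}

        def execute(self, op, arg=None):
            if op not in self.PRIMITIVES:
                return  # unknown or garbled commands are simply ignored
            # ... actuate the corresponding mechanical primitive here ...

    class BroadcastController:
        """Macro-scale computer; the only place the full plan lives."""
        def __init__(self, bots):
            self.bots = bots

        def run(self, plan):
            for op, arg in plan:        # plan = the full assembly sequence
                for bot in self.bots:   # the same signal reaches every bot
                    bot.execute(op, arg)

    # One central controller drives a swarm of identical, stateless workers:
    swarm = [SimpleNanobot() for _ in range(1000)]
    BroadcastController(swarm).run([("move", (1, 0, 0)), ("bind", "C")])

All of the complexity lives in the controller; the individual bots are
dumb instruction-followers.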

The main effect on controllability is simply that you have far fewer copies
of those complex control systems, and the ones you do have can be built
with correspondingly more redundancy and error checking (since they don't
have to move quickly or fit into constricted work areas). A security flaw
then has to be patched in a handful of controllers rather than in every
individual nanobot.

> Both seemed to indicate that
> today's computers simply don't have the storage or horsepower needed.
> I can understand storage, but given an intelligent program and
> glacially slow hardware, why can't it just be really slow?

In theory it could be, if you had the storage, but the result would be
useless. What good is a human-level AI that runs 10^6 times slower than a
real human?
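
To spell out the arithmetic (using Koza's 10^15 ops per "brain second"
figure quoted below, and a generous 10^9 ops/sec for a current desktop -
both assumptions, obviously):

    brain_ops_per_subjective_second = 1e15  # Koza's estimate, quoted below
    pc_ops_per_second = 1e9                 # generous for a year-2000 desktop
    slowdown = brain_ops_per_subjective_second / pc_ops_per_second
    print(slowdown)          # 1e6
    print(slowdown / 86400)  # ~11.6 real days per subjective second

At that rate the AI would need a week and a half of wall-clock time to
think for one second.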

> John Koza said that in numerous attempts to have a genetic program
> learn to model some tiny aspect of human intelligence or perception,
> perhaps equivalent to one second of brain activity (I know this doesn't
> really make sense, I'm fuzzy on the details and I don't recall any of
> the specific cases) that he found he required 10^15 operations
> (requiring months on standard PCs). So, a "brain second" is 10^15
> operations, and this huge number obviously poses a huge barrier to
> machine intelligence. Or something like that. I'll have to watch the
> webcast when it is available, seemed like an interesting point.

Well, not that big a barrier. There is an experimental 10^15 FLOPS system
under construction now (for ~$100M, I think), so it should only take a
decade or so for that kind of power to trickle down to the ~$1M systems
that AI researchers can actually get time on.
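
That "decade or so" is just the usual Moore's law extrapolation (cost per
FLOPS halving roughly every 18 months, which is an assumption, not a
guarantee):

    import math

    cost_ratio = 100e6 / 1e6           # $100M machine -> $1M machine
    doublings = math.log2(cost_ratio)  # ~6.6 halvings of cost per FLOPS
    years = doublings * 1.5            # ~18 months per halving
    print(round(years, 1))             # ~10.0 years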

> Even while listening, I was confused concerning Koza's argument
> vis-a-vis the hardness of machine intelligence. It seems (as Kurzweil
> later pointed out concerning his speech recognition software) that once
> a genetic program "learns" a desired behavior, it can be copied
> infinitely, so the operations required to get to a certain level of
> functioning are mostly irrelevant.

The problem is that it looks like it will take 10^15 FLOPS just to run a
human-equivalent system in real time. Evolving one with genetic algorithms
would probably take years of run time on a much bigger system (say, 10^17 -
10^19 FLOPS, so that you can run a population of hundreds or thousands of
candidate programs). You might be able to evolve small sub-components on
more practical systems, but putting it all together is a big problem.
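
That range is just the per-individual cost multiplied by a plausible
population size; roughly:

    per_individual_flops = 1e15            # one real-time human-equivalent program
    for population in (100, 1000, 10000):  # hundreds to thousands of candidates
        print(f"{population} individuals -> {per_individual_flops * population:.0e} FLOPS")
    # 100 -> 1e+17, 1000 -> 1e+18, 10000 -> 1e+19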

Billy Brown
bbrown@transcient.com


