On 5/6/01 10:57 AM, "Jim Fehlinger" <email@example.com> wrote:
> James Rogers wrote:
>> one should be able to do AI on just about any reasonable piece
>> of silicon.
> Ah, now **there**'s the heart of the question, as far as I'm
> concerned. Don't matter what kind of software you write, if
> you don't have big enough iron to run it on. And what, precisely,
> will constitute "reasonable" hardware for this particular job?
Not relevant. AI is AI is AI, even if we don't have the hardware to run it
fast enough to be useful.
I don't think you understand the problem fully. Any evolvable hardware
architecture capable of doing AI implies the capability to generate a more
traditional architecture with roughly the same capability, just utilized
differently. Any algorithm is reducible to an optimal form for any
architecture, and given proper use of the implied circuit-generation
capability, the traditional architecture should come within an order of
magnitude of the same performance. Evolvable hardware necessarily carries a
lot of overhead that would be unnecessary on non-evolvable hardware; that
overhead is the trade-off for speed at a specific task. And you can't claim
massive technological improvements in evolvable hardware without applying
the same improvements to non-evolvable hardware.

In the end, given a finite number of transistors (or whatever), you can only
squeeze out a certain amount of computing power, whether the architecture is
evolvable or not (ignoring intentional stupidity on the part of the
designer). So it is pretty much six of one, half a dozen of the other (yada
yada Kolmogorov yada yada yada).
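The reducibility point can be made concrete with a toy sketch (my own
illustration, not from the original post): an "evolved" configuration and a
hand-reduced implementation of the same function end up computing identical
results; the evolvable substrate just carries extra search machinery. Here
the evolvable hardware is stood in for by a mutable 4-entry lookup table
hill-climbed toward XOR, and the reduced form is the direct expression.

```python
# Toy illustration: an "evolved" lookup table vs. the same algorithm
# reduced to a direct form. All names here are hypothetical examples.
import random

TARGET = {(a, b): a ^ b for a in (0, 1) for b in (0, 1)}  # XOR truth table

def evolve_lut(generations=1000, seed=0):
    """Hill-climb a 4-entry lookup table (the 'evolvable hardware')."""
    rng = random.Random(seed)
    lut = {k: rng.randint(0, 1) for k in TARGET}  # random initial circuit

    def score(table):
        # Count how many inputs the candidate gets right.
        return sum(table[k] == TARGET[k] for k in TARGET)

    for _ in range(generations):
        if score(lut) == len(TARGET):
            break
        k = rng.choice(list(lut))
        trial = dict(lut)
        trial[k] ^= 1  # mutate one entry
        if score(trial) >= score(lut):
            lut = trial  # keep mutations that don't hurt
    return lut

def reduced(a, b):
    """The same algorithm 'reduced to an optimal form': plain XOR."""
    return a ^ b

evolved = evolve_lut()
# Both routes compute the same function; only the overhead differs.
assert all(evolved[k] == reduced(*k) for k in TARGET)
```

The evolved table and the reduced expression are extensionally identical;
everything spent on the mutation/selection loop is overhead the fixed
implementation never pays, which is the trade-off described above.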
> It ain't gonna be silicon, I can tell you that...
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:03 MDT