Eugene Leitl [eugene.leitl@lrz.uni-muenchen.de] wrote:
>Mapping an algorithm into reconfigurable hardware is *faster* than
>doing it in all-purpose hardware, unless you can have a silicon
>foundry within your computer which can churn out new dedicated
>ASICs at a MHz rate.
>In fact reconfigurable hardware
>allows the creation of very dynamic, hyperactive machines
>with unified data/code, the most efficient things theoretically
>possible.
>Essentially, we haven't seen a single
>revolution in computing yet since 1940s.
>Of course. The essence of Wintel's success. What I don't understand is
>why after all these years people are still buying it, hook, line and sinker.
However, that's changing now; the K7 looks like it could give the P-III a real run for its money, and the Windows licence now makes up a large fraction of the cost of a cheap PC. Plus, open source software greatly simplifies switching CPU architectures: just recompile and you can run all the software you ran before.
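To make the "just recompile" point concrete, here is a toy sketch (the details are illustrative, not a recipe): the same C source builds and runs unchanged whether you hand it to gcc targeting x86, Alpha or PowerPC, which is why a free software distribution can change architectures by rerunning its build rather than rewriting anything.

    #include <stdio.h>

    /* Nothing here is tied to a particular CPU; the compiler sorts
       out word size and byte order for whatever target it was
       built for. */
    int main(void)
    {
        unsigned int probe = 1;

        printf("long is %lu bits, %s-endian\n",
               (unsigned long)(sizeof(long) * 8),
               (*(unsigned char *)&probe == 1) ? "little" : "big");
        return 0;
    }

Anything written to ANSI C and POSIX ports the same way; it's the binary-only applications that actually tie people to one architecture.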
>Why pay for one expensive, legacy-ballast CPU and invest in nine
>other hideously complex designs (possibly more complex than the CPU
>itself), each requiring individual resources on the fab, when you
>could churn out ~500-1000 CPUs for roughly $500 in production costs?
Last I checked, a Z80 was a dollar or two a chip. Why aren't we all running massively parallel Z80 machines? Perhaps because building a machine with 500 CPUs costs far more than just buying the chips (500 of them at a dollar or two each is already $500-$1000 before you add boards, interconnect, memory or assembly), and because writing software that does anything useful on them is a monumental task?
>You can implement a pretty frisky 32 bit CPU core plus networking
>in ~30 kTransistors, and I guess have semiquantitative die yield assuming
>1 MBit grains.
But what good will it do for me? I used to work with Transputers, which were going to lead to these massively parallel computers built from cheap CPUs. Didn't happen, because there were few areas where massively parallel CPUs had benefits over a single monolithic CPU.
>Engines look very different if you simultaneously operate on
>an entire screen line, or do things the voxel way.
I find discussion of voxel rendering pretty bizarre from someone who complains about my regarding 32MB as a 'reasonable amount' of memory for a graphics chip. Reasonable voxel rendering is likely to need gigabytes of RAM, not megabytes.
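A quick back-of-the-envelope makes the scale obvious (assuming nothing cleverer than a raw cube of voxels at one byte each; a real engine storing colour or using octree compression will differ, but not by enough to change the conclusion):

    #include <stdio.h>

    /* Memory needed for an uncompressed cubic voxel volume at one
       byte per voxel. */
    int main(void)
    {
        unsigned long side;

        for (side = 256; side <= 2048; side *= 2) {
            double mbytes = (double)side * side * side
                            / (1024.0 * 1024.0);
            printf("%4lu^3 volume: %6.0f MB\n", side, mbytes);
        }
        return 0;
    }

Anything at a resolution people would call reasonable lands at a gigabyte or more, which is rather a long way past 32MB.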
>The reasons why we don't have WSI yet are mostly not technical. It is
>because people don't want to learn.
Anamartic were doing wafer-scale integration of memory chips more than a decade ago; from what I remember, they needed a tremendous amount of work to test each wafer and work out how to link up the chips which worked and avoid the chips which didn't. This is the kind of practical issue which theoreticians just gloss over, and then 'can't understand' why people don't accept their theories.
Mark