Eugene Leitl [firstname.lastname@example.org] wrote:
>Mapping an algorithm into reconfigurable hardware is *faster* than
>doing it in all-purpose hardware, unless you can have a silicon
>foundry within your computer which can churn out new dedicated
>ASICs at a MHz rate.
Yet every attempt to do this that I remember ended up running a lot slower than dedicated hardware: first, because they had to keep reconfiguring the chip to do different things, which took a long time; and second, because reconfigurable logic couldn't run at the clock rates of dedicated chips.
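A quick back-of-the-envelope shows how the reconfiguration overhead dominates. Here's a sketch in Python, with illustrative timings I've assumed rather than measured:

    # Back-of-the-envelope: when does reconfiguring pay off?
    # All timings below are assumptions for illustration, not measurements.
    reconfig_time = 10e-3    # seconds to load a new FPGA configuration
    fpga_task_time = 0.5e-3  # seconds per task once configured
    cpu_task_time = 5e-3     # seconds per task on a general-purpose CPU

    # The FPGA must amortize its reconfiguration over n identical tasks:
    #   reconfig_time + n * fpga_task_time < n * cpu_task_time
    n_breakeven = reconfig_time / (cpu_task_time - fpga_task_time)
    print(f"Break-even at ~{n_breakeven:.1f} tasks per configuration")

With these numbers the FPGA only wins if it can run the same configuration for a few tasks in a row; switch tasks more often than that and the general-purpose CPU comes out ahead.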
>In fact reconfigurable hardware
>allows the creation of very dynamic, hyperactive machines
>with unified data/code, the most efficient things theoretically
Perhaps... but there's a big difference between what's theoretically better and what's practically better. So far there's no good reason for believing that this kind of hardware is really better.
>Essentially, we haven't seen a single
>revolution in computing yet since the 1940s.
Yet we've seen probably a million-fold improvement in computing performance in that time, and probably a thousand-fold reduction in cost. What more would a 'revolution' have given us?
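A million-fold gain in fifty-odd years is itself a steady exponential. As a quick sanity check (assuming a smooth growth curve over roughly 1944-1999):

    import math

    # How fast must performance double to reach a million-fold gain?
    speedup = 1e6
    years = 55                      # roughly 1944 to 1999, an assumed span
    doublings = math.log2(speedup)  # about 19.9 doublings
    print(f"{doublings:.1f} doublings -> one every {years / doublings:.1f} years")

That's a doubling roughly every 2.8 years, sustained for half a century, with no 'revolution' required.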
>Of course. The essence of Wintel's success. What I don't understand is
>why after all these years people are still buying it, hook, line, and sinker.
However, that's changing now; the K7 looks like it could give the P-III a real run for its money, and Windows is making up a large fraction of the cost of cheap PCs. Plus open source software greatly simplifies the process of changing CPU architectures; just recompile and you can run all the software you used to run.
>Why pay for one expensive, legacy-ballast CPU and invest in nine
>other hideously complex designs (possibly more complex than the CPU
>itself), each requiring individual resources on the fab if you
>could churn out ~500-1000 CPUs for roughly $500 production costs?
Last I checked, a Z80 was a dollar or two a chip. Why aren't we all running massively parallel Z80 machines? Perhaps because building a machine with 500 CPUs costs far more than the chips themselves, and because writing software that does anything useful on them is a monumental task?
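Amdahl's law makes the software problem concrete: unless almost everything in your program parallelizes, 500 cheap CPUs buy you very little. A sketch, with assumed serial fractions:

    # Amdahl's law: speedup on n CPUs when a fraction p of the work
    # parallelizes perfectly. The values of p below are assumptions.
    def amdahl(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for p in (0.50, 0.90, 0.99):
        print(f"p = {p:.2f}: 500 CPUs give {amdahl(p, 500):.1f}x speedup")

Half-serial code gets a 2x speedup from all 500 chips; even 99%-parallel code only reaches about 83x, wasting most of the machine.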
>You can implement a pretty frisky 32 bit CPU core plus networking
>in ~30 kTransistors, and I guess you'd have semiquantitative die yield assuming
>1 MBit grains.
But what good will it do for me? I used to work with Transputers, which were going to lead to these massively parallel computers built from cheap CPUs. Didn't happen, because there were few areas where massively parallel CPUs had benefits over a single monolithic CPU.
>Engines look very different if you simultaneously operate on
>an entire screen line, or do things the voxel way.
I find discussion of voxel rendering pretty bizarre from someone who complains about my regarding 32MB as a 'reasonable amount' of memory for a graphics chip. Reasonable voxel rendering is likely to need gigabytes of RAM, not megabytes.
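The arithmetic is simple. One byte per voxel on a cubic grid (grid sizes here just for illustration):

    # Memory for a cubic voxel grid at one byte per voxel.
    # The resolutions are illustrative.
    for side in (256, 512, 1024):
        megabytes = side ** 3 / 2 ** 20
        print(f"{side}^3 voxels: {megabytes:,.0f} MB")

A 256^3 grid squeaks into 16MB but is far too coarse for a real scene; 1024^3 is already a full gigabyte, before you store color, opacity, or normals.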
>The reasons why we don't have WSI yet are mostly not technical. It is
>because people don't want to learn.
Anamartic were doing wafer-scale integration of memory chips more than a decade ago; from what I remember, they needed a tremendous amount of work to test each wafer and work out how to link up the chips which worked and avoid the chips which didn't. This is the kind of practical issue which theoreticians just gloss over, and then 'can't understand' why people don't accept their theories.
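A standard Poisson yield model shows why that testing and mapping work is unavoidable: across a whole wafer you are all but guaranteed dead blocks. A sketch with assumed, round numbers for defect density and block area:

    import math

    # Poisson yield: P(block works) = exp(-D * A), for defect density D
    # (defects/cm^2) and block area A (cm^2). Both values are assumed.
    D = 0.5            # defects per cm^2
    block_area = 0.5   # cm^2 per memory block
    blocks = 200       # blocks on the wafer

    block_yield = math.exp(-D * block_area)
    print(f"Per-block yield: {block_yield:.1%}")
    print(f"Expected good blocks: {blocks * block_yield:.0f} of {blocks}")
    print(f"P(whole wafer perfect): {block_yield ** blocks:.2e}")

With these numbers about 78% of blocks work, but the chance of a flawless wafer is around 10^-22; wafer-scale parts only function if you test every block and wire around the failures, which is exactly the work Anamartic had to do.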
The reason we still use monolithic chips is not that people are afraid of trying other solutions, but because we have tried those other solutions, and so far they've failed. That may change, but I don't see any good evidence of that.