Doug Bailey wrote:
> A Colorado partnership consisting of three people claims to have
> developed a technology to turn the Internet into one massive
> supercomputer. I'm a bit skeptical of this claim but the PR
> Newswire deemed it fit enough to publish. It will be interesting
> to see the technical specs of this concept. If tenable, it will
> be more interesting to see what, if any, emergent properties
> develop due to the implementation of such technology..
That loud bang was my hype meter bouncing off the ceiling. Hang on a sec while I turn down the gain...
> "FORT COLLINS, Colo. Jan. 11. An invention that will exponentially
> increase the speed and capacity of Internet data flow and create the
> world's first global "virtual mainframe" was announced today by
> Copernicus Technologies."
People have been running custom distributed programs across the Internet for a few years now. This looks like a standardized way of doing the same thing. It could be useful for solving some types of problems, but it isn't going to change the carrying capacity of the network.
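What those existing projects do is farm out independent work units and merge the small results. A minimal sketch of that pattern (all function names here are my own illustration, not anything from Copernicus2):

```python
# Sketch of farming out independent work units, the way existing
# distributed-computing projects already work. Hypothetical names.

def make_work_units(data, chunk_size):
    """Split a big job into independent chunks any node can process."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def process_unit(unit):
    """Each 'node' works on its chunk without talking to the others."""
    return sum(x * x for x in unit)

def combine(results):
    """The coordinator only merges the small per-unit answers."""
    return sum(results)

units = make_work_units(list(range(100)), chunk_size=10)
total = combine(process_unit(u) for u in units)
# Same answer as doing it serially, just split into 10 pieces.
assert total == sum(x * x for x in range(100))
```

Note that nothing in this pattern moves more data across the network; it only spreads computation around, which is why it has no bearing on carrying capacity.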
> "Copernicus2 TM uses high-resolution GPS timing signals across the
> Internet to create a massive parallel supercomputing system."
Cute trick - but that means you need to buy their GPS receiver to use the system. That's going to slow down adoption markedly, even if the system works.
> "Copernicus2 TM will structure the Internet or any other network to
> function as a single supercomputer. This massive parallel
> processor will
> be capable of executing complex programs at extremely high speed.
But only if the program is solving lots and lots of small, relatively independent problems. If you need one big calculation done, it isn't going to help much.
> Virtual modeling of the physical world in real time will be possible
> with greater accuracy and detail than with existing supercomputers. It
> also could be configured as a neural network, with each user
> representing a single node.
Yeah, right. The latency of the Internet is too high. You'd get a neural net where signals take hundreds of milliseconds to move from one node to another. The increase in computing power would be largely cancelled out by the drop in signaling speed.
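A back-of-the-envelope on that latency (my numbers, not theirs): at a ~100 ms cross-Internet round trip, each "neuron" gets only a handful of signal exchanges per second, and anything requiring sequential hops is slower still.

```python
# Rough latency arithmetic. The 100 ms figure is an assumed
# typical cross-Internet round trip, not a number from the article.

internet_hop_s = 0.100              # ~100 ms per round trip
exchanges_per_second = 1.0 / internet_hop_s
print(exchanges_per_second)         # 10 signal exchanges per node per second

# An update that must traverse, say, 5 sequential hops:
sequential_hops = 5
updates_per_second = 1.0 / (sequential_hops * internet_hop_s)
print(updates_per_second)           # 2 full passes per second
```

Compare that with on-chip signaling measured in nanoseconds and the "massive neural network" starts to look like a very slow one.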
The full article makes some highly unlikely claims about magical increases in bandwidth and about layering multiple networks on the same infrastructure. I suspect someone garbled an interview and got the information scrambled.
Billy Brown, MCSE+I