I've read through the article, and while it is not complete
hype, it stretches things somewhat.
It does not appear that they have invented general-purpose
optical computing. They may have developed robust methods
for moving computations that are more efficient in the optical
realm out of the digital realm. That advances a limited subset
of the computational phase space significantly. For example,
it may "obsolete" the efforts of SETI@Home (something most
people know I view as useless from the get-go).
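(As a hypothetical illustration of the kind of computation that
maps well onto optics: Fourier-type transforms, which a lens
performs essentially for free and which sit at the heart of a
SETI@Home-style spectral search. The sketch below, using NumPy
rather than anything from the article, shows the digital version
of such a search; the signal parameters are made up for the
example.)

```python
import numpy as np

# A SETI-style narrowband search: find the strongest spectral
# line in a noisy time series. Optical systems excel at exactly
# this Fourier-transform step; digitally it costs O(n log n).
rng = np.random.default_rng(0)
n = 4096
# Pure tone at 440 cycles per record, buried in unit-variance noise.
signal = np.sin(2 * np.pi * 440 * np.arange(n) / n) + rng.normal(0, 1, n)

# Transform and locate the peak bin (skipping the DC component).
spectrum = np.abs(np.fft.rfft(signal))
peak_bin = int(np.argmax(spectrum[1:])) + 1
print(peak_bin)  # the 440-cycle tone dominates the noise floor
```

The tone's spectral amplitude (n/2 = 2048) towers over the noise
bins, so the peak finder recovers bin 440 reliably.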
On the other hand, I do not see how these methods significantly
advance such things as weather or molecular-dynamics simulations.
Anders or others might comment on their relevance to neural-network
applications, but I speculate that their relevance there is low.
There is a *big* difference between the demonstration of
tera-ops performance for a specific application and the
demonstration of it for *all* applications. We need to
be mindful of that fact.
This archive was generated by hypermail 2b30 : Sat May 11 2002 - 17:44:12 MDT