> From: James Rogers <firstname.lastname@example.org>
> I am curious as to people's opinions of Haskell as a programming language.
> Any comments on notable strengths or weaknesses/limitations (beyond the
> really obvious ones) would be appreciated. Of particular interest to me are
> systemic limitations that are difficult to program around, or which don't
> scale well conceptually.
I think Haskell's main strength is its completely denotational approach
to types. By this I mean the way in which you define types by a mechanism
quite similar in spirit to mathematical definition by algebraic expressions,
inductive schemas, case analysis, implicit composition-decomposition,
and higher-order capabilities. This gives you a very neat level of
abstraction, and lends itself to a concise and clear capture of the
conceptual solution your program provides. It also lets you do an
impressive amount of logical reasoning on the program as a mathematical
object, which makes it possible to give proof-based justifications of
critical behaviour.
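To make that concrete, here is a minimal sketch of the style I mean; the `Tree` type and the function names are my own invented illustration, not from any particular library:

```haskell
-- Definition by an algebraic expression: a tree is either a leaf or a
-- node combining two subtrees and a value (an inductive schema).
data Tree a = Leaf | Node (Tree a) a (Tree a)
  deriving (Show, Eq)

-- Inductive definition by case analysis on the constructors:
depth :: Tree a -> Int
depth Leaf         = 0
depth (Node l _ r) = 1 + max (depth l) (depth r)

-- Higher-order capability: the transformation to apply is itself
-- a parameter of the function.
mapTree :: (a -> b) -> Tree a -> Tree b
mapTree _ Leaf         = Leaf
mapTree f (Node l x r) = Node (mapTree f l) (f x) (mapTree f r)
```

The equations for `depth` read almost exactly like the inductive mathematical definition, which is what makes equational reasoning (for instance, proving `depth (mapTree f t) == depth t` by structural induction) practical.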
I also think Haskell's main weakness is its completely denotational approach
to types. I know I'm sounding like a bad joke, but it's true in my eyes.
As much as people like me would like to crown denotational languages as
the pinnacle of programming, there are lots of real-world situations in
which this becomes a pain. Heavily operational behaviours can be coded
reasonably well, but doing so forces you through a lot of abstraction to
get your result. This feels like a clumsy burden at times, though it is
my understanding that several years of practice with the "monadic style"
have produced many reasonably good pragmatic ways of coping with it.
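As a sketch of what that monadic style looks like, here is a hand-rolled `State` monad (kept self-contained for illustration; in practice you would use `Control.Monad.State` from a library rather than defining it yourself) driving a small "operational" computation:

```haskell
-- A state-threading monad, written denotationally: a stateful
-- computation is just a function from a state to a result and
-- a new state.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in (h a, s'')

instance Monad (State s) where
  State g >>= k = State $ \s ->
    let (a, s') = g s in runState (k a) s'

get :: State s s
get = State $ \s -> (s, s)

put :: s -> State s ()
put s = State $ \_ -> ((), s)

-- Operational-looking code, still a pure value: a fresh-number counter.
fresh :: State Int Int
fresh = do
  n <- get
  put (n + 1)
  return n
```

Running `runState (fresh >> fresh) 5` threads the counter through both steps, yielding `(6, 7)` -- imperative in feel, yet the whole thing is an ordinary pure function. The layers of abstraction needed to get there are exactly the burden I mean.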
Another limitation is the module system, which is decidedly primitive
as far as module systems go.
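For concreteness, the following is roughly the whole facility (the `Stack` module and its names are my own illustration): a namespace plus an export list that can hide a type's constructors, making it abstract to clients. There is no parameterization over modules in the style of ML functors, and no nesting, which is the primitiveness I mean.

```haskell
-- In its own file this would begin:
--   module Stack (Stack, empty, push, pop) where
-- The export list omits the MkStack constructor, so clients can only
-- build stacks through the exported operations. Namespacing plus
-- selective export is essentially all the module system provides.
newtype Stack a = MkStack [a]
  deriving (Show, Eq)

empty :: Stack a
empty = MkStack []

push :: a -> Stack a -> Stack a
push x (MkStack xs) = MkStack (x : xs)

pop :: Stack a -> Maybe (a, Stack a)
pop (MkStack [])       = Nothing
pop (MkStack (x : xs)) = Just (x, MkStack xs)
```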
Here our research groups have produced wonderful work using Haskell, and
that work seems to confirm my impression: it's amazingly good for problems
that can be expressed denotationally (as an example, the proof-editor
assistant Alfa that I use for my own research), and somewhat clumsy for
problems that are more of an operational nature. An interesting research
direction that has shown relief for this clumsiness comes from dependent
type systems (with which I also work in my own research, though with a
more mathematically oriented aim). We have some working prototypes
(Cayenne), but they probably still need more development. A different
approach has been to introduce a layer of reactive objects, embedding
operational behaviour more clearly than monadic style does. The current
prototype (O'Haskell) looks good.
On a separate issue, I think Haskell is probably the best language to
introduce freshmen to programming. I've followed closely some teaching
experiments run by colleagues, and the difference in the quality of
learning and the breadth of the mental toolkit students acquire seems
quite impressive compared with introducing them to programming through a
traditional operational language.
In short, I think Haskell hasn't been totally able to bridge the traditional
"denotational vs. operational" divide, the decades-old complaint about
languages of its family based on the lambda calculus. Its systemic and
scalability limitations can be traced back to that divide, so you may find
it awesome for huge complex systems or you may find it mediocre, and it
seems to depend more on the denotational-or-operational nature of the problem
than on size or complexity.
I'm not sure if this was of any help, feel free to curse me for wasting
your time if I'm just babbling about stuff that you already knew. ;-)
This archive was generated by hypermail 2b30 : Sat May 11 2002 - 17:44:13 MDT