Another reason monads are cumbersome in other languages is that other languages use strict evaluation. If you consider the naively implemented function
skipMany :: Parser a -> Parser ()
skipMany p = (p >> skipMany p) <|> return ()
and then consider how you might write it in some other language, you'll note that the translation, strictly evaluated, gives infinite recursion. Working with monadic computation in practice requires some form of non-strict evaluation. Another, less catastrophic example would be the following.
printNTimes :: Int -> String -> IO ()
printNTimes 0 s = return ()
printNTimes n s = putStrLn s >> printNTimes (n - 1) s
Strictly evaluated, that would build up Theta(n) memory usage.
For this reason, the more computation-oriented monads cannot be used practically in other languages without added weight to avoid strict evaluation.
Ironically, it's the non-monadic uses that cause the problems -- those that don't need the full power of 'bind', i.e. uses where the monad's user doesn't pass return values from previous actions to future actions. When you do pass a return value along, you end up writing a lambda expression, which naturally suspends strict evaluation. It's when you use monads only in ways that applicative functors can be used (which often applies to parsers) that you suffer. It's not the end of the world; there certainly are workarounds, but they require adding explicit promises everywhere, which is just a pain.
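To make that concrete, here's a rough Python sketch (all names are mine, not any library's): with 'bind', the caller's lambda suspends the rest of the chain, while a '>>'-style combinator evaluates its second argument eagerly at the call site.

```python
# Rough Python sketch of the point above (all names are mine, not any
# library's). An "action" is a zero-argument function run for effect.

def bind(action, k):
    # The caller hands us a lambda k; that lambda suspends the rest of
    # the chain, so recursive definitions are safe even when evaluation
    # is strict.
    return lambda: k(action())()

def then(action, next_action):
    # The >>-style combinator: next_action is evaluated eagerly at the
    # call site, before `then` even runs.
    return lambda: (action(), next_action())[1]

def echo(v):
    return lambda: v

# Fine under strict evaluation: the recursive call sits inside a lambda.
def count_down(n):
    if n == 0:
        return echo('done')
    return bind(echo(n), lambda _: count_down(n - 1))

# Builds the entire Theta(n) chain before running anything; an
# unbounded skipMany written this way would never return at all.
def count_down_bad(n):
    if n == 0:
        return echo('done')
    return then(echo(n), count_down_bad(n - 1))

print(count_down(3)())  # -> done
```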
The first example, naively translated to scheme, results in:
(define (skip-many a)
  (or (and (a) (skip-many a))
      (void)))
This is not infinite recursion. The Parser monad used in the Haskell version is simply a stream dressed in monadic clothes (and even C has streams a la FILE*).
As for the second example, the naive translation to scheme is:
(define (print-n-times n s)
  (if (= n 0)
      (void)
      (begin (display s)
             (newline)
             (print-n-times (- n 1) s))))
Neither of your translations convert your code to anything that has to do with monads, and your first translation is just plain broken.
Your print-n-times doesn't return an action -- it's an imperative function that performs an action.
The naive translations to Scheme would be as follows:
(define (skip-many p)
  (<|> (>> p (skip-many p))
       (return 'nothing)))

(define (print-n-times n s)
  (if (= 0 n)
      (return 'nothing)
      (>> (put-str-ln s)
          (print-n-times (- n 1) s))))
In order to fix the infinite recursion in my first example, the parsing framework I made in Scheme had a macro `thunk-parser` (actually, it had a shorter, stupid-sounding name) which for all intents and purposes just wraps the expression in a (lambda ()).
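In Python, the same trick might look roughly like this (names are mine; a "parser" here maps a string and index to a (value, index) pair, or None on failure):

```python
# A rough Python rendering of the thunk trick described above.

def thunk(make_parser):
    # The analogue of wrapping in (lambda ()): defer building the
    # parser until it is actually applied to input.
    return lambda s, i: make_parser()(s, i)

def alt(p, q):
    # Try p; fall back to q on failure.
    def parse(s, i):
        r = p(s, i)
        return r if r is not None else q(s, i)
    return parse

def seq(p, q):
    # Run p, then q on the remaining input.
    def parse(s, i):
        r = p(s, i)
        if r is None:
            return None
        _, j = r
        return q(s, j)
    return parse

def pure(v):
    return lambda s, i: (v, i)

def char(c):
    return lambda s, i: (c, i + 1) if i < len(s) and s[i] == c else None

def skip_many(p):
    # Without thunk(...) the recursive call would run at construction
    # time and never return.
    return alt(seq(p, thunk(lambda: skip_many(p))), pure(None))

print(skip_many(char('a'))("aaab", 0))  # -> (None, 3)
```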
Agreed; neither of the translations uses monads, because monads are a semantic hand-job prescribing order within a declarative language. However, Scheme is not a declarative language; order is implied in the structure, so monads aren't necessary.
>> Your print-n-times doesn't return an action...
Nope, look again. It returns the results of applying `void'. Who knows what (void) may return in a loosely-typed language?
>> ...it's an imperative function that performs an action.
Well, considering Scheme is an imperative language with side-effects, it's hard to dodge such criticism.
>> The naive translations to Scheme would be as follows...
That's not a translation from Haskell to Scheme, that's writing Haskell using a prefix syntax.
>> Also, both your examples rely heavily on non-strict evaluation. 'and' and 'or' are both macros that non-strictly evaluate expressions.
I went back and re-read your original post:
>> ...you'll note that the translation, strictly evaluated, gives infinite recursion.
Mea culpa -- I completely missed your point that translating from a Turing-complete language to a non-Turing-complete language might have some pitfalls.
Then maybe I'm misunderstanding the point of your reply. I was pointing out how monads are inconvenient to use in strictly evaluated languages. Why did you follow up with an example about how not using monads is convenient?
>> Then maybe I'm misunderstanding the point of your reply.
I was pointing out that your examples don't make sense, as it's trivial to translate the Haskell code into another general-purpose language (such as Scheme, C, Perl, Fortran, etc.), while impossible to translate into an eager (strictly evaluated) language (such as...?).
Additionally, the reason monads are cumbersome in other languages has nothing to do with eager/lazy evaluation, or conditional computation, and everything to do with the fact that monads exist in Haskell to solve its sequencing problem (arising from the fact that it's a declarative language).
Imperative languages have no problem with sequencing, and therefore have no
need for monads.
The bottom line is that monads are not practical without some kind of syntactic sugar like do-notation. Without that, you have to write operations as a chain of nested lambdas, which becomes very ugly very quickly.
This is probably why monads haven't taken off in any language besides Haskell.
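For instance, here's what that chain of nested lambdas looks like with a toy Maybe monad in Python (helper names and data are mine, not any library's):

```python
# Toy Maybe monad (helper names are mine): None is failure, anything
# else is success.

def bind(m, k):
    # Short-circuit on failure, otherwise feed the value to k.
    return None if m is None else k(m)

accounts = {"alice": "acct-1"}
balances = {"acct-1": 42}

# Every intermediate result costs one more level of nesting; Haskell's
# do-notation is what flattens this back out.
result = bind(accounts.get("alice"),
              lambda acct: bind(balances.get(acct),
                                lambda bal: bal + 1))
print(result)  # -> 43
```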
I liked the article, but it was very much "in the weeds".
I hack Scheme, and have never written any Haskell, though you have to know some to read the papers. But when programming in Scheme I find myself wanting monadic operations, like the seed in Oleg's SSAX parsers. But my code that rhymes with monads requires tail recursion, and the author didn't mention tail recursion at all.
I prefer to see monads as fold + tail recursion -- and without the latter, it seems you need the mutation that Common Lisp encourages.
I don't think this is accurate; monads are orthogonal to both folding and tail recursion. Monads are types that represent computation and provide the operations `bind' and `return'. They're much simpler than many imagine at first.
"... and without the latter, it seems you need the mutation that Common Lisp encourages."
I agree that when using looping constructs other than recursion, it is often required that a counter or accumulator be mutated. Common Lisp does not encourage _general_ mutation, however, and the standard functions and macros illustrate that.
In this case, the LOOP is about 10x faster than the reduce, probably because it conses about 5000 times less memory. CL is not set up the same way Haskell is, and hence you want to reuse data structures (or never create them) where possible. This is very imperative and very non-functional. (But I am not saying it's bad, LOOP (or ITERATE, which is what I use "in real life") is very easy to read and write.)
I have a feeling you don't have much CL experience...
There really aren't very many destructive (mutating) CL list functions, and I personally almost never use them except on fresh lists (for example, when I nreverse a list I've just created with successive pushes), and I'm pretty sure that's standard practice.
I took some liberties, but that's about 12/58 = 20% destructive functions (push, pop, and pushnew are usually used on variables, not list structure). And good style dictates using them very sparingly... Also, (loop for x in some-list sum (* 2 x)) is pretty functional; you're just mutating x locally, but who cares -- this is exactly like saying (apply #'+ (mapcar (lambda (x) (* 2 x)) some-list)), which is obviously functional.
Point conceded. The post I replied to says "CL discourages mutation", which is not true. I said "nearly all sequence functions are destructive", which is also not true.
Upon further reflection, I think we can say CL doesn't really care if you mutate or not. There's more than one way to do it :)
But OTOH, if you are careful about mutating when possible, your program will run faster. Not the most important thing in the world, I agree, but it is a "reward" for mutating things.
Are you sure you know what monadic computation is? (I would say, "are you sure you know what monads are", but I don't like to use adjectives as nouns.)
> Are you sure you know what monadic computation is?
I think so, but of course I could be wrong. In my ignorance, I understand monadic computation as a functional state transfer in which a seed is threaded through some computation -- be it for purposes of I/O, or in general fold operations.
Have you seen the SSAX parser? To my mind, the way that it builds its "answer" by threading seeds through the parse tree is monadic. What do you think?
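Concretely, the seed-threading I have in mind looks roughly like this fold over a tree (a hypothetical sketch in Python, not SSAX's actual API):

```python
# Hypothetical sketch (not SSAX's actual API): each handler takes the
# current seed and returns an updated one, and the traversal is just a
# fold over the tree.
from functools import reduce

def walk(tree, handler, seed):
    # Thread the seed through this node, then through each child in turn.
    seed = handler(tree["tag"], seed)
    return reduce(lambda s, child: walk(child, handler, s),
                  tree.get("children", []), seed)

doc = {"tag": "a", "children": [{"tag": "b"}, {"tag": "c"}]}
print(walk(doc, lambda tag, seed: seed + [tag], []))  # -> ['a', 'b', 'c']
```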
It depends on how important you find purity and static typing. They are both useful, but the mistakes that experienced programmers make are not usually related to types. Having recently started writing "real" software with Haskell, I've noticed that the type system doesn't save me a lot of time. I still have to debug stupid errors, like calling a function with the right type signature but with the wrong effect. By the time I've debugged and written tests, I haven't gained a whole lot over Lisp or Perl. The advantage of Haskell is that it's as expressive as either of those, but executes a lot faster. (One concurrent app we ported was very quick to hit 100% CPU in Perl with just a few outstanding operations. In Haskell, it gets to 10% CPU and runs out of file descriptors. Wow!)
As for monadic computation in Haskell, it is very comfortable. You can create an environment where you only have exactly what you need, which makes your program easier to read and reason about. (No "IO a" signature? This function doesn't do IO.) It is not something I can't live without in other languages, but it does solve the intended problem very cleanly. (Similar abstractions like monoids and functors are also nice ways of looking at their respective problems.)
The author of this article is right, though -- in a language with mutable state and without type inference (for return), monads are just not worth the bother. They won't give you anything you don't already have.
>> I still have to debug stupid errors, like calling a function with the right type signature but with the wrong effect
I wonder if a way of writing tests that is significantly easier (taking one or two orders of magnitude less time) would help here?
Say, for example, something like Python's doctests run inside your IDE... that seems to be my standard way of developing anyway: write a piece of code, test it in the REPL, repeat.
One of the problems with using monads in dynamically typed languages is, as this article suggests, the need to overload the 'return' operation by the type of its result, not by the types of its arguments. I once tried to address this issue.
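To illustrate (a hypothetical sketch, not any real library's API): since a dynamically typed language can't pick 'return' from the expected result type, one workaround is to pass the monad's operations in explicitly.

```python
# Hypothetical sketch, not any real library's API: since a dynamically
# typed language can't pick `return` from the expected result type, one
# workaround is to pass the monad's operations in explicitly.

maybe_monad = {
    "return": lambda v: ("just", v),
    "bind": lambda m, k: k(m[1]) if m[0] == "just" else m,
}

list_monad = {
    "return": lambda v: [v],
    "bind": lambda m, k: [y for x in m for y in k(x)],
}

def twice(monad, x):
    # Which `return` runs depends only on the dictionary handed in,
    # not on anything inferable from x itself.
    return monad["bind"](monad["return"](x),
                         lambda v: monad["return"](v * 2))

print(twice(maybe_monad, 5))  # -> ('just', 10)
print(twice(list_monad, 5))   # -> [10]
```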