
Thanks for checking Julia out. I fear there is a fair amount of misinformation in your comment, however, so it seems that a few things were lost in your reading of Julia's code (some of which is admittedly quite tricky). I hope you don't mind if I address some of it.

The first point is that Julia already has excellent performance on a par with most compiled languages, including, e.g., Haskell, whether they are using LLVM or not. Straightforward Julia code is typically within a factor of two of C. That's shown in the microbenchmarks on Julia's web site [http://julialang.org/], but, of course, that's not entirely convincing because, you know, they're microbenchmarks and we wrote them. However, similar performance is consistently found in real-world applications by other people. You don't have to take my word for it – here's what Tim Holy [http://holylab.wustl.edu] has to say: https://groups.google.com/d/msg/julia-users/eQTYBxTnVEs/LDAv.... Iain Dunning and Miles Lubin also found it to be well within a factor of 2 of highly optimized C++ code when implementing realistic linear programming codes in pure Julia: https://github.com/JuliaLang/julia-tutorial/blob/master/Nume.... The benchmarks appear on page 7 of their presentation.

This statement about Julia's high-level optimizations is entirely wrong:

> i've looked at how it compiles to llvm, and it basically punts any optimization to the llvm side, aside from the most basic tracing jit method monomorphization/specialization.

Julia does no tracing at all, so it's definitely not a tracing JIT. A relatively small (but growing) and very crucial amount of high-level optimization is performed on the Julia AST before generating LLVM code. In particular, a dynamic dataflow-based type inference pass is done on the Julia AST. Since Julia is homoiconic, this type inference pass can be implemented in Julia itself, which may be why you missed it: https://github.com/JuliaLang/julia/blob/master/base/inferenc.... Don't be fooled by the brevity of the code – Jeff's dynamic type inference algorithm is one of the most sophisticated to be found anywhere; see http://arxiv.org/pdf/1209.5145v1.pdf for a more extensive explanation. It's also highly effective: 60% of the time it determines the exact type of an expression, and most of the expressions that cannot be concretely typed are not performance critical [see section 5.2 of the same paper]. You are correct that we leave machine code generation to LLVM – after all, that's what it's for – but without all that type information, there's no way we could coax LLVM into generating good machine code. Other very important optimization passes done on the Julia AST include aggressive method inlining and elimination of tuple allocations.
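To make this concrete, here's a minimal sketch using Julia's standard reflection API (not the inference code itself) to see what the compiler inferred for a concrete call signature:

```julia
# A toy function with no type annotations at all.
f(x) = x + 1

# code_typed returns the lowered code after type inference, paired with
# the inferred return type. For a machine-Int argument, inference
# concludes the result is also a machine Int.
inferred = first(code_typed(f, (Int,)))
println(inferred.second)   # prints the inferred return type
```

The same call with `(Float64,)` would infer `Float64` – the specialization, and the inference, happen per concrete argument-type signature.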

> more succintly, Julia lacks a clear enough thoughtful choice in static/dynamic semantics for the pre LLVM side to have an easy optimization story given a small sized core team, such optimization engineering will either take a long time to develop, or will require some commercial entity to sink serious capital into writing a good JIT / optimizer.

There is a very clear and thoughtful choice in static vs. dynamic semantics in Julia: all semantics are dynamic; there are no static semantics at all. If you think about your code executing fully dynamically, that is exactly how it will behave. Of course, to get good performance, the system figures out when your code is actually quite static, but you never have to think about the distinction. And again, Julia already has excellent performance, and we have accomplished that with an admittedly tiny and relatively poorly funded team. (All the money in the world won't buy you another Jeff Bezanson.)
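A small sketch of what that feels like in practice: the code below is written fully dynamically, with no annotations, yet each call gets its own specialized, compiled method behind the scenes:

```julia
# No type annotations; the semantics are fully dynamic.
double(x) = 2x

a = double(3)     # compiles and runs a specialization for Int
b = double(3.5)   # compiles and runs a separate one for Float64
```

The programmer never has to think about that distinction – the results are just what the dynamic semantics dictate.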

> also: Julia doesn't have a type system, it has a dynamic check system. (Show me the code for static type check in the julia code base as it exists on github and I'll owe you a beer :) )

The academic programming language community has gradually narrowed its notion of what a type is over the past decades to the point where a type system can only be something used for static type checking. Meanwhile, the real world has gone full throttle in the other direction: fully dynamic languages have become hugely popular. So yes, if you're a programming language theorist, you may want to insist that Julia has a "tag system" rather than a "type system" and other type theorists will nod their heads in agreement. However, the rest of the world calls the classes of representations for values in dynamic languages like Python "types" and understands that a system for talking about those types – checked or not – qualifies as a type system. So, while you are correct that Julia doesn't do any static type checking, it is still understood to have what most people call a "type system".
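For concreteness, a minimal sketch using only standard Julia: these "types" are first-class run-time values that form a subtype hierarchy and drive dispatch, even though nothing is checked before the program runs:

```julia
t = typeof(1.0)            # Float64 – a type, as a first-class value
hier = t <: AbstractFloat  # types form a declared subtype hierarchy

# The type system is leveraged for dispatch, not for static checking:
describe(x::Integer) = "an integer"
describe(x::AbstractFloat) = "a float"
```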

[There's actually an important point of programming language philosophy here: one of the premises of Julia is that static type checking isn't actually the main benefit that's brought to the table by a type system. Rather, we leverage it for greater expressiveness and performance, leaving type checking on the table – for now. This emphasis doesn't mean that we can't add some type checking later – since we can infer exact types 60% of the time, we can check that those situations don't lead to errors. We can also provide feedback to the programmer about places where they could improve the "staticness" of their code and get better performance or better "checkability". This lets the programmer use a dynamic style for prototyping and gradually make their program more and more static as it needs to be faster and/or more reliable.]
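A sketch of that workflow (the function names here are made up for illustration): start with an untyped prototype, then tighten it with argument and return-type annotations when you want the extra "staticness":

```julia
# Prototype: works on anything that supports max and zero.
relu(x) = max(x, zero(x))

# Hardened version: declared argument and return types. Note that the
# `::Float64` on the result is a run-time assertion/conversion, not a
# static check – consistent with Julia's fully dynamic semantics.
relu_strict(x::Float64)::Float64 = max(x, 0.0)
```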

> Let me repeat: Julia doesn't have clear static semantics / phase distinction, and doesn't really seem to do anything beyond method specialization before passing code on to LLVM. This means it can't get compelling performance compared with any other LLVM backed language.

I'll repeat myself a bit too. Julia's static semantics are perfectly clear – there are none. The run-time does quite a bit of analysis and optimization after method specialization (aggressive run-time method specialization is incredibly important, however, so one shouldn't discount it). And, of course, Julia already has compelling performance compared with other languages, both static and dynamic, in benchmarks and real-world applications.
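To put "aggressive run-time method specialization" in concrete terms, here's a hedged sketch: a single generic function whose method is chosen by the run-time types of all its arguments, with each method compiled to specialized native code on first use:

```julia
# One generic function, multiple methods selected by dispatch.
combine(a::Number, b::Number) = a + b
combine(a::String, b::String) = a * b   # `*` is string concatenation

x = combine(1, 2)          # dispatches to the Number method
s = combine("ab", "cd")    # dispatches to the String method
```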



Extremely helpful – this really clarifies the differentiation in the Julia approach and I'm excited your team is taking this direction. There are a lot of people cheering Julia on, even if we're wimping out by remaining on the sidelines.

It would help if this explanation was a bit more prominent.


You make interesting points. I'll have to read the links and think about it.


I'm also in New York if you want to chat about it over a beer some time. I doubt I can convert a hardcore Haskell programmer on my "scruffy" unchecked, dynamic point of view, but we might have a good conversation about it. We've occasionally quipped that Julia drags Matlab programmers halfway to Haskell, so maybe there's some common ground.


Sure, that'd be fun!

I think we've had 1-2 interactions where we've not quite gotten along, but I might have just been misinterpreting (or mixing up Julia devs).




