Author of LFortran here. The historical answer is that LFortran and Flang both started in the same year, possibly the very same month (November 2017), and for a while we didn't know about each other. After that, each team looked at the other compiler, didn't think it could do what they wanted, and continued down its own path. We have tried to collaborate on several fronts, but it's hard in practice because the compiler internals are different.
I can only talk about my own motivation to continue developing and delivering LFortran. Flang is great, but on its own I do not think it will be enough to fix Fortran. What I want as a user is a compiler that is fast to compile itself (under 30s for LFortran on my Apple M4, and even that is at least 10x too long for me; fixing that would require switching from C++ to C, which we might do later), that is very easy to contribute to, that can compile Fortran codes as fast as possible (LLVM is unfortunately the bottleneck here, so we are also developing a custom backend, not based on LLVM, that is 10x faster), that has good runtime performance (LLVM is great here), that can be interactive (runs in Jupyter notebooks), that creates lean (small) binaries, that fully runs in the browser (both the compiler and the generated code), that has various extensions that users have been asking for, etc. The list is long.
Finally, I have not seen Fortran users complaining that there is more than one compiler. On the contrary, everybody seems very excited that they will soon have several independent high-quality open source compilers. I think it is essential for a healthy language ecosystem to have many good compilers.
I appreciate the use case you're describing, but I feel like "that has good runtime performance (LLVM is great here)" is at odds with the rest of that list. I don't see that as a problem though. I think it's useful to have a very fast lightweight compiler for development and prototyping (both of projects and the compiler itself) and a slower one for highly optimized production releases.
Yes, that's one answer. But I actually think you can absolutely have both in the same compiler; you just need two backends, as I described. You use the custom backend for development (fast to compile) and LLVM for release (fast to run).
Well yes, but at that point aren't you effectively writing two compilers in one? It seems to me that would be at odds with wanting it to be fast to compile itself (you said under 30s) and would make it more difficult for the uninitiated to contribute to.
I don't mean to discourage and I don't disagree with the aims. I just have the (possibly mistaken) impression that compilers inevitably fall on a continuum from simple and fast to complex and slow.
No, you "only" have to write a backend. Which is still a lot of work, but much less work than another compiler. There are multiple ways it can be designed, but one idea that I like the most right now is to create an API for LLVM, and then implement the same API ourselves with a fast backend. That way we reuse everything in current LFortran, including lowering to LLVM, we just have to refactor it to use our own API instead.
For extra fun, "flang" actually refers to three different projects, one of which also completely rebuilt its frontend, so there are four separate "flang" frontends to LLVM. And I know of at least one proprietary LLVM-based Fortran compiler that's not based on any of the flangs, and I suspect there are a few more lurking about.
The short answer as to why there are so many different LLVM Fortran frontends is that the open source community took a long time to find someone willing to commit to a Fortran frontend for LLVM, and even when that happened, the initial results weren't particularly usable--it's not until, I want to say, late 2020 or early 2021 that there's an in-LLVM-tree flang project, and the exe isn't renamed from flang-new to flang until 2024.
I started writing the new "f18" Fortran front-end at NVIDIA in the summer of '17, and it was added to LLVM as "flang" in June 2020. There were already at least three other "flang"s at the time, one of which was our open-sourced llvm-targeting pgifortran compiler.
I still call this latest one "flang-new" myself just because that's the only unambiguous name for it. The name confusion is not my fault, I promise.
There is Classic Flang, there is New Flang (part of LLVM tree), there is LFortran, there is Intel's ifx (also based on LLVM) and Nvidia's nvfortran (also based on LLVM, I think). And maybe even more.
The Fortran ecosystem is actually more prolific, at least in terms of toolchains (including proprietary ones), than those of most more popular languages.
Digression, but people sometimes forget that there is a whole world outside of Python or JS, and that GitHub stars or Show HN posts do not easily translate to real-world usage.
Today, there are at least 9 production-level surviving Fortran compilers (GNU Fortran, IFX, nagfor, nvfortran, XLF, Cray/HPE's ftn, Fujitsu's frt, old Flang-based Arm/AMD, and flang-new). This situation has advantages and disadvantages for our users. Their Venn diagram of equivalently implemented features is very much not a circle, and portability across compilers is really tough. The ISO standard is hardly clear and doesn't have a test suite or reference implementation, so it's been a very challenging task to make flang-new as easy to port existing codes to as possible.
They use the SPARK subset of Ada to develop the most critical parts of their DriveOS. This contributed to their success of getting DriveOS certified at the highest automotive safety standard, ASIL-D.
> This contributed to their success of getting DriveOS certified at the highest automotive safety standard, ASIL-D.
ASIL is just a risk classification scheme from A to D, with D being the highest risk of initial hazard.
TÜV SÜD certified that DriveOS is ISO 26262 compliant and that it can be used for safety-critical applications up to the highest risk context of ASIL-D (think activating the brakes in an AEB system, or deploying airbags).
I believe TIOBE counts by search activity for a given token, i.e., a large search volume for the token "Ada" would show up in TIOBE whether it is for NVIDIA's line of graphics cards or for the programming language.
i gave up on the problem about 18 months ago so i didn't keep up with the research area. is this yours? the runtimes are of course very good but i don't see a comparison of how good the approximation is vs telamalloc (or just ILP). i'll say this though: it's miraculous that the impl is so small.
> Right now I'm looking into integrating it with IREE
clever guy. IREE is just about the only serious/available runtime where you can do this because IREE codegens the runtime calls as well as the kernel code. But you're gonna have to either patch an existing HAL or write a new one to accomplish what you want to accomplish. If you want I can help you - if you go to the discord (https://discord.gg/J68usspH) and ask about this in #offtopic I'll DM you and can point you to the right places.
And they usually look and work like Borg tentacles assimilating production hardware, despite the "emulator" nomenclature; apparently that's a tradition from the very early days of microprocessors, when ICEs actually were alternate implementations of the chips they emulated.
It's interesting; the first time I wrote C was after learning programming through Java. My "C" code was all new_<type>(..) .. I couldn't help but think in Java syntax.
Consider that there are about 300 million native Bengali speakers in the world (mostly in Bangladesh, and in West Bengal, which is a state in India).
It's not how many people speak it. It's how many people want to learn it as a second language. How many people would be in a position to want to learn Bengali if they don't already live there?
The choice of Common Lisp is less flattering than one might hope because it was largely the result of familiarity (Carl de Marcken's doctoral thesis was in computational linguistics and he was most comfortable using Lisp). When I asked him (c. 2007) whether he would have chosen Common Lisp again, he said that he wouldn't have and that he would have chosen Java instead. I don't recall any mention of technical reasons during that exchange (maybe static analysis?), but I do vaguely recall hiring considerations.
(Also, while much of the "business logic" was written in Lisp, a good chunk of low-level stuff was written in C++.)