> I would also be interested in hearing you expound upon those things that are "very important for 2010s performance".
I'm not particularly an expert in this area, but my understanding from articles/mailing-lists/etc. was that there were inadequacies in the previous machine-description and IR frameworks that led to the revamping with Gimple and MD-RTL over the past few years. Some of that was just maintainability, but I thought there were also optimization-related issues with the old internals. Is that not the case?
It's hard to say without reference to specifics; certainly the MD files have been around since time immemorial, even if they've been augmented with new features over the years. And the GIMPLE bits were primarily aimed at making the in-memory representation more efficient, a goal which they accomplished.
In any event, as you say, things have been revamped, so whatever issues there were with the old internals no longer apply, right? So there's no point in dragging out old chestnuts like "GCC can't support modern architectures or modern optimization techniques", because that's not true anymore, right?
(I apologize if this comes across as harsh; I'm just tired of seeing LLVM articles where commenters appear to be drinking the LLVM Kool-Aid without having any idea what the LLVM folks are grousing about. Sometimes the LLVM folks have a point, sometimes they're just asserting their engineering decisions are superior, which is debatable, and sometimes they're just grousing because they don't seem to like GCC. It's hard to say exactly what's in view from commenters and from the LLVM folks themselves.
GCC currently supports 8-bit microcontrollers, 32/64-bit desktop chips, a few "nonstandard" VLIW and DSP architectures, and lots of other chips in between. I, for one, am impressed with how much Clang and LLVM have done, but I'll also be more impressed if, in a decade and a half, LLVM's architecture hasn't acquired some warts and it seriously supports more than two architectures. After all, GCC was, in many ways, state-of-the-art when it first came out too...)
A child will grow old, but that does not change the fact that it is young _now_.
LLVM _is_ the new kid on the block, and gcc _is_ old. GCC is also wise, but I think, given a) that we've learned a lot about compiler writing since gcc first appeared and b) the support that LLVM has both in business and in academia, the writing is on the wall for gcc as the favorite compiler, first for X86 and ARM, later for other architectures.
Of course that may change; gcc can evolve. However, I doubt that will happen fast enough. The FSF does not like allowing proprietary compiler plugins and (I guess) has too little manpower to work on gcc.
> the FSF does not like allowing proprietary compiler plugins and (I guess) has too little manpower to work on gcc.
Fortunately, the FSF is not the entity driving development of GCC. In fact, I can't remember a commit in the last five years (there have been ~60k commits, so there'd be plenty to choose from) made by an FSF employee. So there are plenty of people and companies focused on moving GCC forward.
Ah yeah, that's fair. I'd also be impressed if LLVM doesn't have warts in a decade or two, which is sort of what I was trying to get at with the end of my comment: that to the extent that LLVM has a cleaner architecture representation (if it does), it's mostly a function of it being a newer "clean 1.0", and not having yet had to adapt as architectures and optimization techniques change over time.