
5-10 years ago the Intel C compiler produced significantly faster code than gcc (and clang was even worse back then), so there was a bigger reason to use it back then.


That was the story 10 years ago as well, yet even back then I never managed to find an open-source program where the Intel compiler produced faster code than gcc.

gcc has produced faster code for at least 15 years. In fact, it is the Intel compiler that has caught up in its most recent versions.


I got faster (10-20%) results with icc on an abstract game minimax AI bot back then (i.e. something similar to a chess engine). Even more so when taking advantage of PGO. Over time GCC caught up.

By nature, this code had no usage of floating point in its critical path.

I haven't bothered with icc in years though.


For what sort of application? I ran benchmarks of my own scientific code for doing particle-particle calculations and with -march=native I could get 2.5x better performance with Intel vs GCC.

One thing I found you do have to be careful with, though, is ensuring that Intel uses IEEE floating-point semantics, because by default it's less accurate than GCC. This sometimes causes issues in Eigen: we ran into one recently after a compiler upgrade where the results suddenly changed, and it turned out someone had forgotten to set 'fp-model' to 'strict'.


If Intel is using floating-point math shortcuts, you can replicate them with -Ofast when using gcc.

It goes without saying that you should otherwise use -O3 (or -O2 in some rare cases). I mention it just in case, because 2.5x slower sounds so exotic to me that my first intuition is that important optimization flags were omitted on the GCC side. GCC was faster than Intel on everything I tried in the past.


Once upon a time, Oracle used Intel C Compiler (ICC) to compile Oracle RDBMS on some platforms [1].

I don't know if Oracle is still using ICC for that or not. (If you download Oracle RDBMS, and check the binaries, you will be able to work it out. I can't be bothered.)

[1] https://www.businesswire.com/news/home/20030507005238/en/Ora...


How can you tell from a binary what compiler was used to produce it?


There can be various traces left in strings, the symbol table, etc.

Many compilers statically link implementations of various built-in functions into the resulting executable, and that can result in different symbol table entries


...and that despite icc not being anywhere near as aggressive at exploiting UB as gcc or clang, which suggests that backend optimisations like instruction selection, scheduling, and register allocation are far more valuable (and predictable).


I don't think anyone disputes that? Most optimizing compiler literature doesn't even mention language semantics, the gains there are very much last-ditch rather than necessary.

I can't even find benchmarks of ICC vs a current GCC but they were pretty even the best part of a decade ago. GCC is a mess compared to LLVM but it's quick.


I'd be curious to know what compiler Unity and Unreal Engine are using.


I've never used Unity, but Unreal Engine is heavily tied into the Visual Studio (proper, not Code) workflow, including the Microsoft C++ compiler toolchain and all 30GB+ of its friends.

I'd suspect the same from Unity.


Both engines support platforms where Visual Studio is not available, right?


Unreal uses the native compiler for the target platform. On Windows this is MSVC. Modern consoles are all clang forks. Linux is the only exception, where I think they depend on clang rather than gcc.


Nitpick: maybe Xbox One is built with MSVC?



