> Are there technical reasons that Rust took off and D didn't?
As someone who considered it back then when it actually stood a chance to become the next big thing, from what I remember, the whole ecosystem was just too confusing and simply didn't look stable and reliable enough to build upon long-term. A few examples:
* The compiler situation: The official compiler was not yet FOSS, and other compilers were either unavailable or not really usable. The switch to FOSS happened way too late, and GCC support took too long to mature.
* This whole D version 1 vs version 2 thingy
* This whole Phobos vs Tango standard library thingy
* This whole GC vs no-GC thingy
This is not a judgement on D itself or its governance. I always thought it was a very nice language; the project simply lacked the manpower and commercial backing to overcome the magical barrier of wide adoption. There was some excitement when Facebook picked it up, but unfortunately, it seems it didn't really stick.
I think people forget this. I know a lot of folks that looked at D back when it needed to win mindshare to compete with the currently en vogue alternatives, and every one of them nope'd out on the licensing. By the time they FOSS'ed it, they'd all made decisions for the alternative, and here we are.
FOSS: DMD was always open source, but the backend license was not FOSS-compatible until about 2017. D is now officially part of GCC (since GCC 9, I believe), and even the D frontend in GCC is written in D (and actively maintained).
D1 vs. D2: D2 introduced immutability and a vastly superior metaprogramming system, but it had incompatibilities with D1. Companies like Sociomantic that had standardized on D1 were left with a hard problem to solve.
Tango vs. Phobos: This was a case of an alternative standard library with an alternative runtime. Programs could not mix Tango-based and Phobos-based libraries. This is what prompted druntime: Tango's runtime split out and made compatible, then adopted by D2. Unfortunately, Tango took a long time to port to D2, and the maintainers went elsewhere.
GC vs. nogc: The language sometimes inserts calls to the GC without any obvious invocation of it (e.g. allocating a closure or setting the length of an array). You can mark a function with the @nogc attribute, and it will ban all uses of the GC, even compiler-generated ones. This severely limits the runtime features you can use, so it makes the language a lot more difficult to work with. But some people insist on it because it avoids GC pauses in code that can't tolerate them. There are those who think the whole standard library should be nogc to maximize utility, but we are not going in that direction.
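A minimal sketch of the hidden-allocation point above (function names are illustrative, not from any real codebase):

```d
// Growing a dynamic array allocates via the GC, even though
// there is no explicit `new` anywhere in sight.
void grow(ref int[] arr)
{
    arr.length = arr.length + 1; // hidden GC allocation
}

// Marking the same operation @nogc turns that hidden allocation
// into a compile-time error instead of a potential GC pause:
@nogc void growNoGc(ref int[] arr)
{
    // The next line would not compile under @nogc:
    // arr.length = arr.length + 1;
    // Error: setting `length` in a `@nogc` function may cause
    // a GC allocation
}
```

The second function is deliberately left with the offending line commented out, since the whole point is that the compiler rejects it.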