LLVM 7.0.0 released (llvm.org)
228 points by samber on Sept 19, 2018 | hide | past | favorite | 84 comments



Under "External Open Source Projects Using LLVM 7":

> Zig is an open-source programming language designed for robustness, optimality, and clarity. Zig is an alternative to C, providing high level features such as generics, compile time function execution, partial evaluation, and LLVM-based coroutines, while exposing low level LLVM IR features such as aliases and intrinsics. Zig uses Clang to provide automatic import of .h symbols - even inline functions and macros. Zig uses LLD combined with lazily building compiler-rt to provide out-of-the-box cross-compiling for all supported targets.

Nice shout out :)


Zig needs a Wikipedia page!

It seems like a competitor to Go or Rust? I've been looking into these languages a lot lately, but Zig looks interesting... I wish the website were better at showing off examples and the big features of the language, though.


I'm a C coder and a big fan of Rust, and while I've heard of Zig before, I still don't really get what it's going for. As far as I can tell it's Rust without the borrow checker and the mental overhead that goes with it. Maybe it's because I'm wearing my Rust glasses, but some design choices seem a bit strange to me, for instance regarding Zig's unions (equivalent to Rust's enums or C's tagged unions):

https://ziglang.org/documentation/master/#union

    const Payload = union {
        Int: i64,
        Float: f64,
        Bool: bool,
    };
[...]

    var payload = Payload {.Int = 1234};
    // payload.Float = 12.34; // ERROR! field not active
    assert(payload.Int == 1234);
    // You can activate another field by assigning the entire union.
    payload = Payload {.Float = 12.34};
    assert(payload.Float == 12.34);
[...]

    // Unions can be given an enum tag type:
    const ComplexTypeTag = enum { Ok, NotOk }; 
    const ComplexType = union(ComplexTypeTag) {
        Ok: u8,
        NotOk: void,
    };
[...]

    const c = ComplexType { .Ok = 0 };
    assert(ComplexTypeTag(c) == ComplexTypeTag.Ok);
That's weird ergonomics IMO and quite a lot of typing. As far as I can tell there's no type inference? Also, I'm not sure why const is used everywhere, even for things like type declarations - does Zig allow messing with types at runtime? That seems like a pretty high-level language feature, so it's quite interesting if it does.

Anyway, from what I see it's basically somewhat halfway between C and Rust, with some of the syntactic sugar of Rust but not the borrow checker and everything that goes along with it.

That being said the language is still very young and at this stage Rust was a bit of a mess as well, what with the numerous pointer types, green threads and the like... Maybe as they progress toward 1.0 they'll be able to expose a clearer picture of what they're doing.


I'd be interested in seeing a code comparison of equivalent Rust and Zig code with regards to tagged unions, to demonstrate your claim. I could help you with the Zig code if you provide the Rust code.

Also to answer your questions:

> As far as I can tell there's no type inference?

You can see an example of type inference here: https://ziglang.org/documentation/master/#Hello-World

> does Zig allow messing with types at runtime?

Yes. https://ziglang.org/documentation/master/#Generic-Data-Struc...

Edit: my mistake, I misread or didn't see the "at runtime" part. It only allows messing with types at compile-time.


>I'd be interested in seeing a code comparison of equivalent Rust and Zig code with regards to tagged unions, to demonstrate your claim. I could help you with the Zig code if you provide the Rust code.

Well in Rust you don't have to define the enum and the union separately, actually in Rust unions and enums are effectively the same thing (outside of C FFI that is).

So in Rust you have something like:

    enum ComplexType {
        Ok(u8),
        NotOk,
    }
Then to unpack it you always have to destructure it; you can't just do my_complex_type.Ok() or something like that. You can create accessor methods if that makes sense for your enum, though: for instance, for this ComplexType you could implement an "unwrap" function that returns the u8 in case of success and panics in case of an error. In general you tend to avoid those unless they really make sense; as a rule of thumb it's better to "match" the type (which seems similar to Zig's switch statement with enums).
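To make that concrete, here's a minimal sketch (the accessor name "unwrap" is my choice, mirroring Option::unwrap):

```rust
// The enum from above, plus a hand-written accessor in the
// style of Option::unwrap.
enum ComplexType {
    Ok(u8),
    NotOk,
}

impl ComplexType {
    // Panics on the error case, like Option::unwrap does.
    fn unwrap(&self) -> u8 {
        match self {
            ComplexType::Ok(v) => *v,
            ComplexType::NotOk => panic!("called unwrap on NotOk"),
        }
    }
}

fn main() {
    // The idiomatic way to unpack: destructure with `match`.
    for c in [ComplexType::Ok(42), ComplexType::NotOk] {
        match c {
            ComplexType::Ok(v) => println!("ok: {}", v),
            ComplexType::NotOk => println!("not ok"),
        }
    }
    assert_eq!(ComplexType::Ok(7).unwrap(), 7);
}
```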

Beyond that, I dug a bit deeper into the docs and I don't quite understand how generics are supposed to work exactly; at first they look like what I'm used to in other languages, but then I get to the generic printf example and I see this code:

    pub fn printValue(self: *OutStream, value: var) !void {
        const T = @typeOf(value);
        if (@isInteger(T)) {
            return self.printInt(T, value);
        } else if (@isFloat(T)) {
            return self.printFloat(T, value);
        } else {
            @compileError("Unable to print type '" ++ @typeName(T) ++ "'");
        }
    }
Is it just for the sake of the example, or does it exhibit a limitation of the generics system? What would happen if I wanted to be able to print a custom type, for instance?

Also, if generics exist, why are things like "@clz" or "@intToFloat" compiler builtins? Surely they could be implemented as part of the stdlib using a generic method. Although I see that "@memset" and "@memcpy" are also builtins and I can't quite figure out why, so maybe I'm missing something here.
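For comparison, Rust exposes its count-leading-zeros operation as a plain library method, which the compiler lowers to the llvm.ctlz intrinsic underneath - a sketch of what "stdlib instead of builtin" looks like:

```rust
fn main() {
    // u32::leading_zeros is an ordinary method in Rust's standard
    // library, but the compiler still lowers it to the llvm.ctlz
    // intrinsic, so nothing is lost versus a language builtin.
    assert_eq!(0u32.leading_zeros(), 32);
    assert_eq!(1u32.leading_zeros(), 31);
    assert_eq!(u32::MAX.leading_zeros(), 0);
    println!("ok");
}
```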


In Zig:

    const ComplexType = union(enum) {
        Ok: u8,
        NotOk,
    };
It's recommended to use destructuring syntax to unpack it, but you have the option to use fields directly which invokes runtime safety checks.

This matches the syntax for normal unions, which do not have tags but still have runtime safety checks. Rust does not have this feature - unions must always have tags - whereas in Zig the state that determines which tag is active in a union could be somewhere else.

> What would happen if I wanted to be able to print a custom type for instance?

    const std = @import("std");

    pub fn main() void {
        const ComplexType = union(enum) {
            Ok: u8,
            NotOk,
        };
        var ct = ComplexType { .Ok = 123 };
        std.debug.warn("{}\n", ct);
        ct = ComplexType.NotOk;
        std.debug.warn("{}\n", ct);
    }
Outputs:

    ComplexType{ .Ok = 123 }
    ComplexType{ .NotOk = void }
This is all implemented in userland in the standard library. Last time I checked, in Rust the equivalent code is implemented with a macro hard coded in the compiler.
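For reference, the Rust counterpart of the example above leans on the compiler-integrated derive (a minimal sketch):

```rust
// #[derive(Debug)] asks the compiler's built-in derive machinery
// to generate the formatting code used by "{:?}".
#[derive(Debug)]
enum ComplexType {
    Ok(u8),
    NotOk,
}

fn main() {
    let ct = ComplexType::Ok(123);
    println!("{:?}", ct);                 // Ok(123)
    println!("{:?}", ComplexType::NotOk); // NotOk
}
```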

> why are things like "@clz" or "@intToFloat" compiler builtins, surely they could be implemented as part of the stdlib using a generic method?

They could be implemented in userland, but then they would possibly not lower to the LLVM primitives that they represent. In the case of the casting functions, it's more to do with semantics of the language, making it hard to accidentally shoot yourself in the foot. Shooting yourself in the foot on purpose, however, is always allowed.


>Rust does not have this feature - unions must always have tags

Rust has had untagged unions since last year (https://blog.rust-lang.org/2017/07/20/Rust-1.19.html)
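A minimal sketch of such an untagged union in Rust, mirroring the Payload example upthread; reading a field is unsafe because no tag is stored to check against:

```rust
// Untagged union: no discriminant is stored.
union Payload {
    int: i64,
    float: f64,
}

fn main() {
    let p = Payload { int: 1234 };
    // Reading a union field is unsafe: the programmer, not the
    // compiler, must track which field is currently active.
    unsafe {
        assert_eq!(p.int, 1234);
    }
    let q = Payload { float: 12.34 };
    unsafe {
        assert_eq!(q.float, 12.34);
    }
    println!("ok");
}
```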


>This is all implemented in userland in the standard library. Last time I checked, in Rust the equivalent code is implemented with a macro hard coded in the compiler.

You're right, and it's still the case as far as I know, but I was thinking more about having customized output: in Rust you can implement the Display trait and completely customize the way the object is serialized for output.
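For example (a sketch, reusing the ComplexType enum from upthread; the output strings are my choice):

```rust
use std::fmt;

enum ComplexType {
    Ok(u8),
    NotOk,
}

// Implementing Display gives full control over what "{}" prints.
impl fmt::Display for ComplexType {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            ComplexType::Ok(v) => write!(f, "ok({})", v),
            ComplexType::NotOk => write!(f, "not ok"),
        }
    }
}

fn main() {
    println!("{}", ComplexType::Ok(123)); // prints "ok(123)"
    println!("{}", ComplexType::NotOk);   // prints "not ok"
}
```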

Your sibling comment by AnIdiotOnTheNet shows how to do just that, but is it done purely in the stdlib or is it actually implementable in the language itself? If it is actually doable in pure Zig, I'd recommend showcasing that as part of the "generics" docs, because as it is it's a bit odd to have example code that ends up exhaustively matching possible types; it kind of defeats the purpose of generics and is the cause of my confusion.

Thank you for your answers by the way, they're appreciated. I'm very happy to see all this experimentation around low level "light runtime" languages lately, it's definitely sorely needed. I hope that 20 years from now C won't still be the alpha and omega of low level code.


The standard lib has no special privileges, it's written entirely using the same language features available to everyone.

There are probably 2 reasons the documentation uses a simplified example:

1) the ability to introspect a type for the presence of a field or definition before attempting to use it is relatively new

2) the actual implementation of fmt.format is rather long and involved


> The standard lib has no special privileges, it's written entirely using the same language features available to everyone.

Sort of -- it uses a lot of unstable features that are only available to outsiders with the nightly compiler.


Your parent is talking about Zig, not Rust, I believe.


Ah, ok. I suppose that's still a useful thing to say about Rust std though. :)


In addition to the default formatting in the standard library as mentioned by AndyKelley, adding a function called "format" to your type will override the standard behavior and allow arbitrary custom formatting. See this example from the test case in std.fmt:

    const Vec2 = struct {
        const SelfType = @This();
        x: f32,
        y: f32,

        pub fn format(
            self: SelfType,
            comptime fmt: []const u8,
            context: var,
            comptime Errors: type,
            output: fn (@typeOf(context), []const u8) Errors!void,
        ) Errors!void {
            switch (fmt.len) {
                0 => return std.fmt.format(context, Errors, output, "({.3},{.3})", self.x, self.y),
                1 => switch (fmt[0]) {
                    // point format
                    'p' => return std.fmt.format(context, Errors, output, "({.3},{.3})", self.x, self.y),
                    // dimension format
                    'd' => return std.fmt.format(context, Errors, output, "{.3}x{.3}", self.x, self.y),
                    else => unreachable,
                },
                else => unreachable,
            }
        }
    };

    var value = Vec2 {
        .x = 10.2,
        .y = 2.22,
    };

    try testFmt("point: (10.200,2.220)\n", "point: {}\n", &value);
    try testFmt("dim: 10.200x2.220\n", "dim: {d}\n", &value);


There's no need for uncertainty about what Zig is going for. Rust code blows up when the heap runs out regardless of how carefully you write it unless you want to forego the standard library and whatever depends on it. Zig gives you control over memory allocation and hence at least the potential to write correct code.


> There's no need for uncertainty about what Zig is going for. Rust code blows up when the heap runs out regardless of how carefully you write it unless you want to forego the standard library and whatever depends on it. Zig gives you control over memory allocation and hence at least the potential to write correct code.

I hope that's not it. Talk about throwing out the baby with the bathwater.

It's true that Rust's standard library panics on ENOMEM, but not only is this not inherent to the language, it's possible to extend the standard library, and folks are doing so. See https://github.com/rust-lang/rust/issues/48043 for support of "fallible allocations".
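That work has since landed as APIs like Vec::try_reserve (stabilized in Rust 1.57), which reports allocation failure as a Result instead of aborting. A minimal sketch:

```rust
fn main() {
    let mut v: Vec<u8> = Vec::new();
    // try_reserve returns Err(TryReserveError) instead of aborting
    // the process when the allocation cannot be satisfied.
    match v.try_reserve(1024) {
        Ok(()) => println!("reserved {} bytes", v.capacity()),
        Err(e) => println!("allocation failed: {:?}", e),
    }
    // A request that overflows capacity demonstrates the failure
    // path without actually exhausting memory.
    assert!(v.try_reserve(usize::MAX).is_err());
}
```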


I agree that Rust has a lot of good ideas in it. The notion of ownership isn't far off my own way of thinking about pointers when writing code with pointers in it, and I welcome any help from the compiler in avoiding my stupid mistakes, but if worse comes to worst, at least my mistakes are fixable. In the old days it would have been considered bad practice to postpone considerations of storage limitations to some future release, but maybe I'm just showing my age.


The decision was made with pragmatism in mind. What do most programs in Rust want to do? Most of them are fine with panic on an error. Making your code robust against this, at least in Rust, makes things a lot more cumbersome, and most programs would panic upon such an error anyway. Most programs that care about this possibility aren't using the standard library anyway. Therefore, for the standard library, the decision was made to panic on OOM, and to not block the release of Rust 1.0 on something that, at the time, wasn't itself a stable feature.

Three years later, I'm comfortable with this decision; three more years of waiting on Rust 1.0 would have killed it as a language. Many, many useful programs have been written in Rust since then.

Of course, you can disagree with this decision; I mostly want to communicate that it was not a flippant "who needs to care about that," but a careful decision made after weighing the tradeoffs. In the old days, those tradeoffs were very different.


It seems like a weird choice to me. Zig's standard library functions that require allocation take the allocator as a parameter, and return oom errors back up to the caller to do with as they please. Usually, the caller will also just kick that error up until it reaches the top of the call stack and gets caught by the default panic handler, but at any point they can choose to handle it however they want. Handling the error by kicking it up the call stack is a single keyword at the callsite.

As a bonus, this means that when you're reading code you can easily spot the function calls that have a possibility of failing.


Yeah, I specifically added the "in Rust" here because I know Zig really cares about this problem. I haven't actually used contemporary Zig enough to be able to say anything about its approach, though.

> Zig's standard library functions that require allocation take the allocator as a parameter, and return oom errors back up to the caller to do with as they please.

You could do something like this in Rust. Here's a function that accepts a vector and pushes something onto it:

  fn foo(v: &mut Vec<i32>, item: i32) {
      v.push(item);
  }
(Keeping it non-generic to simplify.)

Does the data structure hold the allocator, or the function? What if I created a list with one allocator, but called push with another? It'd have to be in the structure, right? Given that, here's what this feature would look like, roughly, if Vec<T> supported it:

  fn foo<A: Allocator>(v: &mut Vec<i32, A>, item: i32) -> Result<(), Box<Error>> {
      v.push(item)?;
      Ok(())
  }
This is significantly more boilerplate. The type signature is more than twice as long! You'd also have to adjust the caller:

  // before
  foo(&mut v, 5);

  // after
  foo(&mut v, 5)?;

Of course, it'd be possible to maybe reduce this with language features. What does this look like in Zig?


Here's an excerpt from std.ArrayList[1]:

    pub fn ArrayList(comptime T: type) type {
        return struct {
            const Self = @This();

            raw_items: []T,
            len: usize,
            allocator: *Allocator,

            // ...

            pub fn append(self: *Self, item: T) !void {
                const new_item_ptr = try self.addOne();
                new_item_ptr.* = item;
            }
            
            pub fn addOne(self: *Self) !*T {
                const new_length = self.len + 1;
                try self.ensureCapacity(new_length);
                return self.addOneAssumeCapacity();
            }
            
            // ...
        };
    }


    test "std.ArrayList.basic" {
        var bytes: [1024]u8 = undefined;
        const allocator = &std.heap.FixedBufferAllocator.init(bytes[0..]).allocator;

        var list = ArrayList(i32).init(allocator);

        try list.append(1);
        try list.append(2);
        try list.append(3);

        assert(list.len == 3);
    }

In summary, functions that can fail have an inferred error set, or an explicit one[2]. At the call site of a function that can fail there will be either `if`, `while`, `catch`, or `try`, in order to deal with the error case.

[1]: https://github.com/ziglang/zig/blob/e3d8cae35a5d194386eacd9a...

[2]: https://ziglang.org/documentation/master/#Error-Set-Type


Neat! So yeah, reducing Result to ! and inferring errors makes this way less boiler-plate-y. One of Rust's core design decisions is that we should never do type inference for function definitions, so we can't do this.

And yeah, if you hold a pointer to the allocator instead of parameterizing by it, that helps too. With that in mind,

  fn append(&mut self, item: i32) -> Result<(), Box<Error>> {
      self.push(item)?;
      Ok(())
  }
In today's Rust, which is sort of what already happens; a given instance can't change the allocator, but refers to the global one. I don't have a great handle on the tradeoffs of keeping a pointer vs. parameterization.

Anyway, yeah, if we allowed for inferring the error type, that would make this almost the same as Zig here. Stealing your syntax:

  fn append(&mut self, item: i32) -> !() {
      self.push(item)?;
  }
A four character diff from current Rust. Very cool! Thanks for the elaboration. That's quite nice.


What do these Zig functions do on systems with overcommit? (That is, Linux, macOS, *BSDs, Android, iOS, ...)


Same thing as everyone else: play Russian roulette with the OOM-killer and hope that whatever was important to the user didn't get destroyed. Can't really do anything about the OS lying to you.


How do Zig programmers using Linux or MacOS validate that their OOM handling is correct?


There's an allocator in the standard library that is specifically designed to fail.

https://github.com/ziglang/zig/blob/master/std/debug/failing...


> In the old days it would have been considered bad practice to postpone considerations of storage limitations to some future release, but maybe I'm just showing my age.

Ever since Unix had copy-on-write data pages, it has had effectively the same behavior around OOM. The semantics of fork() require it, unless you're OK with the idea of a process that uses more than half of the available memory being unable to launch any child processes. So the idea of killing processes on OOM is very old indeed.


Windows doesn't overcommit. On Linux, you can turn it off. And on Linux, we have vfork.


Because Windows (roughly speaking) doesn't use the fork/exec model for process spawning, it doesn't need overcommit. If you use fork/exec, then overcommit is the least bad of all the bad options.

Anyway, whether overcommit is a good idea is somewhat tangential. The point is that portable programs cannot rely on being able to detect and recover from OOM in the same process. That's why you see so few programs try. (The programs that do try frequently suffer from security vulnerabilities in their OOM handling paths, ironically enough.)


> If you use fork/exec, then overcommit is the least bad of all the bad options.

vfork is better than fork/exec with overcommit.

It's interesting to me that you mention portability as an argument for why an out of memory situation should cause a crash rather than be handled like any other error.

Let's say I make a library and I want this library to be maximally portable. That means it should run in some of the following environments:

* Microsoft Windows

* embedded software

* operating system kernels

* drivers

* real-time software

In all of these environments, a library that handles out of memory correctly would be more portable than a library that panics.

If only there was a programming language that helped programmers solve this problem, while preventing the pitfalls that result in these security vulnerabilities that you mention...


The Rust team took the approach of targeting application code on the three primary Tier-1 platforms, macOS, Windows, and GNU/Linux, as the goal for API design and portability of the standard library.

Besides the standard library, it also has the core library, which is a small subset that is appropriate for all environments, even those without allocation or those with different strategies for allocation.

With this goal in mind, many of the decisions make sense. A portable application can't rely on getting errors on out of memory situations. Many people don't write code that handles out of memory, or if they do they don't test it, which can lead to significant bugs. Error handling for out of memory is cumbersome if it is required for every operation which could allocate memory, such as pushing to a Vec.

Additionally, Rust 1.0 didn't have some of the later features to make error propagation simpler, so functions which could return errors were more cumbersome to use.

And finally, fallible allocation can be added after the fact. It is being worked on: https://github.com/rust-lang/rust/issues/48043

For now, anyone who wants to use Rust in those situations, but still wants allocation support, needs to implement their own allocator and collections with fallible operations. Once the allocator API and fallible collections support lands, much more of the Rust standard library will be usable in such an environment.


I agree that if you're in kernel space, then you have to be able to recover from allocation failure. However, it's a waste of time for most programs (or libraries—most libraries I write can't go into kernel space for various reasons) to try to recover from OOM. Even if not memory-unsafe, it results in untested error handling paths and noisy code.

I rarely write Windows-only programs, so I can't rely on OOM handling in my logic. It's much simpler to just have the same OOM story everywhere (crash).


Most Zig code still handles OOM via panic. You'd have to go out of your way to somehow make it less safe than that. Dealing with an error by passing it up the call stack (eventually to the panic handler) is actually easier than ignoring it:

  //kick any error up the call stack
  var buffer = try allocator.alloc(u8, 500);

  //in debug builds, assert that this will not fail
  //in release-fast failure will result in undefined
  //behavior.
  var buffer = allocator.alloc(u8, 500) catch unreachable;
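For comparison, a rough Rust sketch of the same two patterns; alloc_buf here is a hypothetical fallible allocation function standing in for Zig's `allocator.alloc`, not a real API:

```rust
// Hypothetical fallible allocation returning Result, to mirror
// Zig's `allocator.alloc(u8, 500)`.
fn alloc_buf(n: usize) -> Result<Vec<u8>, &'static str> {
    Ok(vec![0u8; n])
}

// Kick any error up the call stack, like Zig's `try`:
fn make() -> Result<Vec<u8>, &'static str> {
    let buffer = alloc_buf(500)?;
    Ok(buffer)
}

fn main() {
    // Assert that failure is impossible, like `catch unreachable`
    // (though Rust panics in every build mode, rather than invoking
    // undefined behavior in release builds).
    let buffer = alloc_buf(500).unwrap();
    assert_eq!(buffer.len(), 500);
    assert_eq!(make().unwrap().len(), 500);
}
```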


> On Linux, you can turn it off.

The system administrator can turn it off, kind of. But a zig or rust program running in user space without privileges cannot turn it off, not even for the program itself.


> In the old days it would have been considered bad practice to postpone considerations of storage limitations to some future release, but maybe I'm just showing my age.

Do you run on Linux? Do you leave vm.overcommit_memory at the default 0? Do you run on 64-bit platforms? Most people here I imagine would say yes, yes, and yes. If so, ENOMEM is not a thing that happens. Some process on the machine will get OOM-killed instead. It's silly to write a bunch of dead code.


> In the old days it would have been considered bad practice to postpone considerations of storage limitations to some future release, but maybe I'm just showing my age.

Ignoring my other point (that ENOMEM doesn't happen on the most common deployment environment), what do you think the correct ENOMEM behavior should be?

In many situations, I think it's to crash (what Rust does anyway). For example, at work I write cloud servers, which are replicated. If there's, say, a memory leak that is causing it to have not quite enough RAM, I'd rather it go down and restart cleanly than try to limp on. The other servers will take the load in the meantime. (They might all crash in waves, which is sad, but better that than all being unable to serve well and lingering in that state. If you don't crash, then you need some external agent that decides when to restart them, and you end up with a pretty similar outcome with more complexity and rarely exercised logic.)

There are certainly specialized environments where you want to do something else, but it sounds like you're saying it's thoughtless/wrong for the program to crash, and in general I disagree.


I never view Go as a competitor to C, and Rust is more of a competitor to C++.

Zig reads to me as something actually trying to displace C. Or C with no corner cases and some modern features.


Can you explain what constitutes a competitor to C? C++ and Rust (possibly Zig, I’ve barely looked at it) are both in the same category of low overhead languages that can operate in the same restricted environments as C. I don’t understand the distinction being drawn here.


Zig maintains much more of the smaller scope and minimal feel of C compared to C++ and Rust.

Sure, C is more convenient than C++ in a few domains, but I'd suspect that plenty of C fans who "hate C++" will find more to like in Zig. It's about the features it doesn't have.


Ah, so more talking about the competition to win over the mindshare of people who enjoy C, as opposed to technical competition and merits of the languages.

I might personally describe that differently, as I felt like the GP was implying that C++ and Rust couldn't be used in the same space as C, which isn't quite true.

One challenge C++ has always had is that the language overhead, while easier in some contexts, didn't bring with it significant safety guarantees. As more people are introduced to safer, low overhead languages, I anticipate that to change.


> Zig reads to me as something actually trying to displace C

Shout out for D's BetterC mode, another really enjoyable-to-work-with system in that space!

https://dlang.org/spec/betterc.html


Go is definitely a competitor to C, but only in places where you can tolerate a garbage collector. Zig seems to be a competitor to C in places where you can't.


I would say Go is more a competitor to Java.


Not until they up their game in modern language features.

Until then, they are pretty much a competitor to what would otherwise be a userspace C application.

Assuming having a GC enabled language isn't an issue to meet application SLAs.


I'd say Go is a competitor to Java in terms of business case, not language features. In a similar way it competes with Python. You occasionally hear (heard? It's been a while) of Python projects getting rewritten in Go.


Besides the language features, Go is no match for the wealth of graphical tooling, IDE support, deployment targets, and libraries of the Java world.

Naturally people port code from Python to Go.

It is what happens when one uses a scripting language for a full-blown application and then discovers how slow it is.

Then Go gets chosen because Google is cool and Oracle is bad.

Meanwhile corporations keep using Java, .NET, JavaScript and C++ as their main workhorses.

Go's selling points are Docker and Kubernetes, and not everyone is actually using them.


Just because a language has a low feature count doesn't make it a competitor to C.


It does given the authors and the use cases Google has been applying it to.

TCP/IP stack and file system utilities in Fuchsia, Kubernetes, Android OpenGL/Vulkan debugger, UEFI replacement, DNS server


I disagree, Go ignores errors by default. No sane C dev would accept that as an "upgrade".


What do you mean? Error checking for C is very inconsistent (sentinel values and errno), so surely that's worse?


I think they mean that C coders have wrestled so much with the language's terrible error-handling mechanics (or lack thereof) that it's probably one of the first things they'd want to fix when "upgrading" languages. I would tend to agree.


Upgrading error handling is great, but Go has the worst approach to doing it.


Worse than checked exceptions (Java)? How about unchecked exceptions?


In what way does C not ignore errors?


>I never view Go as a competitor to C

I think it is in the sense that you (usually) should never write network-facing C code.


I guarantee that almost any device you ever use in your day to day life has network-facing C code, if it has a network connection at all. What language do you think router/switch firmware is written in?


If you prefer: "there are very few cases in which it's legitimate to write network-facing C code".


It needs one, but I won't argue with the editor-morons there, who doubt its usefulness on any serious project. I can still remember how often Nimrod was deleted, and then Nim, and others I won't mention, in their cleanup destruction rampages. We are the experts, not the Wikipedia editors, but they are treating it as their personal baby, and you should not interfere.


I'd be interested to hear experience reports from people who tried to implement the same thing in C/Rust/Zig, like this one: https://github.com/isaachier/gbemu


Does it do tail call optimization?


It does, the same as it does in clang with -O3. There's an open issue for adding guaranteed tail call semantics: https://github.com/ziglang/zig/issues/694

It's an important issue for the project because one of the things it is tackling is the problem of unbounded stack growth. Zig wants to get to a state where the upper bound on stack size is known at compile time.


That looks to me like an NP-complete problem.


Not if the set of answers includes "couldn't determine an upper bound".

Many (but not all) problems become a whole lot easier if you restrict them to sane inputs. Determining an upper bound on stack size is especially interesting for, e.g., embedded software, and the intersection of "sane embedded program" and "program with complex stack size behavior" is rather small.


Specifically if features like function pointers are avoided, and tail-call-optimized recursive functions over data structures are used. Hell, for embedded uses, if it can determine it just for some functions and not others (like, say, anything involving dispatch), it would still be very useful.

From writing:

    TaskCreate(&some_func, 2048 /* Hope this is enough stack */)
To:

    TaskCreate(&some_func, sizeof(some_func))
would be amazing.


Original Fortran compilers were able to tell exactly how much stack any program was using. The answer was zero, as all local variables were allocated statically and recursive calls were not supported.

But one does not need to go to such extremes, as many useful code patterns can be proven to be stack-bounded, and for the rest of the code the compiler may require runtime checks or annotations like Rust's unsafe.


Is there anything of particular interest to Rust? I know Rust is built on LLVM (for both good and ill, but mostly good).


Rust updates LLVM more often than LLVM makes stable releases, so it has been using a version somewhere between LLVM 6 and LLVM 7 for a while already.

I think one of the main areas that improved in LLVM 7 relative to LLVM 6 that Rust cares about is better support for WASM in LLD.


We also had to turn off noalias for &mut T for a while, due to bugs, and that was fixed fairly recently, and we turned it back on. I think that was post LLVM 6, but I can't quite remember.


Looks like it was as of LLVM 6: https://github.com/rust-lang/rust/pull/50744


Rust 1.30 (now in beta) has already moved to a newer snapshot of trunk, LLVM 8ish.


According to the "Update to LLVM 7" ticket for Rust (https://github.com/rust-lang/rust/issues/50543), many of the points of interest are in LLD, especially in WASM support:

    LLD has implemented --gc-sections for wasm
    LLD has implemented DWARF debugging information sections for wasm
    LLD has natively implemented concatenation of custom sections for wasm
    LLD has gained the ability to place the stack first in a wasm module's memory layout, reducing the corruption that happens on a stack overflow
    LLD option to not merge data segments
    Some SIMD generic float intrinsics have been fixed
    LLD optimizing further the size of release binaries
    LLD supports @-file syntax
Other things which mention LLVM 7 in the Rust bug tracker:

https://github.com/rust-lang/rust/issues/52457 "use funnel shift intrinsic to implement rotates"

https://github.com/rust-lang/rust/issues/52323 "emscripten already supports the wasm backend in LLVM7, so we should move the current emscripten-llvm from version 4.0 to that, so that we can remove all the workarounds lying around for LLVM < 5.0 (LLVM 5 is the new minimum supported version)."

https://github.com/rust-lang/rust/issues/49740 "Use separate memcpy source/dest alignments in LLVM 7"

https://github.com/rust-lang/rust/issues/51872 "Simplify u128->f32 casts thanks to LLVM r334777"

edit to add: rustc stable 1.29.0 already uses LLVM 7, and nightly is already using LLVM 8; rustc frequently switches to the new LLVM release when it is branched rather than waiting until it is released:

  $ rustc +stable --version --verbose
  rustc 1.29.0 (aa3ca1994 2018-09-11)
  binary: rustc
  commit-hash: aa3ca1994904f2e056679fce1f185db8c7ed2703
  commit-date: 2018-09-11
  host: x86_64-apple-darwin
  release: 1.29.0
  LLVM version: 7.0
  $ rustc +nightly --version --verbose
  rustc 1.30.0-nightly (79fcc58b2 2018-09-18)
  binary: rustc
  commit-hash: 79fcc58b24d85743d025fd880fca55748662ed3e
  commit-date: 2018-09-18
  host: x86_64-apple-darwin
  release: 1.30.0-nightly
  LLVM version: 8.0


Too bad RISC-V didn't manage to be included yet. Even GCC already supports it. A rare case where LLVM is behind GCC on architecture support.


Rust has been able to target RISC-V for half a year, so LLVM must have been able to target it for even longer...


IIRC it has been considered experimental since version 6. Many people were hoping it would be upgraded to fully supported in version 7.


rare? GCC supports far more architectures than LLVM


Is there a reason that the libraries have been renamed from 7.0 to 7?


LLVM adopted a new versioning scheme which made the minor version redundant. IIRC someone suggested removing it and nobody objected.

http://blog.llvm.org/2016/12/llvms-new-versioning-scheme.htm...


Does this page explain the parent's question? It specifically mentions keeping the minor version more than once.


Building Mesa, I indeed had to change my script from "6.0" to "7".


Does anyone know where the re-licensing effort stands?



What are the main new features and why are they important? E.g. what's "function multiversioning"?



Read the release notes and you'll find out. You'll also find out function multi-versioning isn't even mentioned!



