Hacker News | alilleybrinker's comments

Wired should retract this homophobic article.

It takes an issue of people in power abusing that power, and ties it to their sexuality, as if the men abuse their power because they’re gay, or as if straight men never behave similarly.

Identifying abusive power structures is good, but writing about it in a way that centers the sexuality of the participants has the effect of demonizing a whole group of people unfairly.

I am appalled that Wired published this.


You’re drawing the exact wrong conclusion. Building an old boys network was acknowledged by society as a problem. If I, as a straight white male, started a men’s club, excluded women, and conducted my company’s business there, I would (rightfully) be exposed to a claim of discrimination.

The behavior is the problem. The exclusionary nature of these networks happens to be illegal in many US states, as sexual identity is a protected class. Doing the same nonsense at the Harvard club is equally noxious, but not illegal.

There are a number of very significant, very problematic power brokers wielding authority in tech companies now. The fact that a significant part of the cohort is gay is irrelevant; the fact that they have a clique that is insular and possibly corrupt is. That commonality is no less relevant than the PayPal Mafia.


> If I, as a straight white male, started a men’s club, excluded women, and conducted my company’s business there, I would (rightfully) be exposed to a claim of discrimination.

No you wouldn't; that's not how it works.


Oh come on. I'm a gay guy. This is exactly how gay guys are. Reality is not homophobic.

I’m a straight guy, and this seemed obvious to me.

For what it’s worth, I’ve never felt like I was excluded because of that sort of thing. To be fair, I’m nowhere near the physical fitness/attractiveness standard described here, so I guess it’s possible that’s biased my experience.


In situations like this I appreciate that Rust has a culture of semantic precision [1] and while this kind of API-clarification is painful in the short-term, I think it will be worth it for Linux.

[1]: https://www.alilleybrinker.com/mini/rusts-culture-of-semanti...


For the disjoint field issues raised, it’s not that the borrow checker can’t “reason across functions,” it’s that the field borrows are done through getter functions which themselves borrow the whole struct mutably. This could be avoided by making the fields public so they can be referenced directly, or if the fields need to be passed to other functions, just pass the field references rather than passing the whole struct.

There are open ideas for how to handle “view types” that express that you’re only borrowing specific fields of a struct, including Self, but they’re an ergonomic improvement, not a semantic power improvement.
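A minimal sketch of what I mean (names are illustrative): the getters conflict because each one borrows the whole struct, while direct field access lets the borrow checker see the borrows are disjoint.

```rust
#[allow(dead_code)]
struct Point {
    x: f64,
    y: f64,
}

#[allow(dead_code)]
impl Point {
    fn x_mut(&mut self) -> &mut f64 { &mut self.x }
    fn y_mut(&mut self) -> &mut f64 { &mut self.y }
}

fn main() {
    let mut p = Point { x: 1.0, y: 2.0 };

    // Rejected: each getter mutably borrows *all* of `p`, so the two
    // borrows conflict even though the fields are disjoint.
    // let (x, y) = (p.x_mut(), p.y_mut()); // error[E0499]

    // Accepted: with direct field access the borrow checker can see
    // the two mutable borrows touch different fields.
    let (x, y) = (&mut p.x, &mut p.y);
    *x += 1.0;
    *y += 1.0;
    assert_eq!((p.x, p.y), (2.0, 3.0));
}
```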


> For the disjoint field issues raised, it’s not that the borrow checker can’t “reason across functions,” it’s that the field borrows are done through getter functions which themselves borrow the whole struct mutably

Right, and even more to the point, there's another important property of Rust at play here: a function's signature should be the only thing necessary to typecheck the program; changes in the body of a function should not cause a caller to fail. This is why you can't infer types in function signatures and a variety of other restrictions.


Exactly. We've talked about fixing this, but doing so without breaking this encapsulation would require being able to declare something like (syntax is illustrative only) `&mut [set1] self` and `&mut [set2] self`, where `set1` and `set2` are defined as non-overlapping sets of fields in the definition of the type. (A type with private fields could declare semantic non-overlapping subsets without actually exposing which fields those subsets consist of.)


You could do it in a more limited fashion by allowing fields of a struct to be declared "const", which would have similar semantics to Java's final. If you add the ability to return const references, you get the ability for read-only and mutable references to data within a struct to coexist.

For example, this won't compile:

    struct Something { z: usize }
    struct Foo<'a> { x: usize, y: &'a Something }
    impl<'a> Foo<'a> {
        fn bar(&mut self) -> &Something {
            let something = self.bar();
            self.x += something.z;
            something
        }
    }
But if you could tell the borrow checker the mutable borrow of self can never modify z, then it would be safe. This would achieve that:

    struct Something { z: usize }
    struct Foo<'a> { x: usize, y: &'a const Something }
    impl<'a> Foo<'a> {
        fn bar(&mut self) -> &const Something {
            let something = self.bar();
            self.x += something.z;
            something
        }
    }
I've now had several instances where this would have let me win a battle with the borrow checker succinctly rather than adopting the long workaround I was forced to use. That const struct members would let you implement read-only fields without having to hide them behind getters is icing on the cake.



This seems to be a golden rule of many languages? `return 3` in a function with a signature that says it's going to return a string is going to fail in a lot of places, especially once you exclude bolted-on-after-the-fact type hinting like what Python has.

It's easier to "abuse" in some languages with casts, and of course borrow checking is not common, but it also seems like just "typed function signatures 101".

Are there common exceptions to this out there, where you can call something that says it takes or returns one type but get back or send something entirely different?


Many functional and ML-based languages, such as Haskell, OCaml, F#, etc. allow the signature of a function to be inferred, and so a change in the implementation of a function can change the signature.


In C++, the signature of a function template doesn't necessarily tell you what types you can successfully call it with, nor what the return type is.

Much analysis is delayed until all templates are instantiated, with famously terrible consequences for error messages, compile times, and tools like IDEs and linters.

By contrast, rust's monomorphization achieves many of the same goals, but is less of a headache to use because once the signature is satisfied, codegen isn't allowed to fail.


> In C++, the signature of a function template doesn't necessarily tell you what types you can successfully call it with, nor what the return type is.

That's the whole point of Concepts, though.


Concepts are basically a half solution - they check that a type has some set of properties, but they don't check that the implementation only uses those properties. As a result, even with concepts you can't know what types will work in a template without looking at the implementation as well.

Example [0]:

    #include <concepts>

    template<typename T>
    concept fooable = requires(T t) {
        { t.foo() } -> std::same_as<int>;
    };

    struct only_foo {
        int foo();
    };

    struct foo_and_bar {
        int foo();
        int bar();
    };

    template<fooable T>
    int do_foo_bar(T t) {
        t.bar(); // Compiles despite fooable not specifying the presence of bar()
        return t.foo();
    }

    // Succeeds despite fooable only requiring foo()
    template int do_foo_bar<foo_and_bar>(foo_and_bar t);

    // Fails even though only_foo satisfies fooable
    template int do_foo_bar<only_foo>(only_foo t);
[0]: https://cpp.godbolt.org/z/jh6vMnajj


> they check that a type has some set of properties, but they don't check that the implementation only uses those properties.

I'd say that's a mistake of the person who wrote the template then.

Also, there are Concepts where you absolutely know which types are allowed, e.g. std::same_as, std::integral, std::floating_point, etc.


> I'd say that's a mistake of the person who wrote the template then.

The fact that it's possible to make that mistake is basically the point! If "the whole point of concepts" were to "tell you what types you can successfully call it with" then that kind of mistake should not be possible.

It's true that there are certain cases where you know the full set of types you can use, but I'd argue that those are the less interesting/useful cases, anyways.


My interpretation of the post is that the rule is deeper than that. This is the most important part:

> Here is the most famous implication of this rule: Rust does not infer function signatures. If it did, changing the body of the function would change its signature. While this is convenient in the small, it has massive ramifications.

Many languages violate this. As another commenter mentioned, C++ templates are one example. Rust even violates it a little - lifetime variance is inferred, not explicitly stated.


Lifetimes in a function signature in Rust are never inferred from the function body. Rather, Rust has lifetime elision: straightforward rules that recover the explicit full signature from the elided one.
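For example (function names are illustrative), these two signatures are the same signature; the elision rules recover the second from the first without ever looking at the function body:

```rust
// Elided: the single input lifetime is assigned to the output.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

// Fully explicit equivalent of the signature above.
fn first_word_explicit<'a>(s: &'a str) -> &'a str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    assert_eq!(first_word("hello world"), "hello");
    assert_eq!(first_word_explicit("hello world"), "hello");
}
```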


I was speaking about variance specifically. They are not inferred from function bodies, but I think it's fair to say that it's a soft violation of the golden rule because variance has a lot of spooky action at a distance (changing a struct definition can change its variance requirements, which then has ripple effects over all type signatures that mention that struct)


> Are there common exceptions to this out there, where you can call something that says it takes or returns one type but get back or send something entirely different?

I would personally consider null in Java to be an exception to this.


There are languages with full inference that break this rule.

Moreover, this rule is more important for Rust than other languages because Rust makes a lot of constraints visible in function signatures.

But the most important purpose of the rule is communicating that this is a deliberate design decision and a desirable property of code. Unfortunately, there's an overwhelming lack of taste and knowledge when it comes to language design, often coming from the more academic types. The prevailing tasteless idea is that "more is better" and therefore "more type inference is better", so surely full type inference is just better than the "limited" inference Rust does! Bleh.


It's super easy to demonstrate your point with the first example the article gives as well; instead of separate methods, nothing prevents defining a method `fn x_y_mut(&mut self) -> (&mut f64, &mut f64)` to return both and use that in place of separate methods, and everything works! This obviously doesn't scale super well, but it's also not all that common to need to structure this way in the first place.
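A quick sketch of that approach (struct and method names are illustrative): one method returns both references, so the signature itself tells the caller the borrows coexist.

```rust
struct Point {
    x: f64,
    y: f64,
}

impl Point {
    // Returning both mutable references from one call keeps the fields
    // encapsulated while still letting the caller use them together.
    fn x_y_mut(&mut self) -> (&mut f64, &mut f64) {
        (&mut self.x, &mut self.y)
    }
}

fn main() {
    let mut p = Point { x: 1.0, y: 2.0 };
    let (x, y) = p.x_y_mut();
    std::mem::swap(x, y); // both borrows usable at once
    assert_eq!((p.x, p.y), (2.0, 1.0));
}
```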


There's also the Common Weakness Enumeration (CWE), a long-running taxonomy of software weaknesses (meaning types of bugs).

https://cwe.mitre.org/


The project maintainers had to both:

1) decide to use the highly risky `pull_request_target` Actions trigger instead of the much safer `pull_request` trigger, and

2) include in their Actions a script, executing in an environment with write access to the repo and access to repository secrets, which executes untrusted input (the branch name).


The repository maintainers are running actions for PRs with the `pull_request_target` trigger, which gives full access to target repository secrets with write permissions. It's very explicitly documented as dangerous to do this. To mitigate the risk, `pull_request_target` actions run on the state of the target branch, not the source branch, but in this case because the target branch has this script which executes code influenced by an untrusted data source (the branch name), you get this vulnerability.
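A hypothetical sketch of the vulnerable pattern (workflow contents are illustrative; `pull_request_target` and `github.head_ref` are real Actions features):

```yaml
on: pull_request_target   # runs with write access and repository secrets

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Interpolating the attacker-controlled branch name directly into a
      # shell command allows command injection: a branch named something
      # like `x";malicious-command;"` escapes the quoting.
      - run: echo "Building ${{ github.head_ref }}"
```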


Alternatively, Rust's cell types are proof that you usually don't need mutable aliasing, and you can have it at hand when you need it while reaping the benefits of stronger static guarantees without it most of the time.
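A minimal sketch of what I mean (names are illustrative): `Cell` opts a single field into shared mutation, while everything around it keeps the usual aliasing guarantees.

```rust
use std::cell::Cell;

struct Counter {
    hits: Cell<u64>, // interior mutability, opted into per-field
}

impl Counter {
    // Takes &self, not &mut self: any number of aliases may call this.
    fn record(&self) {
        self.hits.set(self.hits.get() + 1);
    }
}

fn main() {
    let c = Counter { hits: Cell::new(0) };
    let alias = &c; // shared aliasing is fine
    c.record();
    alias.record();
    assert_eq!(c.hits.get(), 2);
}
```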


Not everything will be rewritten in Rust. I've broken down the arguments for why this is, and why it's a good thing, elsewhere [1].

Google's recent analysis of their own experiences transitioning toward memory safety provides even more evidence that you don't need to fully transition to get strong safety benefits. They incentivized moving new code to memory safe languages, and continued working to actively assure the existing memory unsafe code they had. In practice, they found that vulnerability density in a stable codebase decays exponentially as you continue to fix bugs. So you can reap the benefits of built-in memory safety for new code while driving down latent memory unsafety in existing code to great effect. [2]

[1]: https://www.alilleybrinker.com/blog/cpp-must-become-safer/

[2]: https://security.googleblog.com/2024/09/eliminating-memory-s...


Nah. The idea that sustained bugfixing could occur on a project that was not undergoing active development is purely wishful thinking, as is the idea that a project could continue to provide useful functionality without vulnerabilities becoming newly exposed. And the idea of a meaningfully safer C++ is something that has been tried and failed for 20+ years.

Eventually everything will be rewritten in Rust or successors thereof. It's the only approach that works, and the only approach that can work, and as the cost of bugs continues to increase, continuing to use memory-unsafe code will cease to be a viable option.


> The idea that sustained bugfixing could occur on a project that was not undergoing active development is purely wishful thinking

yet the idea that a project no longer actively developed will be rewritten in rust is not?


> yet the idea that a project no longer actively developed will be rewritten in rust is not?

Rewriting it in Rust while continuing to actively develop the project is a lot more plausible than keeping it in C++ and being able to "maintain a stable codebase" but somehow still fix bugs.

(Keeping it in C++ and continuing active development is plausible, but means the project will continue to have major vulnerabilities)


I'm not convinced. Rust is nice, but every time I think I should write this new code in Rust I discover it needs to interoperate with some C++ code. How do I work with std::vector<std::string> in Rust? It isn't impossible, but it isn't easy (and often requires copying data from C++ types to Rust types and back). How do I call a C++ virtual function from Rust?

The above issue is why my code is nearly all C++ - C++ was the best choice we had 15 years ago and mixing languages is hard unless you limit yourself to C (an unreasonably limited interface, IMO). D is the only language I'm aware of that has a good C++ interoperability story (I haven't worked with D so I don't know how it works in practice). Rust is really interesting, but it is hard to go from finishing a "hello world" tutorial in Rust to putting Rust in a multi-million line C++ program.


Rust/C++ interop is in fact complex and not obviously worthwhile - some of the underlying mechanisms (like the whole deal with "pinned" objects in Rust) are very much being worked on. It's easier to just keep the shared interface to plain C.
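For concreteness, a plain-C interface from the Rust side might look like this (function name is illustrative); C++ can declare it with `extern "C"` and call it directly:

```rust
// Export a C-ABI function; no Rust-specific types cross the boundary.
#[no_mangle]
pub extern "C" fn sum_bytes(ptr: *const u8, len: usize) -> u64 {
    // SAFETY: the caller must pass a valid pointer/length pair.
    let bytes = unsafe { std::slice::from_raw_parts(ptr, len) };
    bytes.iter().map(|&b| u64::from(b)).sum()
}

fn main() {
    // Exercising it from Rust, standing in for the C++ caller.
    let data = [1u8, 2, 3];
    assert_eq!(sum_bytes(data.as_ptr(), data.len()), 6);
}
```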


Read: I should keep writing C++ code in my project instead of trying to add Rust for new code/features.

I'm not happy with my situation, but I need a good way out. Plain C interfaces are terrible; C++, for all its warts, is much better (std::string has a length, so no need for strlen all over).


The idea is to keep it in C++ and do new development in a hypothetical Safe C++. That would ideally be significantly simpler than interfacing with Rust or rewriting.

There is of course the "small" matter that Safe C++ doesn't exist yet, but Google's analysis showing that requiring only new code to be safe is good enough is a strong reason for developing a Safe C++.


Safe C++ does exist today: it’s implemented in Circle. You can try it out on godbolt right now.


Thanks! I have been putting off playing with rust lifetimes. I guess now I have no excuses.


> Nah.

I know it's intended just to express disagreement, but this comes across as extremely dismissive (to me, anyway).


> Not everything will be rewritten in Rust.

Yeah, but it's also not going to be rewritten in safe C++.


Why not? C++ has evolved over the years, and on every C++ project I have worked on, we've adopted new features that make the language safer or clearer as they are supported by the compilers we target. It doesn't get applied to the entire codebase overnight, but all new code uses these features, refactors adopt them as much as possible, and classes of bugs found by static code scanning cause them to be adopted sprinkled through the rest of the code. Our C++ software is more stable than it has ever been because of it.

Meanwhile, throwing everything away and rewriting it from scratch in another language has never been an option for any of those projects. Furthermore, even when there has been interest and buy-in to incrementally move to Rust in principle, in practice most of the time we evaluate using Rust for new features, the amount of existing code it must touch and the difficulty integrating Rust and C++ meant that we usually ended up using C++ instead.

If features of Circle C++ were standardized, or at least stabilized with wider support, we would certainly start adopting them as well.


What I'm really hoping is that https://github.com/google/crubit eventually gets good enough to facilitate incremental migration of brownfield C++ codebases to Rust. That seems like it would address this concern.


You might consider experimenting with the scpptool-enforced safe subset of C++ (my project). It should be even less disruptive.

[1] https://github.com/duneroadrunner/scpptool


There’s likely some amount of code which would not be rewritten into Rust but which would be rewritten into safe C++. Migrating to a whole new language is a much bigger lift than updating the compiler you’re already using and then modifying code to use things the newer compiler supports. Projects do the latter all the time.


The point is that it doesn't need to. According to google, making sure that new code is safe is good enough.


In theory it could be auto-converted to a safe subset of C++ [1]. In theory it could be done at build-time, like the sanitizers.

[1] https://github.com/duneroadrunner/SaferCPlusPlus-AutoTransla...


The article makes the particularly good point that you generally can’t effectively add new inferences without constraining optionality in code somehow. Put another way, you can’t draw new conclusions without new available assumptions.

In Sean’s “Safe C++” proposal, he extends C++ to enable new code to embed new assumptions, then subsets that extension to permit drawing new conclusions for safety by eliminating code that would violate the path to those safety conclusions.


It ought to be easier to get a blank slate of a small device with some compute power and a screen, like the Kindle here, without having to jailbreak something.


There is one, it's called the Raspberry Pi ecosystem, but due to the small volume and the target audience largely being not-my-own-money (think educational institutions), the price is quite detached from the production cost.


I think more of the issue might be the eink screens. As far as I can tell, there just aren't 5+ inch eink screens for cheap.


Yes, there are: https://www.waveshare.com/product/displays/e-paper.htm?___SI...

A 4.37-inch e-paper display in 3 colors is $24; the problem is you need to program it yourself (they have code samples in Python, for Raspberry Pi), and you need a Raspberry Pi, case, cables, etc.

Also, these cheap e-paper displays are, of course, of lower quality (slower, lower resolution) than a Kindle display.


They jump up in price pretty quickly as size goes up. The cheapest 5+ inch display I found at your link was over $40, and it's about 100 PPI. It's not prohibitive, but it's certainly priced high compared to "just jailbreak a Kindle" for any remotely Kindle-comparable display, right? (remotely comparable in size and resolution)


I imagine the Kindle is sold as a loss leader, plus whatever economies of scale/negotiating Amazon does pushes the price down heavily vs buying a single unit from an electronics retailer


They look kind of cool, and now I'm trying to come up with a project such that I can justify buying one.


Arduino-compatible devices based on ESP32 are plenty powerful enough at a fraction of the cost.


CPU is indeed beefy for a small wi-fi chip, but the small RAM hurts. Yeah, there's QSPI PSRAM but the bandwidth is lacking as well.

