Bullets are a bad example because they have multiple properties which make them much harder to localise than many other sounds.

I'm pretty sure most people can localise a vehicle emitting broadband noise (engine noise or a white-noise reversing alarm) in the conditions that matter.


> They have to be loud enough to be heard through hearing protection.

It's kind of a nit-pick, but this is not really true.

Very approximately, you will perceive a sound if it is above your threshold of hearing, and also not masked by other sounds.

If you're wearing the best ear defenders, which attenuate all sounds by about 30 dB, and you assume your threshold of hearing is 10 dB SPL (conservative), any sound above 40 dB SPL is above the threshold of hearing. That's the level of a quiet conversation.

And because your ear defenders attenuate all sounds, masking is not really affected -- the sounds which would be masking the reversing beepers are also quieter.
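A rough worked example of that arithmetic (the 30 dB and 10 dB figures are from above; the beeper and background levels are illustrative assumptions, not measurements):

    attenuation = 30   # dB, flat attenuation from ear defenders
    threshold = 10     # dB SPL, conservative threshold of hearing

    beeper = 97        # dB SPL at the listener (assumed)
    background = 85    # dB SPL of other site noise (assumed)

    # anything above threshold + attenuation = 40 dB SPL is still audible
    print(beeper - attenuation > threshold)      # True (67 dB SPL)
    print(background - attenuation > threshold)  # True (55 dB SPL)

    # the signal-to-noise ratio is unchanged, so masking is not affected
    print(beeper - background)                                   # 12 dB
    print((beeper - attenuation) - (background - attenuation))   # still 12 dB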

There are nuances of course (hearing damage, and all the complicated effects that wearing ear defenders cause), but none of them are to the point that loud reversing noises are required because of hearing protection -- they are required to be heard over all the other loud noises on a construction site.

> The utility of having a backup beeper or any noise making device on that site is thus zero.

The inverse square law says otherwise; on site the distances will be much more apparent.
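To put rough numbers on that (free-field approximation; the reference level is just an assumption for illustration):

    import math

    def spl_at(distance_m, ref_spl=100.0, ref_distance_m=1.0):
        # free-field inverse square law: level drops by 20*log10(d/d_ref) dB,
        # i.e. about 6 dB per doubling of distance
        return ref_spl - 20 * math.log10(distance_m / ref_distance_m)

    # assuming (illustratively) 100 dB SPL at 1 m:
    print(round(spl_at(2)))    # 94
    print(round(spl_at(10)))   # 80
    print(round(spl_at(50)))   # 66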


Looking around a bit, it's used as an example in the documentation:

https://github.com/haiku/haiku/blob/7d07c4bc739dbf90159a5c02...

This is actually a great reason to keep it around; it's as simple as possible, and nothing uses it so it's easy to find the relevant bits of code.


Yeah, it's certainly doable, just a bit tricky because the spoil-board is not attached to the base, and is replaced nearly every time. It also needs at least one extra tool set-up.

If I needed a lot of double-sided boards it would be worth optimising this, but I don't really; single-sided (or with the second side being 100% ground) is generally sufficient.


The simple contextlib.contextmanager example doesn't really sell the benefits.

The main one is that it makes error handling and clean-up simpler, because you can just wrap the yield in a normal try/except/finally, whereas to do this with __enter__ and __exit__ you have to work out what to do with the exception information in __exit__, which is easy to get wrong:

https://docs.python.org/3/reference/datamodel.html#object.__...

Suppressing or passing on the exception is also mysterious (it's controlled by the return value of __exit__), whereas with contextlib you just handle or re-raise it as normal.
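For reference, this is the protocol being worked around (a bare-bones sketch; the class name is made up):

    class Managed:
        def __enter__(self):
            # acquire whatever resource is being managed
            return self

        def __exit__(self, exc_type, exc_value, traceback):
            # release the resource; exc_* are None if no exception was raised.
            # returning a true value suppresses the exception, returning a
            # false value (or nothing) lets it propagate -- easy to get backwards
            return False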

Another is that it makes managing state more obvious. If data is passed into the context manager, and needs to be saved between __enter__ and __exit__, that ends up in instance variables, whereas with contextlib you just use function parameters and local variables.

Finally, it makes it much easier to use other context managers, which also makes it look more like normal code.

Here's a more real-world-like example in both styles:

https://gist.github.com/tomjnixon/e84c9254ab6d00542a22b7d799...

I think the first is much more obvious.

You can describe it in English as "open a file, then try to write a header, run the code inside the with statement, write a footer, and if anything fails truncate the file and pass the exception to the caller". This maps exactly to the lines in the contextlib version, whereas this logic is all spread out in the other one.
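As a rough sketch of that shape (names made up; see the gist for the real comparison):

    from contextlib import contextmanager

    @contextmanager
    def framed_file(path):
        # open a file, write a header, run the body of the with statement,
        # write a footer; if anything fails, truncate and re-raise
        with open(path, "w") as f:
            try:
                f.write("header\n")
                yield f
                f.write("footer\n")
            except BaseException:
                f.seek(0)
                f.truncate()
                raise

    # usage:
    with framed_file("out.txt") as f:
        f.write("body\n")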

It's also more correct, as the file will be closed if any operation on it fails -- you'd need to add two more try/except/finally blocks to the second example to make it as good.


I get why people like them, but they make way less sense when you work out the capacity of an equivalent weight (not to mention cost) of lithium cells.

It's easy to get to about 90Wh, which will run a dynamo-style light for 30 hours on max (most dynamos seem to be rated at 3W).

There are definitely cases where it makes sense, and not having to keep batteries charged is nice, it's just easy to miss how good batteries are these days.


Not having to take the light off the bike to charge it and then forget to put it back on the bike, not to mention forgetting to charge it and finding out when it's dark, makes a dynamo completely worth it.


I live in Tokyo, and only ride in the city center. 90% of the reason I have a light is the legal requirement to have one; I virtually never need it, since the streets I usually ride are well lit. The remaining 10% is that I like the solid feel of the bike overall and would feel sad if the integrated light didn't work, TBH.


When you are riding the bike in the city, the light is not for you to see things, it is for others to see and notice you.


Can't emphasise that enough. Especially if you're into black clothing and have a black bike.

"This showed that for cars DRL reduces the number of daytime injury crashes by 3-12%. The effect on fatal crashes can be estimated as somewhat greater (-15%)."

https://swov.nl/system/files/publication-downloads/fs_drl_ar...

This is about cars/motorcycles and daytime, but it certainly applies to any moving vehicle at any time...

When driving, I love those bicyclists that have a blinking rear light btw. Can't overlook them.


In my experience, engaging a dynamo is worth about one gear shift on a 7-speed cassette. I accept the tradeoff of having to pocket the light.


Not with a hub-type one. These work magnetically and you basically only lose the 3-5W of power they produce.


I want a 25w+ one that normally engages while braking, with a capacitor for non-braking times


I don't even notice a difference.


A spare battery in your saddle-pack solves most of those problems.

If you're worried about being without light, a (typical) dynamo system is more complicated and exposed than a battery system, so will be more prone to failure.


I suppose you’re a casual cyclist and you don’t commute on a daily basis.

If you commute on a daily basis, a hub dynamo and light system is bliss. Just hop on the bike and go. I have used bikes with Shimano, SP and SON hubs for thousands of kms in all kinds of weather and never really experienced a fault. It's as simple as car lights - you just take them for granted.

With battery-powered lights you need to take them off and put them back, recharge them, remember to bring them with you, and not lose them. A spare battery pack is not enough (you need front and rear) and may not work while cycling (not all lights can be charged while turned on). And low-quality battery-powered lights tend to break quickly (2-3 years), while I now realize one of my b+m systems is 10 years old already. Good battery-powered lights will probably last longer, but they're as expensive as dynamo-powered ones.

So yeah, battery is ok and cheap for casual cycling, but very suboptimal if you want reliable lights every day throughout the year.


You're comparing a hub dynamo with cheap low-capacity rechargeable lights.

Rechargeable lights from the usual suspects are generally not good: they are expensive for what they are, have low capacity, and don't have swappable standard-size batteries.

They make dynamo systems look like a good deal, but if typical battery-powered lights were even close to their theoretical optimum I think people would be much less enthusiastic about dynamos.


Good lights last longer (I have a Blackburn lamp and it's still working) but it's still less convenient than a dynamo. You need to remove and remount it every time, with the risk of dropping or forgetting it.

Of course, if you mounted fixed battery-powered lights and could just swap a USB-C battery, maybe it would compete with a dynamo. But an easily swappable battery would still be easy to steal (unless it's inside the frame with a lock or something like that).


A typical non-hub dynamo lasts like 30 years parked outside, and nice ones cost like $10 on Amazon. You smack it on and it starts whining at you. They are only barely more complicated than a stew pan.

Hub dynamos seem a bit more fragile, with a wire extending into the lightbulb, but I've never heard of reliability being a concern with them...


Why do you think hub dynamos are more fragile? And what do you mean about a lightbulb?


I mean, it does have, like, a wire between the hub and the lamp body. I think the parent comment mentioned something about fragile wires when I posted the reply; I should have quoted it.


    A spare battery in your saddle-pack 
And then you have to worry about recharging/replacing two batteries instead of one. Yay for progress!


Don't worry, it will be stolen in 1 day.


It's not about weight, it's about having the light work when you need it.

Ensuring the battery isn't empty when you want to ride at night is not always convenient.

I'm talking about commuting here, not sports.


They make a ton of sense when you’re riding long distance and when you don’t have access to a charger.


Another solution is to just cast the result to a uint8_t; with this, clang 19.1.0 gives the same assembly:

https://gcc.godbolt.org/z/E5oTW5eKe
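i.e. something like this (a sketch -- the function name and signature are guessed from the article; the cast is the only change):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    uint64_t count_even_values_v1(const std::vector<uint8_t> &values) {
        // casting the count to uint8_t hints that an 8-bit accumulator is fine
        return static_cast<uint8_t>(std::count_if(
            values.begin(), values.end(), [](uint8_t v) { return v % 2 == 0; }));
    }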


Like @wffurr mentioned, this is indeed discussed in a footnote. I just added another remark to the same footnote:

"It's also debatable whether or not Clang's 'optimization' results in better codegen in most cases that you care about. The same optimization pass can backfire pretty easily, because it can go the other way around too. For example, if you assigned the `std::count_if()` result to a local `uint8_t` value, but then returned that value as a `uint64_t` from the function, then Clang will assume that you wanted a `uint64_t` accumulator all along, and thus generates the poor vectorization, not the efficient one."


I'm not sure how "it can go the other way around too" -- in that case (assigning to a uint8_t local variable), it seems like that particular optimisation is just not being applied.

Interestingly, if the local variable is "volatile uint8_t", the optimisation is applied. Perhaps with an uint8_t local variable and size_t return value, an earlier optimisation removes the cast to uint8_t, because it only has an effect when undefined behaviour has been triggered? It would certainly be interesting to investigate further.

In general I agree that being more explicit is better if you really care about performance. It would be great if languages provided more ways to specify this kind of thing. I tried using __builtin_expect to trigger this optimisation too, but no dice.

Anyway, thanks for the interesting article.


> I'm not sure how "it can go the other way around too" -- in that case (assigning to a uint8_t local variable), it seems like that particular optimisation is just not being applied.

So the case that you described has 2 layers. The internal std::count_if() layer, which has a 64-bit counter, and the 'return' layer of the count_even_values_v1() function, which has an 8-bit type. In this case, Clang propagates the 8-bit type from the 'return' layer all the way to the inner std::count_if() layer, which effectively means that you're requesting an 8-bit counter, and thus Clang generates the efficient vectorization.

However, say that you have the following 3 layers: (1) internal std::count_if() layer with a 64-bit counter; (2) local 8-bit variable layer, to which the std::count_if() result gets assigned; (3) 'return' layer with a 64-bit type. In this case the 64-bit type from layer 3 gets propagated to the inner std::count_if() layer, which will lead to a poor vectorization. Demo: https://godbolt.org/z/Eo13WKrK4 . So this downwards type-propagation from the outermost layer into the innermost layer doesn't guarantee optimality. In this case, the optimal propagation would've been from layer 2 down to layer 1 and up to layer 3.
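In code, the three layers look something like this (just a sketch; the linked demo is the real thing):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    uint64_t count_even_values(const std::vector<uint8_t> &values) {
        // layer 1: std::count_if's internal 64-bit counter
        // layer 2: the result is narrowed into an 8-bit local
        uint8_t count = std::count_if(
            values.begin(), values.end(), [](uint8_t v) { return v % 2 == 0; });
        // layer 3: widened back to 64 bits on return
        return count;
    }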

Note: I'm not familiar with how the LLVM optimization pass does this exactly, so take this with a huge grain of salt. Perhaps it does indeed 'propagate' the outermost type to the innermost layer. Or perhaps the mere fact that there are more than 2 layers makes the optimization pass not happen at all. Either way, the end result is that the vectorization is poor.


I've had a look at what's going on in LLVM, and we're both a bit wrong :)

This optimisation is applied by AggressiveInstCombinePass, after the function has been completely inlined. In cases where it is applied, the i64 result of the count is truncated to i8, and this gets propagated to the counter.

In the case where the result is assigned to a local variable, an earlier pass (before inlining) turns a truncate (for the cast) followed by a zero extend (for the return) into an and with 0xff. This persists, and AggressiveInstCombinePass then doesn't propagate this to the counter.

I've posted some selected bits of LLVM IR here:

https://gist.github.com/tomjnixon/d205a56ffc18af499418965ab7...

These come from running clang with "-mllvm=-print-after-all" and grepping for "^define.*_Z20count_even_values_v1RKSt6vectorIhSaIhEE"

This is why I don't see this as an optimisation pass "backfiring" or "go[ing] the other way around" (well, except for the "trunc,extend->and" one, which we weren't talking about). Rather, it's just an optimisation not being applied. That might just be a language thing.


Thanks for looking into it!

I modified the footnote to get rid of the misleading statements regarding the 'backfiring' of the optimization. :)


Wouldn't that violate the as-if rule? If you assign to a u8 in layer 2 then the compiler must truncate regardless of the widening of the value upon return. It can't just ignore the narrowing assignment.


At the very end there's a "movzx eax, dl", i.e. zero-extend the low 8 bits of the accumulated value.


Just to correct my own comment:

> with an uint8_t local variable and size_t return value, an earlier optimisation removes the cast to uint8_t, because it only has an effect when undefined behaviour has been triggered

In this case, there is no undefined behaviour, because a narrowing cast to an unsigned type is well-defined. So, this could never have been a good explanation.


I was hoping you could just provide an iterator_traits with a uint8_t difference type, but this is tied to the iterator type rather than specified separately, so you'd need some kind of iterator wrapper to do this.


Yeah, I thought about that too, but if you want to process more than 255 values this might not be valid, depending on the implementation of count_if.


Which is discussed in the post and doesn’t work in GCC.


Oh right, I didn't see it in a couple of passes (and searching for cast); for anyone else looking it's in the 3rd footnote. Thanks.


This is incoherent to me. Your complaints are about packaging, but the Elixir wrapper doesn't deal with that in any way -- it just wraps uv, which you could use without Elixir.

What am I missing?

Also, typically when people say things like

> Tell me, which combination of the 15+ virtual environments, dependency management and Python version managers

It means they have been trapped in a cycle of thinking "just one more tool will surely solve my problem", instead of realising that the tools _are_ the problem, and that if you just use the official methods (venv/virtualenv and pip with a stock Python install), things mostly just work.
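By "the official methods" I mean nothing fancier than something like this (Linux/macOS; the paths are just an example):

    python3 -m venv .venv
    . .venv/bin/activate
    pip install -r requirements.txt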


I agree. Python certainly had its speedbumps, but it's utterly manageable today and has been for years and years. It seems like people get hung up on there not being one official way to do things, but I think that's been great, too: the competition gave us nice things like Poetry and uv. The odds are slim that a Rust tool would've been accepted as the official Python.org-supplied system, but now we have it.

There are reasons to want something more featureful than plain pip. Even without them, pip+virtualenv has been completely usable for, what, 15 years now?


I've seen issues with pip + virtualenv (SSL lib issues, IIRC). I've always used those at minimum and have still run into problems. (I like to download random projects to try them out.) I've also seen issues with Python projects silently becoming stale and not working, or Python projects walking over other Python projects, because pip + virtualenv does NOT encompass all Python deps down to the metal. It also means you can't have two command-line Python apps available in the same shell environment, because PATH would have to prefer one or the other at some point.

Here's a question: if you don't touch a project for a year, do you expect it to still work, or not? If your answer is the latter, then we simply won't see eye to eye on this.


> mostly just work

That's not good enough. If I'm in the business of writing Python code, I (ideally) don't want to _also_ be in the business of working around Python design deficiencies. Either solve the problem definitively, or don't try to solve it at all, because the middle road just leads to endless headaches for people WHILE ALSO disincentivizing a better solution.

Node has better dependency management than Python- And that's really saying something.


I don't see why it should be so binary. I said it "mostly" just works because there are no packaging systems which do exactly what you want 100% of the time.

I've had plenty of problems with node, for example. You mentioned nix, which is much better, but also comes with tons of hard trade-offs.

If a packaging tool doesn't do what I wanted, but I can understand why, and ultimately the tool is not to blame, that's fine by me. The issues I can think of fit reasonably well within this scope:

- requirement version conflicts: packages are updated by different developers, so sometimes their requirements might not be compatible with each other. That's not pip's fault, and it tells you what the problem is so you can resolve it.

- code that's not compatible with updated packages: this is mainly down to requirement versions which are specified too loosely, and not the fault of pip. If you want to lock dependencies to exact versions (like npm does by default) you can do this too (with a pinned requirements.txt). It's a bit harsh to blame pip for not doing this for you; it's like blaming npm for not committing your package-lock.json. It would be better if your average Python developer was better at this.

- native library issues: some packages depend on you having specific libraries (and versions thereof) installed, and there's not much that pip can do about that. This is where your "ssl issues" come from. This is pretty common in Python because it's used so much as "glue" between native libraries -- all the most fun packages are wrappers around native code. This has got a lot better in the past few years with manylinux wheels (which bundle native libraries). These require a lot of non-Python-specific work to build, so I don't blame pip where they don't exist.

It's not perfect, but it's not a big enough deal to rant about or reject entirely if you would otherwise get a lot of value out of the ecosystem.


> If I'm in the business of writing Python code

The thing is, most people who are writing python code are not in the business of writing python code. They're students, scientists, people with the word "business" or "analyst" in their title. They have bigger fish to fry than learning a different language ecosystem.

It took 30 years to get them to switch from Excel to Python. I think it's unrealistic to expect that they're going to switch away from Python any time soon. So for better or worse, these are problems that we have to solve.


Real justice would be changing the laws and sentencing guidance (through a democratically legitimate process), and re-evaluating the sentences of everyone affected.

Whatever you think about the outcome in this case, it is the moral equivalent of vigilante justice. It is unfair to others convicted under the same regime, who don't happen to be libertarian icons who can be freed in exchange for a few grubby votes.


Despite that, the Employment Rights Bill is at least moving in the right direction.

