Hacker News | ebfe1's comments

I will look into that, thanks for the recommendation!


TIL! Thank you!


Anyone know if there is a public events feed/firehose for the npm ecosystem? Similar to GitHub's public events feed?

We at ClickHouse love big data, and it would be super cool to download all this data, analyse patterns in it, and provide some tooling to help combat this widespread issue.


I found many links from StepSecurity and socket.dev, but Aikido seems to have the most up-to-date information about this ongoing npm hack.


Is it just me, or could this have been prevented if npm admins put in some sort of cool-off period, so that new versions of packages can only be downloaded "x" hours after being published? That way the maintainer would get an email notification and could react immediately. And for an urgent fix, perhaps there could be a process for npm admins to approve bypassing the publication cool-off period.

Disclaimer: I don't know enough about the npm/Node.js community, so I might be completely off the mark here.


If I was forced to wait to download my own package updates I would simply stop using npm altogether and use something else.


It would be fine if you could still manually specify those versions, e.g. npm i duckdb@1.3.3 installs 1.3.3, but duckdb@latest or duckdb@^1.3 stays on 1.3.2 until 1.3.3 is ~a week old.

https://github.com/pnpm/pnpm/issues/9921
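The resolution rule described above could be sketched like this (hypothetical logic, not an actual npm/pnpm feature; the `Release` shape and the one-week window are assumptions):

```typescript
// Hypothetical cool-off resolver: exact versions install immediately,
// but "latest"/range requests only see releases past the cool-off window.
type Release = { version: string; publishedAt: Date };

const COOL_OFF_MS = 7 * 24 * 60 * 60 * 1000; // ~a week

function resolve(
  spec: string,            // e.g. "1.3.3" or "latest"
  releases: Release[],     // assumed sorted oldest-first
  now: Date = new Date()
): string | undefined {
  // An exact version is always honoured, even if freshly published.
  const exact = releases.find(r => r.version === spec);
  if (exact) return exact.version;

  // "latest" (range matching elided) only sees matured releases.
  const matured = releases.filter(
    r => now.getTime() - r.publishedAt.getTime() >= COOL_OFF_MS
  );
  return matured.at(-1)?.version;
}
```

So a compromised release published yesterday is invisible to `@latest` users, while anyone who deliberately pins it still gets it.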


Except they'd have to have an override for when there's a zero day, at which point we're back where we started.


Versions with a serious vulnerability should be deprecated by the maintainer, which then warns you to use a newer version when installing. Yes, if an npm account is compromised the attacker could deprecate everything except their malicious version, but it would still significantly reduce the attack surface by requiring manual intervention, vs. the current npm install foo@latest -> you're fucked.


Brilliantly simple, that would work for me!


It could be done as a percentage rollout over time, like app stores do.
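One deterministic way such a staged rollout could work (purely illustrative; the hashing scheme and bucket size are my assumptions):

```typescript
// Each installer is deterministically bucketed into [0, 100) by hashing,
// and a new version is only served once the rollout percentage (ramped up
// over time, e.g. 1% -> 10% -> 100%) has reached that bucket.
import { createHash } from "node:crypto";

function bucket(installId: string, version: string): number {
  const h = createHash("sha256").update(`${installId}:${version}`).digest();
  return h.readUInt16BE(0) % 100; // stable bucket per (installer, version)
}

function shouldServeNewVersion(
  installId: string,
  version: string,
  rolloutPercent: number
): boolean {
  return bucket(installId, version) < rolloutPercent;
}
```

Because the bucket is stable, an installer that got the new version at 10% keeps getting it at 50%, and a poisoned release only reaches a small slice of users before someone notices.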


npm could also flag releases that don't have a corresponding GitHub tag (for packages hosted on GitHub); most of these attacks publish directly to npm without any git changes.
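The core of that check could be as simple as diffing the registry's version list against the repo's tags (a sketch with assumed inputs; real tag conventions vary, e.g. monorepos often tag `pkg@1.2.3` rather than `v1.2.3`):

```typescript
// Flag npm versions that have no matching git tag in the source repo.
// Assumes tags follow the common "v<semver>" convention.
function versionsWithoutTag(
  npmVersions: string[],  // e.g. keys of the registry packument's "versions"
  repoTags: string[]      // e.g. parsed from `git ls-remote --tags`
): string[] {
  const tagged = new Set(
    repoTags.map(t => t.replace(/^v/, "")) // "v1.2.3" -> "1.2.3"
  );
  return npmVersions.filter(v => !tagged.has(v));
}
```

A version present on the registry but absent from the repo's tags is exactly the "published directly to npm" pattern these attacks show.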


I would love this for every dependency manager, and double extra bonus for "the tag NOW isn't the tag from when the dep was published"

But this is coming from GitHub, whose belief in sliding "v1" tags on random action repos is how one ends up with https://news.ycombinator.com/item?id=43367987


They could definitely add a maker-checker process (similar to code review) for new versions and make it a requirement for public projects with x number of downloads per week.


They could force release candidates that package managers don't automatically update to, but that let researchers analyse the packages before the real release.


I don't see any mention of e2e encryption, which would be nice, but I love the WebRTC usage here!

Shameless plug: I built a small file-sharing tool with in-browser encryption and added a "tunnel" feature to make sharing between personal devices easier: https://www.relaysecret.com/tunnel/

The AES-256 key is derived from hashing the tunnel name but is never sent to the backend, since the tunnel name lives behind the anchor (#) in the URL; the tunnel id the server sees is a substring of this hash. It is quite fun to use for sharing files. A file never lives more than 10 days (bucket lifecycle), but the user can shorten this to delete-on-download, and the code can easily be reviewed (the backend is a single Lambda function that generates signed URLs) :)
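For illustration, the fragment-based derivation described above might look roughly like this (the exact hash function and substring length relaysecret uses are assumptions here):

```typescript
// Sketch of the scheme: the secret tunnel name lives after "#" in the URL,
// so browsers never send it to the server. Both the AES-256 key material
// and the server-visible tunnel id are derived from it by hashing.
import { createHash } from "node:crypto";

function deriveTunnel(tunnelName: string) {
  const hash = createHash("sha256").update(tunnelName).digest("hex");
  return {
    aesKeyHex: hash,             // 32 bytes of key material for AES-256
    tunnelId: hash.slice(0, 16), // substring shared with the backend
  };
}

// e.g. https://www.relaysecret.com/tunnel/#my-secret-name
// only the derived tunnelId ever reaches the server.
```

The nice property is that knowing the tunnel id doesn't let the server recover the key, since that would mean inverting the hash.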


WebRTC connections are inherently end-to-end encrypted.

They use a self-signed certificate for DTLS-SRTP, and the fingerprint of that is sent over the signalling channel.
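For example, the fingerprint travels as an `a=fingerprint:` attribute in the SDP exchanged over signalling, and can be pulled out like this (illustrative only):

```typescript
// Extract the DTLS certificate fingerprint from an SDP blob.
// Each peer verifies the remote certificate against this value,
// which is what makes the DTLS-SRTP session end-to-end encrypted
// (assuming the signalling channel itself isn't tampered with).
function extractFingerprint(sdp: string): string | undefined {
  const m = sdp.match(/^a=fingerprint:sha-256 ([0-9A-Fa-f:]+)/m);
  return m?.[1];
}
```

The caveat is real, though: whoever controls the signalling server could swap fingerprints and man-in-the-middle the session, so "inherently e2e" holds only against passive network observers.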


Honest question: So who gets this $1.38B? The user? Some company? The government/treasury?


After the tj-actions hack, I put together a little tool that goes through all the GitHub Actions in a repository and replaces them with the commit hash of the version:

https://github.com/santrancisco/pmw

It has a few "features" that let me go through a repository quickly:

- It prompts the user and recommends the hash; it also gives the user the URL of the current tag/action so they can double-check that the hash value matches and review the code if needed.

- Once you accept a change, it keeps that in a JSON file, so future uses of the exact same version of the action will be pinned as well and won't be re-prompted.

- It also lets you ignore version tags for GitHub Actions coming from well-known, reputable organisations (like "actions", which belongs to GitHub), since you may want to keep updating those to receive hotfixes for backward-incompatible changes or security issues.

This way I have full control over what to pin and what not to, and the config file is stored in the .github folder so I can go back, rerun it, and repin everything.


This is good, just bear in mind that if you pin the hash of an external composite action and that action pulls in another one without a hash, you're still vulnerable through that transitive dependency.


oh damn - that is a great point! thanks matey!


I don't know if your tool already does this, but it would be helpful if there were an option to output the version as a comment of the form

action@commit # semantic version

That makes it easy to quickly determine which version the hash corresponds to. Thanks.


Yeap - that is exactly what it does ;)

Example:

uses: ncipollo/release-action@440c8c1cb0ed28b9f43e4d1d670870f059653174 #v1.16.0

And anything that previously had @master becomes the following, with the hash from the day it was pinned and "master-{date}" as the comment:

uses: ravsamhq/notify-slack-action@b69ef6dd56ba780991d8d48b61d94682c5b92d45 #master-2025-04-04


I've been using https://github.com/stacklok/frizbee to lock down to commit hash. I wonder how this tool compares to that.


Having control is good, but reading all the code yourself seems unrealistic. We need something like crev or cargo-vet.


Yeah, hence it prompts you to check the first time, but once you verify the hash for a particular version of an action, it automatically applies that hash to the same version of the action everywhere. You can also reuse the same config for all your other repos, so it is only tedious the first time; after that it is pretty quick to apply to the rest of the org :)

The tool is indeed meant for a semi-automatic flow, to ensure a human eye has looked at the actions being used.


renovate can be configured to do that too :)


Do you have an example config?

Trying to get the same behavior with renovate :)



OK... where is the form so that, as an ex-WhatsApp user, I can get a piece of that $167M pie? Oh... there isn't one... :)


And this is how Chinese models may win in the long term, perhaps... They will be trained on anything and everything without consequences, and we will all use them because they are smarter (except in areas like Chinese history and geography). I don't have the right answer for how to protect copyright here, or rather how to compensate the authors of a paper, without millions of dollars being wasted on lawsuits.


There's no winning though. Remember, there's no real moat when it comes to AI. There will be tons of models with similar, squishy unique attributes (squishy meaning they work great sometimes and not other times, and that's just normal), and which one to use will mostly be decided by cost and compliance.

