Anyone know if there is a public events feed/firehose for the npm ecosystem? Similar to GitHub's public events feed?
We at ClickHouse love big data, and it would be super cool to download and analyse patterns across all this data & provide some tooling to help combat this widespread issue.
Is it just me who thinks this could have been prevented if the npm admins put in some sort of cool-off period, so that new versions or packages can only be downloaded "x" hours after being published? That way the npm maintainer would get notifications by email and could react immediately. And if it is an urgent fix, perhaps there could be a process for an npm admin to approve and bypass the publication cool-off period.
Disclaimer: I don't know the npm/nodejs community well enough, so I might be completely off the mark here.
It would be fine if you could still manually specify those versions, e.g. npm i duckdb@1.3.3 installs 1.3.3, but duckdb@latest or duckdb@^1.3 stays on 1.3.2 until 1.3.3 is ~a week old.
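For what it's worth, that rule is easy to sketch client-side against the public registry metadata, since the packument for a package maps every published version to its timestamp under "time". A rough sketch (the helper name and the one-week window are mine, not anything npm ships):

    // Sketch: resolve the newest version that is at least a week old,
    // using https://registry.npmjs.org/<pkg> ("time" maps version -> ISO date).
    const MIN_AGE_MS = 7 * 24 * 60 * 60 * 1000;

    async function resolveCooledVersion(pkg: string): Promise<string | null> {
      const doc = await (await fetch(`https://registry.npmjs.org/${pkg}`)).json();
      const times: Record<string, string> = doc.time;
      // Packument version keys are roughly in publish order, so walking
      // them newest-first is a heuristic, not strict semver ordering.
      const versions = Object.keys(doc.versions).reverse();
      for (const v of versions) {
        if (Date.now() - Date.parse(times[v]) >= MIN_AGE_MS) return v;
      }
      return null; // nothing has aged past the cool-off yet
    }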
Versions with a serious vulnerability should be deprecated by the maintainer, which then warns you to use a newer version when installing. Yes, if an npm account is compromised the attacker could deprecate everything except their malicious version, but it would still significantly reduce the attack surface by requiring manual intervention vs the current npm install foo@latest -> you're fucked.
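That deprecation signal is already machine-readable, by the way: a deprecated version carries a "deprecated" message string in the packument, so a resolver could refuse such versions automatically. A tiny sketch (same packument as above):

    // Sketch: a version is deprecated iff its packument entry
    // carries a "deprecated" message string.
    async function isDeprecated(pkg: string, version: string): Promise<boolean> {
      const doc = await (await fetch(`https://registry.npmjs.org/${pkg}`)).json();
      return typeof doc.versions[version]?.deprecated === "string";
    }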
NPM could also flag releases that don't have a corresponding GitHub tag (for packages that are hosted on GitHub); most of these attacks publish directly to NPM without any git changes.
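A rough sketch of that check against the two public APIs (repo coordinates are taken as inputs here; real code would parse them from the packument's repository field, and pagination/auth are omitted):

    // Sketch: flag npm versions that have no matching git tag
    // ("1.2.3" or "v1.2.3") in the package's GitHub repo.
    async function versionsWithoutTags(pkg: string, owner: string, repo: string) {
      const doc = await (await fetch(`https://registry.npmjs.org/${pkg}`)).json();
      const res = await fetch(
        `https://api.github.com/repos/${owner}/${repo}/tags?per_page=100`
      ); // first page only; enough for a sketch
      const tags = new Set(
        ((await res.json()) as { name: string }[]).map((t) => t.name.replace(/^v/, ""))
      );
      return Object.keys(doc.versions).filter((v) => !tags.has(v));
    }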
They could definitely add a maker-checker process (similar to code review) for new versions and make it a requirement for public packages with more than x downloads per week.
They could force release candidates that package managers don't automatically update to, and let researchers analyse the packages before the real release. (npm's dist-tags already work this way: a version published under a tag like "next" isn't picked up by a plain npm install.)
I don't see any mention of e2e encryption; that would be nice. But I love the WebRTC usage here!
Shameless plug: I built a small file-sharing tool with in-browser encryption and added a "tunnel" feature to make sharing between personal devices easier: https://www.relaysecret.com/tunnel/
The AES-256 key is derived from hashing the tunnel name, and it is never sent to the backend because the name sits behind the anchor tag (URL fragment); the tunnel id the server sees is just a substring of this hash. It is quite fun to use for sharing files. A file never lives more than 10 days (bucket lifecycle), but the user can reduce this to delete-on-download, and the code can easily be reviewed (the backend is a single lambda function that generates signed URLs) :)
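For illustration, roughly that scheme in WebCrypto (names are mine, not the actual relaysecret source; I also derive the server-visible id from a second hash so it shares no bytes with the key, which is an assumption on my part):

    // Sketch: the secret lives in location.hash, which the browser
    // never sends to the server.
    async function deriveTunnel(secret: string) {
      const keyBytes = await crypto.subtle.digest(
        "SHA-256",
        new TextEncoder().encode(secret)
      ); // 32 bytes of key material -> AES-256
      const key = await crypto.subtle.importKey(
        "raw", keyBytes, { name: "AES-GCM" }, false, ["encrypt", "decrypt"]
      );
      // Hash again so the id sent to the backend reveals nothing about the key.
      const idBytes = await crypto.subtle.digest("SHA-256", keyBytes);
      const tunnelId = [...new Uint8Array(idBytes).slice(0, 8)]
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
      return { key, tunnelId };
    }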
It has a few "features" that allowed me to go through a repository quickly:
- It prompts the user and recommends the hash; it also gives the user the URL of the current tag/action so they can double-check that the hash value matches and review the code if needed.
- Once you accept a change, it keeps that in a JSON file, so future uses of that exact version of the action are pinned as well and won't be reprompted.
- It also lets you ignore version tags for GitHub Actions coming from well-known, reputable organisations (like "actions", which belongs to GitHub), as you may want to keep updating those so you still receive security fixes and hotfixes for backward-incompatible changes.
This way I have full control over what to pin and what not to, and the config file is stored in the .github folder so I can go back, rerun the tool, and repin everything; a sketch of the config shape follows below.
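To give the idea, a config of roughly this shape is enough (sketched in TypeScript; the real file is plain JSON, and the field names here are just for illustration):

    // Hypothetical shape of the pin config described above.
    interface ActionPinConfig {
      // exact "owner/action@tag" -> verified commit SHA, reapplied on reruns
      pins: Record<string, string>;
      // org prefixes left on mutable tags so fixes keep flowing in
      trustedOrgs: string[];
    }

    // Example instance (values are made up):
    const config: ActionPinConfig = {
      pins: { "some-org/setup-thing@v2": "3f1c2d9..." },
      trustedOrgs: ["actions/"],
    };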
This is good; just bear in mind that if you pin the hash of an external composite action and that action pulls in another one without a hash, you're still vulnerable through that transitive dependency.
Yea, hence it prompts you to check the first time; but once you verify the hash for a particular version of an action, it automatically applies that hash to the same version of the action everywhere. You can also reuse the same config for all your other repos, so it is only tedious the first time; after that it is pretty quick to apply to the rest of the org :)
The tool is indeed meant for a semi-automatic flow, to make sure a human eye has looked at the action being used.
And this is how the Chinese models will win in the long term, perhaps... They will be trained on everything and anything without consequences, and we will all use them because these models are smarter (except for areas like Chinese history and geography). I don't have the right answer on what can be done here to protect copyright, or rather to give something back to the authors of a paper, without all these millions of dollars wasted in lawsuits.
There's no winning though. There's no real moat when it comes to AI, remember. There will be tons of models with similar, squishy kinds of unique attributes (squishy meaning they work great sometimes and not other times, and that's just normal). And which one to use will mostly be decided by cost and compliance.