
> Let's do an episode where you explain why this isn't as big a deal as we made it out to be! We're game.

Sure, sounds like a plan. In order to actually present a balanced view and not end up with another one-sided episode like the last one, you might want to get the researchers involved again too.

And to be clear: my biggest issue with the podcast was the amount of (literal) laughing and derision and general "oh look how stupid they are" obnoxious attitude, rather than the claims of the paper. Our only point of contention with the paper is over whether server-controlled group membership is as big a disaster as you claim it to be, given you have to verify users to avoid interception anyway, at which point malicious devices are clearly flagged so clients can take evasive action (e.g. big red warnings, or refuse to encrypt messages in that room, etc.). From a purely theoretical perspective: yes, it's a bug that a server can add ghost devices at all. From a practical perspective, please show me the attack where a server or HTTPS MITM can add ghost devices to a verified user without the malicious user and room turning bright red like a TLS cert warning.
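To illustrate the evasive action described above, here is a minimal, hypothetical sketch of how a client might detect a server-injected "ghost" device and refuse to encrypt. The function names and data shapes are illustrative assumptions, not the real Matrix SDK API: it assumes the client keeps a local set of verified key fingerprints per user and compares each device in the room against it before encrypting.

```python
# Hypothetical sketch (not the real Matrix SDK API): detect devices that a
# server may have injected for an already-verified user, and refuse to
# encrypt to the room while any such device is present.

def check_room_devices(room_devices, verified_fingerprints):
    """Return (user_id, device_id) pairs for devices belonging to verified
    users whose key fingerprints are not locally verified/cross-signed."""
    suspects = []
    for user_id, devices in room_devices.items():
        for device_id, fingerprint in devices.items():
            if (user_id in verified_fingerprints
                    and fingerprint not in verified_fingerprints[user_id]):
                suspects.append((user_id, device_id))
    return suspects


def send_encrypted(room_devices, verified_fingerprints, plaintext):
    suspects = check_room_devices(room_devices, verified_fingerprints)
    if suspects:
        # Evasive action: surface a big red warning and refuse to encrypt,
        # rather than silently including the unverified device.
        raise RuntimeError(f"Unverified devices in room: {suspects}")
    n = sum(len(d) for d in room_devices.values())
    return f"<ciphertext for {n} verified devices>"
```

Under this model a server (or HTTPS MITM) can still *add* the ghost device, but the client flags it immediately, which is the practical mitigation being argued about above.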

Anyway, once we've landed the TOFU and client-controlled-membership work (which is already overdue) I'd be very happy to come on the show to explain how the mitigations work, where the issues came from in the first place, and why we think the paper seriously overclaimed (which is a shame, given the actual implementation vulnerabilities were great research).

Thanks for the invite! :)



I'm absolutely rooting for you doing an episode together. Please listen to some other episodes before dismissing the content because of the format - they're fantastic.

> whether server-controlled group membership is as big a disaster as you claim it to be, given you have to verify users to avoid interception anyway, at which point malicious devices are clearly flagged so clients can take evasive action (e.g. big red warnings, or refuse to encrypt messages in that room, etc.).

I really would love to see this addressed: the UX of crypto. In 2022 we should absolutely refrain from leaving the decision to the user w.r.t. whether something is safe or trustworthy. Users suck. Don't trust users to look at warnings unless your intention is to cater only to those of us who still have a GPG fingerprint in their ~/.signature.

It. Does. Not. Work.

"Web of Trust" was a great idea but failed to scale (when was the last time you received a gpg encrypted mail from anyone).

Signal served us well until SGX.fail (https://sgx.fail), and it's why something like Matrix deserves to succeed.

Systems get better under pressure (when questioned) so I really hope y'all get a chance to talk.


> In 2022 we should absolutely refrain from leaving the decision to the user w.r.t if it is safe or trustworthy. Users suck. Don't trust users to look at warnings

Yup, totally. Which is why we are addressing it. The current behaviour dates back to 2016 and reflects the transition from plaintext to E2EE Matrix that happened between then and now - much as TLS browser warnings have evolved over the same timeframe, eventually being supplemented by HSTS and friends. It doesn't mean the previous behaviour was catastrophic, though - just as evolving browser warning semantics didn't kill TLS.


Hit us up! You can email me at the address in my profile, and David and Deirdre will work with y'all to figure out a schedule.


Will do. Just want to land at least the TOFU work first, as fun as it'd be to sit there saying "coming soon" on loop.



