
The server could partially mitigate such an attack, at the cost of a bit of bandwidth, by appending junk to round file sizes up to the nearest multiple of, say, 1MB. Not something I envision your average site doing, but a site like Cryptome...
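The rounding idea above can be sketched in a few lines. This is a minimal illustration, not anything Cryptome actually does; the 1 MB block size and the function name are just assumptions for the example:

```python
import os

BLOCK = 1_000_000  # illustrative 1 MB padding granularity

def pad_to_block(data: bytes, block: int = BLOCK) -> bytes:
    """Append random junk so the result's length is a multiple of `block`.

    An eavesdropper watching transfer sizes then learns only a coarse
    size bucket, not the exact file size.
    """
    remainder = len(data) % block
    if remainder == 0:
        return data
    return data + os.urandom(block - remainder)
```

Random bytes (rather than zeros) are used so the junk doesn't compress away if a layer below applies compression.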


I’ve been thinking about this too. It would be great if small amounts of junk sufficed, but that might just be wishful thinking on my part.

Apparently some folks at Stanford/Microsoft researched this problem already in ’02: http://infolab.stanford.edu/~qsun/research/identification.pd...


Cryptome also has the advantage of mostly serving individual large files, rather than HTML/CSS/JS with a lot of bursty dependent requests.

The exponential rather than linear padding suggested by that paper sounds like a good idea, as it better reflects the typical distribution of file sizes.
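As a rough sketch of what exponential padding looks like (this is the general idea, not the paper's exact scheme): pad each file up to the next power of two, so the observed size leaks only about log2 of the true size, and the relative overhead stays bounded regardless of how large the file is.

```python
import os

def pad_exponential(data: bytes) -> bytes:
    """Pad to the next power of two; overhead is at most ~2x."""
    n = len(data)
    if n <= 1:
        return data
    target = 1 << (n - 1).bit_length()  # next power of two >= n
    return data + os.urandom(target - n)
```

Compare with fixed 1 MB blocks: a 100 KB file padded to 1 MB suffers 10x overhead, while power-of-two padding never more than doubles it, and large files still get absolute padding large enough to be useful.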

Small amounts of junk might be significantly more effective than nothing. I had to remind myself this is not like a crypto timing attack, where the attacker usually gets to retry the operation as many times as they want, making even large random padding potentially susceptible to statistical analysis; a user will usually download a particular document from a server at most a few times.

However, these days I'd say the bandwidth of good Internet connections is high enough compared to typical single-digit-MB PDF sizes that relatively large amounts of junk shouldn't inconvenience users too much. I just tested and got ~2.2MB/s from Cryptome, which is not terribly high, but definitely high enough that, say, doubling a 5MB PDF to 10MB would be no big deal.

For the user, anyway. I have no idea what Cryptome's bandwidth costs look like.
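Back-of-the-envelope check of the numbers above, using the ~2.2 MB/s figure measured from Cryptome:

```python
# Extra download time from doubling a 5 MB PDF to 10 MB
# at the ~2.2 MB/s observed in the test above.
original_mb = 5.0
padded_mb = 10.0
throughput_mb_s = 2.2

extra_seconds = (padded_mb - original_mb) / throughput_mb_s
print(f"{extra_seconds:.1f} extra seconds")  # roughly two seconds
```

A couple of extra seconds per download is negligible for the user; the server, of course, pays that bandwidth on every download.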



