If what you’re trying to do is publish prepared images of yourself that won’t be facially recognized as you, then the answer is “not very much at all, actually” — see https://sandlab.cs.uchicago.edu/fawkes/. Adversarially prepared images can still look entirely like you, with all of the facial-recognition-busting data encoded at an almost steganographic level relative to regular human perception.


My understanding is that this (interesting) project has been abandoned, and that face recognition models have since been trained to defend against it.


Very likely correct in the literal sense (you shouldn’t rely on the published software), but I believe the approach it uses is still relevant and generalizable: you can take whatever the current state-of-the-art facial recognition model is and follow the steps in their paper to produce an adversarial image cloaker that fools that model while being minimally perceptually obvious to a human. (Rough sketch of the recipe below.)

(As the models get better, the produced cloaker retains its ability to fool the model, while the “minimally perceptually obvious to a human” property is what gets sacrificed — even their 2022 version of the software started to do slightly-evident things like visibly increasing the contour of a person’s nose.)
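
The recipe is essentially a targeted adversarial perturbation against a face-embedding model. A minimal PGD-style sketch in PyTorch, assuming `embed` is some differentiable face-embedding network and `target_emb` is the embedding of a decoy identity — the names and hyperparameters here are placeholders, not Fawkes’ actual code:

    import torch
    import torch.nn.functional as F

    def cloak(image, embed, target_emb, eps=0.03, step=0.005, iters=100):
        # image: (1, 3, H, W) float tensor in [0, 1]
        # embed: differentiable face-embedding model
        # target_emb: embedding of a decoy identity
        delta = torch.zeros_like(image, requires_grad=True)
        for _ in range(iters):
            emb = embed(torch.clamp(image + delta, 0.0, 1.0))
            # Pull the cloaked image's embedding toward the decoy identity
            # (maximize cosine similarity, i.e. minimize its negative).
            loss = -F.cosine_similarity(emb, target_emb).mean()
            loss.backward()
            with torch.no_grad():
                delta -= step * delta.grad.sign()
                delta.clamp_(-eps, eps)  # keep the pixel change small
                delta.grad.zero_()
        return torch.clamp(image + delta, 0.0, 1.0).detach()

If I remember the paper right, Fawkes budgets the perturbation with a perceptual metric (DSSIM) rather than a plain L-infinity bound, but the loop has the same shape: push the embedding toward a decoy while keeping the visible change to the photo small.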


Do you know if this is still being worked on? The last "News" post from the link was 2022. Looks interesting.



