
If your threat model includes pervasive spying by multiple nation states, and being grabbed in the night by black helicopters, it seems unlikely you'll be overly concerned about them precisely inserting at least 30 of your photos into multiple CSAM databases and also coercing Apple's manual review to get you reported to NCMEC.


I don't think people are worried about multiple nation states framing them with actual CSAM photos - they're worried about multiple nation states, in an intelligence collaboration, poisoning both sets of hash lists with non-CSAM material so that it survives the intersection and makes it onto the device.
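The intersection mentioned above is the crux: only hashes present in both jurisdictions' lists ship to devices, so poisoning requires both lists to be compromised. A toy sketch of that filtering step (function names and hash values are illustrative, not Apple's actual implementation):

```python
# Hypothetical sketch of the cross-jurisdiction intersection safeguard.
# Hash values and list contents are made up for illustration.

def build_on_device_db(db_a: set[str], db_b: set[str]) -> set[str]:
    """Only hashes present in BOTH jurisdictions' lists are shipped."""
    return db_a & db_b

ncmec_list = {"h1", "h2", "h3", "planted_hash"}   # first jurisdiction
other_list = {"h2", "h4", "planted_hash"}          # second jurisdiction

shipped = build_on_device_db(ncmec_list, other_list)

# "planted_hash" only survives if BOTH databases were poisoned -
# the collusion scenario described above.
assert "planted_hash" in shipped
assert "h1" not in shipped  # present in only one list, filtered out
```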

There is still that Apple human reviewer once the threshold has been passed. What I would love to ask Apple is: what happens if/when their reviewers start noticing that political material, religious material, etc. is being flagged on a consistent basis, suggesting that the hash list has been poisoned? What's their play at that point?


The document states that incorrectly flagged items are forwarded to engineering for analysis. Given their target false-positive rate (1 in 3 trillion, was it?) it seems likely that engineering would very carefully analyze a rush of false positives.


I think you have this backwards.

Given that nations are already "grabbing people in the night with black helicopters" (figuratively, at least), and doing so with impunity, it doesn't seem much of a stretch to imagine they'd set someone up using this much milder sort of approach.



