> it's not obvious how we can trust that a rogue actor (like a foreign government) couldn't add non-CSAM hashes to the list to root out human rights advocates or political rivals. Apple has tried to mitigate this by requiring two countries to agree to add a file to the list, but the process for this seems opaque and ripe for abuse.
If the CCP says "put these hashes in your database or we will halt all iPhone sales in China", what do you think Apple is going to do? Is anyone so naive that they believe the CCP wouldn't deliver such an ultimatum? Apple's position seems to completely ignore recent Chinese history.
Related, the Indian Government (Telecom Department) bullied Apple into building an iOS feature for reporting phone calls and SMS by threatening to stop iPhone sales in India.
I'm so done. I'm sorry to dump a pointless rant like this on HN but... what the hell is going on these days? Nobody seriously seems to care about legitimate privacy concerns anymore. If I were in a position of power, like being CEO, CTO, or even just an engineer on the team at Apple that implemented this, I'd do EVERYTHING to make sure that my power is in check and that I'm not pushing a fundamentally harmful technology.
I just feel so lost and powerless these days, I don't know how much longer I can go on when every piece of technology I own is working against me - tools designed to serve a ruling class instead of the consumer. I don't like it one bit.
What is going on is that reality is slapping some techno-utopians in the face and they are shocked, shocked, that governments are more powerful than businesses.
That's not at all what the lefty geeks learned by reading Chomsky or what the righty geeks learned by reading Heinlein.
All along these people thought algorithms and protocols (e.g. bitcoin and TCP/IP) would somehow be a powerful force that would cause governments to fall on their knees and let people evade government control. After all, it's distributed! You can't stop it!
Well, that was all very foolish, because they mistook government indifference toward something for government being powerless to control it. When governments did start taking an interest, it turned out that protocols and algorithms are no defense against the realities of political power. It is to the field of politics, and not the field of technology, that one must turn in order to increase collective freedoms. Individual freedom can be increased by obtaining money or making lots of friends, but collective freedom cannot; it can only be increased by organizing and influencing government.
Bingo. I wish I could upvote this comment more. All the geeks get distracted by words like “cloud” or “virtual” and forget that all this stuff we depend on has a physical presence at some point in the real world. That physical presence necessitates humans interacting with other humans. Humans interacting with humans falls squarely in the “things governments poke their noses into” bucket. It’s like the early days of Napster when people were all hot for “peer to peer”, as if that tech was some magic that was going to make record labels and governments throw up their hands over copyrights.
Maybe we could make this framework future-proof by using blockchains? Somehow? Maybe it can use blockchains, or it can be stored on a blockchain, or maybe both at the same time. Surely that will help society in some nonspecific, ambiguous manner.
Remember the people who, decades after the invention of the Internet, kept on insisting that it was useless and only for porn addicts?
Remember the people who, after the invention of the phone, insisted that it was a nice trick but probably only useful for a few businessmen with dictation needs?
Yeah, they all had to change their tone at some point, under the shame of having been wrong for so long.
> All along these people thought algorithms and protocols (e.g. bitcoin and TCP/IP) would somehow be a powerful force that would cause governments to fall on their knees and let people evade government control. After all, it's distributed! You can't stop it!
But that's the underlying problem here. Apple isn't a standardized protocol or a distributed system. It's a monolithic chokepoint.
You can't do this with a PC. Dell and HP don't retain the ability to push software to hardware they don't own after they've already sold it and against the will of the person who does own it.
People pointed out that this would happen. Now it's happening. Qué sorpresa.
Dell ships laptops with tons of Dell software, as well as tons of third-party software. Do you really think that, if they wanted to, they couldn't just update one of those pieces of software to enable remote installs?
Hell, Dell has shipped more than one bug that allowed attackers administrator-level access or worse, I wouldn't put it past them to come up with some kind of asinine feature that not only lets them push new software/drivers/whatever to the machine, but lets attackers do so as well.
> All along these people thought algorithms and protocols (e.g. bitcoin and TCP/IP) would somehow be a powerful force that would cause governments to fall on their knees and let people evade government control. After all, it's distributed! You can't stop it!
The internet and its design and associated protocols were designed to work around external forces - a nuclear attack or natural disaster. It was never designed to be government-proof. People who thought that would be the case were being idealistic and naive.
If you want real change in the world, as you said, you have to affect the political world, which is an option available to any citizen or corporation who can spend millions on lobbyists.
There are many of us that DO care! Unfortunately, even though we are many, we are still a small minority among the general population, or probably even among software developers.
Convenience and fashion tend to trump security and principles for most people. (Oftentimes, I'm one of those people as well, though I try not to be. It's exhausting to be an activist 100% of the time. But let's keep at it!)
I'm as surprised as you are that a giant like Apple doesn't just tell someone as powerless as the government of India "go ahead, ban iPhones, see how popular they'll become." It would be a huge free publicity campaign for Apple in the rest of the world, while the public in India would either put pressure on their government or buy iPhones via import websites.
For additional fun, strike a deal with the #2 non government owned carrier in whichever country you do this to. Offer the iPhone at a special rate for a few months. They would kill the government telco while selling record numbers of phones with free publicity. And at the same time scare any other government into not trying this kind of stunt with Apple ever again.
I wonder how Apple's shareholders would react if the company threw away a market that was worth $1.8bn in revenue in 2020.
Then there's China; 17% of Apple's global revenue, $43.7bn. I don't think shareholders would much appreciate that.
> the public in India would either put pressure on their government or buy iPhones via import websites
The iPhone had 2.97% market share in India in April 2021, down from a high of 3.54% in June 2020. I don't think the people who wanted to buy iPhones but couldn't would be able to put any significant amount of political pressure on politicians.
Rich people would just import them from somewhere else like they always have before, and everyone else would switch to some available Android phone that had the modifications that the government wanted.
How much fun would that be for customers if India then decided to confiscate every iPhone it encounters within India (maybe excepting tourists, but maybe not)?
> I don't know how much longer I can go on when every piece of technology I own is working against me - tools designed to serve a ruling class instead of the consumer.
This made me think of the Butlerian Jihad in Dune:
"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."
I'd say the typical remedy that societies have adopted for these sorts of things is legislation, though regulatory capture[1] is an issue that blocks the way.
Buy Sony Xperia 10 ii, then buy an official SailfishOS package from shop.jolla.com and flash it over Android 11 - enjoy a polished mobile OS without snitches, which you fully control, a real pocket computer instead of a future pocket policeman.
Maybe I did. But what difference does it make? There's plenty of other instances where Apple has reportedly been bullied into action or inaction (being dissuaded from implementing E2EE for iCloud is one example). I've really just reached a breaking point and I'm sorry if logic does not apply.
I'm amazed at how much misinformation is spread. The feature we're talking about here is for reporting spam numbers, which is done by the user, not automatically. This is already widely available on Android.
Correct me if I'm wrong but this feature needs an app install from the app store.
They added a feature which is off by default and allows a user to select a supported installed app to use as a spam reporting app.
IMHO this is great; I wish more countries would enable this feature. Something like 95% of my phone calls are spam, to the point where I just don't answer the phone anymore unless the caller is in my contacts list. Users being able to report them as spam might actually result in this BS finally stopping.
I think few people are making the appropriate parallel. What we’re looking at is not necessarily government overreach, but fascism.
When the hell did it become Apple’s job to do this? Apple is not a branch of law enforcement. The government needs warrants for stuff like this. We are merging corporate and government interests here. Repeat after me, Apple is not supposed to be a branch of law enforcement.
It also says a lot about us, that we are beholden to a product. We have to ditch these products.
> When the hell did it become Apple’s job to do this?
Apple provided a pathway, however unintentionally, to greater power. And those in power used their existing authority to gather even more for themselves, as they always do.
Like drops flow into streams into rivers into oceans, power aggregates at the top until regime change spills it back to the ground.
I get about 10 every single fucking day, super annoying, and they are spoofing numbers too: I get calls from hotels and restaurants in my address book, yet the calls aren't actually coming from them. I hear a series of clicks and then someone asks me about my auto insurance... The moment I hear clicks now, I just hang up, if I answer at all. I am ready to simply give up phones entirely. Fucking complete failure by the telecoms; their entire industry is a consumer failure.
Presumably Apple would be afraid that, say, the EU becomes suspicious, issues a court order to obtain the hashes, notices they cannot audit the CCP hashes, pointedly asks "what is this", becomes absolutely livid that their citizens are spied on by a country that is not them, fines Apple out the wazoo, then extradites whoever is responsible and puts them in prison. I mean, China's not the only player in this. Putting extra hashes to surveil Chinese citizens, yeah, they might do that, but it'd be suicide to put them anywhere else.
The db is encrypted and uploaded to user devices. If each country gets a different db, the payload will be different in each country, which does not make sense if it's all supposed to be CSAM. So Apple would likely just say "these were mandated by the US government for US citizens," putting the ball back in their court, unless they are forbidden to say so, in which case they'll say nothing, but we all know what that means. That's when you know you should change phones and stop using all cloud services, because obviously all cloud services scan for the same thing.
On the flip side, though, at least Apple will have given us a canary. And that's why I don't think Apple will be asked to add these hashes: if the governments don't want their citizens to know what's being scanned server side, pushing the equivalent data to clients would tip their hand. They might just write Apple off as a loss and rely on Google, Facebook, etc.
I feel that it's the kind of scheme that requires too much cooperation from too many people and organizations with conflicting incentives. It's possible some countries would not want the hashes from certain other governments in the db at all. And then what? I may be wrong, but I also believe we can know how many hashes are in the db, which means that if it contains extra hashes from dozens of governments, it would become suspiciously large relative to how many CSAM images we know exist. Furthermore, in this scenario the db cannot be auditable, so the scheme falls apart as soon as some rogue judge decides to order an audit.
I honestly don't think Apple wants to deal with any of that crap and that they would rather silently can the system and do exactly what everybody else does than place themselves in the line of fire when their own unique trademark system is being abused.
Would they rather deal with the CCP shutting off iPhone sales in China? History has shown that the CCP is willing to do that if it comes down to it. (I remind you that at one time, Google was a primary search engine in China.)
It’s blinded in the cryptographic sense. It’s a specific term. I would go into detail, but <reasons>.
Suffice it to say, unless you provide proof, I am reasonably confident there's no way to verify the hash db doesn't contain hashes other than the CSAM hashes provided by the US government. But I've been wrong many times before.
Well, first of all, it's not provided by the US government. It's a non-profit, and Apple has already said they're going to look for another db from another nation and only include hashes that are in the intersection of the two, to prevent exactly this kind of attack.
If what you mean by blinded is that you don't know what the source image is for the hash, that's true. Otherwise Apple would just be putting a database of child porn on everyone's phones. You gotta find some kind of balance here.
What do you mean you can't verify it doesn't contain extra hashes? Meaning that Apple will say here are the hashes in your phone, but secretly will have extra hashes they're not telling you about? Not only is this the kind of thing that security researchers will quickly find, you're assuming a very sinister set of features from Apple that they'll only tell you half the story. If that were the case, then why offer the hashes at all? It's an extremely cynical take.
The reality is that all of the complaints about this system started with this specific implementation, and then, as details got revealed, shifted to future hypothetical situations. I'm personally concerned about future regulations, but those regulations could and would exist independently of this specific system. Further, Dropbox, Facebook, Microsoft, Google, etc. all have user data unencrypted on their servers and are just as vulnerable to said legislation. If the argument is that this is searching your device, well, the current implementation only searches what would be uploaded to a server anyway. If you suggest that could change to anything on your device due to legislation, wouldn't that happen regardless? And then what is Google going to do... not follow the same laws? Both companies would have to implement new architectures and systems for complying.
I'm generally concerned about the future of privacy, but I think people (including myself initially) have gone too far in losing their minds.
At no point is anyone besides Apple able to view any NeuralHash hashes from the CSAM database. You can verify the database is the same on all iPhones, but you are not able to look at any of the hashes.
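Apple's actual scheme uses elliptic-curve blinding inside a private set intersection protocol, so the real construction is more involved; as a toy sketch of just the property described above (every device gets byte-identical data, but the raw hashes are unreadable without a server-side secret), here is a keyed-HMAC stand-in, with all keys and hash values made up for illustration:

```python
import hmac
import hashlib

server_key = b"server-secret"         # never leaves Apple's servers

def blind(h: bytes) -> bytes:
    """Keyed transform: without server_key, a blinded entry reveals
    nothing about the underlying NeuralHash it was derived from."""
    return hmac.new(server_key, h, hashlib.sha256).digest()

# Hypothetical raw NeuralHash values (placeholders, not real hashes).
csam_hashes = [b"\x01" * 32, b"\x02" * 32]

# Each device stores only the blinded entries.
device_db = sorted(blind(h) for h in csam_hashes)
other_device_db = sorted(blind(h) for h in csam_hashes)

# Two devices can compare their databases byte-for-byte...
assert device_db == other_device_db
# ...but neither can invert the blinded entries back to raw hashes.
```

The point is that "same everywhere" and "inspectable" are separate properties: users get the first, only Apple gets the second.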
Right, but perhaps I'm not understanding what the complaint is here.
Is the issue that you want to take a given photo that you believe the CCP or whomever is scanning for, compute a NeuralHash, and then see if that's in the db? Or are you wanting to see if your db is different from other phones' dbs? Because I think the latter is the one most people are concerned about.
Having just read the CSAM summary pointed to by a child comment here, I now have a better understanding of what you meant by blinded. But I don't think that changes any of my points.
There are many functions to which cryptographic blinding is applied, but they each rely upon multiple parties to compute the function in question. In that way, the input and output are blinded to a single party.
Yeah, but if they tell us they're doing that, then it's pretty obvious what they're up to. And if they don't tell us they're doing that, but do it anyway - then they have to perpetually pay every developer involved in that upgrade enough money to keep their mouth shut indefinitely - knowing that the developers know that APPLE knows how much they'd lose in fines if they got caught. Which is an unreasonably large liability, IMO.
It would be good, I think, if people read Apple's threat assessment before calling it "pretty trivial":
> • Database update transparency: it must not be possible to surreptitiously change the encrypted CSAM database that’s used by the process.
> • Database and software universality: it must not be possible to target specific accounts with a different encrypted CSAM database, or with different software performing the blinded matching.
I mean, you can argue that Apple's safeguards are insufficient etc., but at least acknowledge that Apple has thought about this, outlined some solutions, and considers it a manageable threat.
ETA:
> Since no remote updates of the database are possible, and since Apple distributes the same signed operating system image to all users worldwide, it is not possible – inadvertently or through coercion – for Apple to provide targeted users with a different CSAM database. This meets our database update transparency and database universality requirements.
> Apple will publish a Knowledge Base article containing a root hash of the encrypted CSAM hash database included with each version of every Apple operating system that supports the feature. Additionally, users will be able to inspect the root hash of the encrypted database present on their device, and compare it to the expected root hash in the Knowledge Base article. That the calculation of the root hash shown to the user in Settings is accurate is subject to code inspection by security researchers like all other iOS device-side security claims.
> This approach enables third-party technical audits: an auditor can confirm that for any given root hash of the encrypted CSAM database in the Knowledge Base article or on a device, the database was generated only from an intersection of hashes from participating child safety organizations, with no additions, removals, or changes. Facilitating the audit does not require the child safety organization to provide any sensitive information like raw hashes or the source images used to generate the hashes – they must provide only a non-sensitive attestation of the full database that they sent to Apple.
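Apple's documents don't publish the exact construction of that root hash, but the verification step they describe can be sketched with a toy Merkle-style root; everything below (the entry values, the tree shape, the published value) is assumed purely for illustration:

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Toy Merkle root: hash each leaf, then combine pairwise
    with SHA-256 until a single root hash remains."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# Placeholder encrypted database entries on a device.
device_db = [b"entry-1", b"entry-2", b"entry-3"]

# The value Apple would publish in the Knowledge Base article.
published_root = merkle_root([b"entry-1", b"entry-2", b"entry-3"])

# A user or auditor recomputes the root locally and compares.
assert merkle_root(device_db) == published_root

# Any tampering (e.g. a targeted extra entry) changes the root.
tampered_db = device_db + [b"extra-entry"]
assert merkle_root(tampered_db) != published_root
```

The design choice is that the check only proves *sameness* against the published root; it says nothing about what the entries mean, which is exactly the residual concern people raise.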
That's a lovely technical solution! It will last right up until the CCP rubber-hoses[0] Apple with "add these hashes or stop selling iPhones in China".
Apple will cave. They've demonstrated the technology works. The CCP is paying attention. Look for this feature to be mandated across all phones soon. Pandora's box has been opened.
> extradites whoever is responsible and puts them in prison
If we lived in a world where the people who make these kind of decisions for companies were actually accountable in this way, life might be better in a lot of different ways. But sadly we do not.
> citizens are spied on by a country that is not them
I thought countries often have under-the-table agreements with one another to explicitly spy on each other's citizens, since it's illegal for a country to spy on its own citizens. It's illegal for the other country too, but it's a lot easier to turn a blind eye to it.
I thought it was already common knowledge that China puts in different hardware backdoors for computers destined to different countries. I remember a while back a news story where China accidentally shipped a box of phones backdoored for China into the US.
I think that you overestimate the EU reaction. Every few years we learn that our European leaders and some citizens have again been spied on by foreign powers, such as the US, and absolutely nothing ever happens.
A friend in the military told me years ago France was the number one hacker of the US gov. It goes both ways.
This may have shifted over time as China, Russia, NK, Iran increase their attacks, but it doesn’t diminish the fact that the EU is also hacking the US without repercussions.
The US is an ally and it is somewhat harder to punish a nation state than a company. Why would Apple take the risk? China can't exactly reveal that they are banning an American company for not spying on American citizens, and it's not clear what convincing pretext they could provide instead, so I don't think they would actually go through with a ban and Apple would probably just call their bluff.
If the CCP says “put this arbitrary software into your next iPhone software update or we will halt all iPhone sales in China,” what do you think Apple is going to do? Isn’t the answer to both questions the same?
If your wifi is enabled and the phone is attached to power, Apple updates it without any request. I've been trying not to update my iPhone, but recently I left the wifi on while charging and it updated.
It's a fair question, but I think the answer is no: the questions are not the same.
As much as Apple wants access to the Chinese market, it would (presumably) draw a line at some point where it would (presumably) have to choose between that market and the US market, if only because the latter is both its legal domicile and the source of most of its talent.
Version A: CCP wants to exploit the hash database, there are lots of ways to do that, bullying Apple is one, any other way gives Apple a "we are looking into it" excuse. "We must comply with local laws, but we will not change our software bla bla."
Version B: CCP wants to exploit iOS, only way to do it is to bully Apple, this forces Apple's hand and very possibly Apple moves production (not just sales) out of China because they no longer trust they will be offered "plausible deniability."
I'm sure there are lots of reasons for that absurd cash reserve, but my best guess is it's to cover the eventuality of B. above; Apple talking about that publicly would be tricky.
> very possibly Apple moves production (not just sales) out of China
As a matter of fact, I am not sure that would be possible: It might well be that no other country has the capacity (machines and labour) to churn out that many iPhones. Would be interesting to hear if anyone has insight on that. (Tim Cook presumably knows...)
This has been Apple's line for quite a while, but over the last ten years I can't believe Mr Cook has not come up with a Plan B, given that the volatility in US-China relations is much more likely to affect iPhones than most other Walmart goods.
I admit it's total speculation, but I think the massive cash reserves are for that: to weather a disruption in production facilities and move production to a more US-friendly location.
It apparently wasn't hard to "discover" the fact that this CSAM database can and will change over time. In fact, Apple explained this in detail as well as how they are attempting to avoid the problem of governments abusing the system. Are you suggesting that a different software update might be even easier to discover?
The CCP already runs iCloud themselves in country so this is a bit irrelevant. (Though I think this kind of capitulation to authoritarian countries is wrong, personally: https://zalberico.com/essay/2020/06/13/zoom-in-china.html)
This policy really needs to be compared to the status quo (unencrypted on-cloud image scanning). When you compare in-transit, client-side hash checks that allow for on-cloud encryption and only occur when using the cloud, it's hard to see an argument against it that makes sense given the current setup.
The abuse risk is no higher than the status quo and enabling encryption is a net win.
These scenarios sound rather like "the wrong side of airlock" stories[1]. Why would China go through an elaborate scheme with fake child-porn hashes, when it can already arrest these people on made-up charges, and simply tell Apple to provide the private key for their phones, so that they can read and insert whatever real/fake evidences they want?
Because they don't know who to arrest yet. The idea isn't to fabricate a charge, it's to locate people sharing politically sensitive images that the government hasn't already identified.
> Because they don't know who to arrest yet. The idea isn't to fabricate a charge, it's to locate people sharing politically sensitive images that the government hasn't already identified.
And maybe even identify avenues for sharing that they haven't already identified and monitored/controlled (e.g. some encrypted chat app they haven't blocked yet).
China does not really need Apple to do much. They already make installation of some apps mandatory by law. Also, some communication must be done with WeChat and so on. They have pretty good grip already.
> They already make installation of some apps mandatory by law. Also, some communication must be done with WeChat and so on.
Can you give some examples of this on the iPhone?
Also, it seems like on the spying front [1] (at least publicly, with Apple), they've preferred more "backdoor" ways to get access over more overt ones, so this scanning feature might encourage asks that they wouldn't have otherwise made.
[1] this contrasts with the censorship front, where they've been very overt
Agreed. If China can force Apple to do almost anything by threatening to ban iPhone sales, why bother with fake CSAM hashes? That just adds an extra step. It's not like the Chinese government needs to take pains to trick anyone about their attitude toward "subversive" material.
In both cases people are stretching to come up with hypothetical scenarios about how these systems could be abused by a government ("they could force Apple to insert non-CSAM hashes into their database" or "they could force Google to insert a backdoor into your app") while completely ignoring the elephant in the room: if a government wanted to do these things, they already have the power to do so.
If your concern is that a government might force Apple or Google to do X or pull product sales in their country, whether Apple performs on-device CSAM scanning vs scanning it on their servers, or whether Google signs your app vs you signing it doesn't materially change anything about that concern.
The outrage around this particular situation is even more confusing to me because you can opt out entirely by disabling iCloud Photos, and if you were already using iCloud Photos then the scanning was already happening on Apple's servers anyway, so the only actual change is that the scan now occurs before instead of after the upload.
Exactly. Apple can already ship literally any conceivable software to iPhones. Do people really think their plan was to sneak functionality into this update and then update the CSAM database later, and they would have gotten away with it if it weren't for the brilliant privacy advocates pointing out that this CSAM database could be changed over time? That's pretty ludicrous. If the Chinese government wanted to (and thought it had sufficient leverage over Apple), they could literally just tell Apple to issue a software update that streams all desired private data to Chinese government servers.
Not quite. Those are still ostensibly servers located in China but not directly controlled by the government (edit: apparently the hosting company is owned by Guizhou provincial government). But yes, this is precisely my point. Any slippery slope argument about Apple software on iPhones is equivalent to any conceivable slippery slope argument about Apple software on iPhones. If you're making one of these arguments, you're actually just arguing against Apple having the ability to issue software updates to iPhones (and by all means, make that argument!).
China's laws are such that there's no need for them to obtain a warrant for data housed on servers of Chinese companies. Not only do they not need a warrant but companies are required to facilitate their access. While the servers aren't controlled by the Chinese government, government law enforcement and intelligence agencies have essentially free access to that data.
> ostensibly servers located in China but not directly controlled by the government
"ostensibly" is the key word there. If the datacenter is physically located in China, then there's a CCP official on the board of the company that controls it.
So your argument boils down to since Apple can already install software without us knowing, we shouldn't worry about a new client-side system that makes it substantially easier for nation states to abuse? I don't find that argument the least bit compelling.
I’m not saying that we shouldn’t be concerned with Apple actually launching things that are bad. I’m saying we shouldn’t make arguments of the form “this isn’t bad yet, but they could change this later to make it bad.” Because obviously they can change anything later to be bad. If the system as currently described is a violation of privacy, or can be abused by governments, etc. then just make that argument.
Because Apple has already built that functionality, and it exists? What alternative dragnet currently exists to identify iOS users who possess certain images? This would be code reuse.
The point of China or any government adding collisions would be to use Apple's system as a dragnet to find users possessing the offending images.
The way it would work is that the government in question would submit legitimate CSAM, but modified to produce a collision with a government target image. Looking at the raw image (or a derivative), a reviewer at Apple or ICMEC would see a CSAM image; the algorithm would see the anti-government image. So Apple scans Chinese (or whoever's) citizens' libraries, finds "CSAM", and reports them to ICMEC, which then reports them to the government in question.
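As a toy illustration of that collision scenario: NeuralHash is a real perceptual hash, but the bucketed-mean "hash" below is a deliberately weak stand-in, and the "images" are made-up pixel lists, chosen only to show how two different inputs can share one hash value:

```python
def perceptual_hash(pixels: list[int]) -> int:
    """Toy perceptual hash: bucket the mean pixel value into one of
    16 coarse buckets. Real perceptual hashes are far more complex,
    but any of them maps many images onto a single value, which is
    the property a collision attack exploits."""
    return sum(pixels) // len(pixels) // 16

target_image  = [200, 210, 190, 205]   # image a government wants to locate
crafted_image = [180, 230, 170, 225]   # different content, same mean bucket

# The crafted image is submitted to the database as purported "CSAM".
blocklist = {perceptual_hash(crafted_image)}

# The on-device matcher only sees hash values, so possession of the
# *target* image trips a match, even though the submitted image looked
# unrelated (or looked like real CSAM) to any human reviewer.
assert perceptual_hash(target_image) in blocklist
```

The human review step checks the *submitted* image, while the matcher fires on anything sharing its hash; that gap is the whole attack.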
Every repressive government, and some notionally liberal governments, will eventually do this. It is likely already happening with existing PhotoDNA systems. The difference is that those are used by explicit sharing services, whereas Apple's new system will search any photo in a user's library regardless of whether it was "shared" explicitly.
> So Apple scans Chinese (or whoever) citizens libraries, finds "CSAM" and reports them to ICMEC which then reports them to the government in question.
If Apple finds that a particular hash is notorious for false positives, they can reject it / ask for a better one. And they’re not scanning your library; it’s a filter on upload to iCloud. The FUD surrounding this is getting ridiculous.
Look, I said it in another post, it is not Apple’s job to act as an arm of law enforcement. The same way it is not either of our jobs to be vigilante sheriffs and police the streets.
We’re talking about a company that makes phones and computers, and sells music and tv shows via the internet. Does that matter at all?
How about this: all car manufacturers must now wirelessly report, in real time, whenever the driver of the car is speeding. How about that?
Let’s just go all out and embed law enforcement into all private companies.
This is fascism, the merging of corporations and the government.
Have we established that a US NGO is accepting "CSAM" hashes from China, or that they are cooperating with them at all? That seems unlikely, and Apple hasn't yet announced how they're going to scan phones in China. Wouldn't China just demand full scanning capability over anything on the phone outright, since you have no protection at all from that in China anyway?
> Have we established that a US NGO is accepting "CSAM" hashes from China or that they are cooperating with them at all?
I believe Apple's intention is to accept hashes from all governments, not just one US organization. One of their ineffectual concessions to the criticism was to require that two governments provide the same hash before they'd start using it.
China can definitely find a state government requiring some cash injection to help push the hash of a certain uninteresting square where nothing happened into the db
Sure, but Apple receives far less backlash if the system is applied to all phones and under the guise of "save the children". This would allow Apple to accommodate any nation state's image scanning requirements, which guarantees their continued operation in said markets.
The main announcement was Apple was getting hashes from NCMEC but they also listed ICMEC and have said "and other groups". Much like the source database for the image hashes the list of sources is opaque and covered by vague statements.
Maybe, but it probably stretches farther back than that, maybe even to before sliced bread or cool beans. Ten years before The Hitchhiker's Guide there was a robot, HAL, who wouldn't open the airlock for a particular astronaut.
They wouldn't. They would force Apple to add hashes for things the CCP doesn't like, such as Winnie the Pooh memes, and turn Apple's reporting system into yet another tool to locate dissidents. How would Apple know any different? Here are some hashes; they're for CSAM, trust us. They built a framework where they will call the cops on you for matching a hash value. Once governments start adding values to the database, Apple has no reasonable way of knowing what images those actually relate to. Apple themselves said they designed it so you couldn't derive the original image from the hash. They are setting themselves up to be an accessory to killing political dissidents.
I would expect Apple to say the same thing if the CCP proposed a system of scanning devices last month. I fail to see how this system changes the calculus for how Apple will deal with authoritarian governments.
If Apple could stand up to them before this system, why can't they stand up to them with this system?
The difference is the ease with which they can demur. Before, it would be a whole heck of a lot of new, additional work. They also have the problem of actually introducing it without being noticed, or having to come up with some cover for the new behavior.
Now? Well now it's real simple. It will even conveniently not expose the actual images it's checking for. Apple now has significantly less ability to rationally reject the request based on the effort and difficulty of the matter.
Even Apple's own reasons to reject the request have decreased. It would have legitimately cost them more to fulfill this request before, even if China did want to play hardball. Now, they have greater incentive to go along with it.
As far as I'm aware, this system is not new. It is only moving from the cloud to the local device. If the cloud was already compromised, which it seems like it would be in your logic since all the same reasoning applies, I don't understand the complaints about it moving locally.
In my mind there are two possible ways to view this:
1. We could trust Apple last month and we can trust them today.
2. We couldn't trust Apple last month and we can't trust them today.
I don't understand the mindset that we could trust Apple last month and we can't trust them today.
>It is only moving from the cloud to the local device.
But isn't that exactly why this is such a big deal? It sets a precedent that it's ok for software to scan your local device for digital contraband. Sure, right now it's only for photos that are going to be uploaded to iCloud anyway. But how long before it scans everything, and there's no way to opt out?
I don't see this as so much a question of Apple's trustworthiness; I see it as a major milestone in the losing battle for digital privacy. I don't think it will be long before these systems go from "protecting children from abuse" to total government surveillance, and it's particularly egregious that it's being done by Apple, given their previous "commitment to privacy".
>But how long before it scans everything, and there's no way to opt out?
Do we think this is detectable? If yes, then why worry about it if we will know when this switch is made? If not, why did we trust Apple that this wasn't happening already?
That is the primary thing I don't understand, this fear rests on an assumption that Apple is a combination of both honest and corrupted. If they are honest, we have no reason to distrust what they are saying about how this system functions or will function in the future. If they were corrupted, why tell us about this at all?
It feels like you're viewing this as a purely hypothetical question and ignoring reality. No company is 100% good or bad, and it doesn't make any sense to force all possible interpretations into good/bad.
>If not, why did we trust Apple that this wasn't happening already?
I do not trust Apple. I don't really trust any major tech company, because they put profit first and everything else second. I believe that a company as large as Apple is already colluding with government(s) to surveil people: if a money-making organization is offered a government contract that involves handing over already-collected data, and everyone involved keeps it secret, why would they refuse? I know that's very cynical, but I can't see it any other way.
But that's beside the point, which is that what Apple is doing paves the way for normalizing mass government surveillance on devices that we're supposed to own.
>If they were corrupted, why tell us about this at all?
So that we can all get used to it, and not make a big fuss when Google announces Android will do the same thing. It's much easier to do things without needing to keep them secret. This is in no way only about Apple; they're just breaking the ice, so to speak.
>I do not trust Apple. I don't really trust any major tech company, because they put profit first, and everything else comes second.
Then you should never have been using a closed system like Apple's, in which they control every aspect of it. That is my fundamental point. I'm not saying you should trust Apple. I am saying this shouldn't have changed your opinion of Apple.
>So that we can all get used to it, and not make a big fuss when google announces android will do the same thing. It's much easier to do things without needing to keep them a secret. This is in no way only about apple, they're just breaking the ice so to speak.
I just need more evidence before I believe a global conspiracy that requires the coordination between both adversarial governments and direct business competitors.
You're viewing trust in Apple as a binary choice. It is not. Trust is a spectrum, like most things. You need to get away from that digital thinking. It's the whole reason we have to challenge government and be suspicious of it. It's the same with companies.
I view trust more as a collection of binary choices than as one single spectrum. Do I trust Apple to do X? There are only two possible answers to that (or I guess three, if we include "I don't know"). If the answer isn't binary, then X is too big.
In this instance the specific question is "Do I trust Apple to be honest about when they scan our files?". I don't know why this news would change the answer to that question.
Are we going to need to reverse engineer every single Apple update to make sure the feature hasn't crept into non-iCloud uses? Is the inevitable Samsung version of this system going to be as privacy-preserving? How are we sure the hash list isn't tainted? All of these problems are solved by one principle: Don't put the backdoor code in the OS to begin with.
>It is only moving from the cloud to the local device.
That's the point. Yesterday someone posted a fantastic TSA metaphor where they are doing the same scans and patdowns but with agents permanently stationed in the privacy of your home where they pinkie promise it will only be before a trip to the airport and only checking the bags you will be flying with.
You know food poisoning is dangerous and you'll be safer with a food taster to make sure nothing you eat is spoiled. I'll just help myself to your domicile and eat your food to make sure it's all safe. I already made a copy of your keys to let myself in. It's for your own good.
> The difference is the ease with which they can demur.
If Apple can be cowed by China into adding fake CSAM hashes by threat of banning iPhone sales, they could be cowed to surveil Chinese citizens in the search for subversive material. It's no skin off China's back if it's harder for Apple -- they'll either make the demand or they won't. This changes basically nothing.
It's kinda true, but ignores how humans really work. Apple will be pushed around to a degree, but there will be limits. The harder the ask now the less China can ask later. And the more Apple can protest about the difficulty and impossibility and other consequences they will face, the more likely China is to back off.
Both sides want to have their cake and eat it too, and will compromise to make it basically work. But if China makes demands so excessive they get Apple to cut ties, China loses. Apple has the money, demand, customer loyalty, and clout to make things real uncomfortable. Apple would have to pay a hefty price, but if any company can do it... it's them.
So I don't think it's fair to say that no matter what China will just demand whatever whims strike it each day and everybody will play ball or gtfo. That just isn't how shit works.
Apple does have some degree of leverage over the CCP too. I realize it's not possible today... but in 3-5 years, Apple may be in a position to move some/all of their manufacturing elsewhere.
The direct job losses are one obvious problem for the CCP but a company like Apple saying "We're moving production to Taiwan/Vietnam/US because of security risks in China" would be catastrophic for the (tech) manufacturing industry as a whole in China. No sane Western based CEO will want to be seen taking that security gamble.
Do I think Apple would do that and forgo the massive Chinese smartphone market? That's another story.
Tin-foil hat time: Who's to say that they could stand up to them before this system? The system itself could have been proposed by the CCP in the first place. I'll take my hat off now.
>> it's not obvious how we can trust that a rogue actor (like a foreign government) couldn't add non-CSAM hashes to the list to root out human rights advocates or political rivals. Apple has tried to mitigate this by requiring two countries to agree to add a file to the list, but the process for this seems opaque and ripe for abuse.
> If the CCP says "put these hashes in your database or we will halt all iPhone sales in China", what do you think Apple is going to do? Is anyone so naive that they believe the CCP wouldn't deliver such an ultimatum? Apple's position seems to completely ignore recent Chinese history.
Apple policy << Local laws and regulations. It's very hard to believe their policy is anything less than a deliberate PR smokescreen meant to disarm critics, because it has so many holes.
Edit: just thought of another way Apple's policy could be easily circumvented and therefore cannot be regarded as a serious proposal: get two countries to collaborate to add politically sensitive hashes to the list (e.g. China and North Korea, or China and Cambodia). That doesn't even require Apple to be coerced.
Apple’s trying to mitigate this by holding themselves to a more internationally focused standard: matching only against hashes provided by at least two separate sources of CSAM hashes, if my understanding of their announcements/follow-ups is right.
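A minimal sketch of that "two independent sources" rule, and of why colluding governments defeat it (the names and hash values are made up for illustration):

```python
from collections import Counter

# Only hashes vouched for by at least two separate source lists are
# used for matching. Two colluding jurisdictions (or one seeding an
# ally's list, as suggested elsewhere in this thread) satisfy the
# rule trivially.
def usable_hashes(source_lists):
    counts = Counter(h for hashes in source_lists for h in set(hashes))
    return {h for h, n in counts.items() if n >= 2}

us    = {0xAAAA, 0xBBBB}
eu    = {0xBBBB, 0xCCCC}
china = {0xBBBB, 0xDEAD}
ally  = {0xDEAD}            # hash China asked an ally to also submit
assert usable_hashes([us, eu, china, ally]) == {0xBBBB, 0xDEAD}
```

The planted hash 0xDEAD clears the two-source bar just as easily as a legitimately shared one.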
Separately, the US has even greater pressure available against Apple should it want to unilaterally add database images, considering it has an actual chance and the means to jail (and run through the legal wringer) whomever tells them ‘no’. And that’s just the overt pressure available; I think this is the more likely potential trust violation here, even if both could come to pass.
I think one of the other stories on this talked about "watermarking" in order to create a hash collision. So it need not be a non-CSAM image: a TLA could just alter an image to make it collide with a file they want to track, other countries would agree that file's hash should be in the hash list, and bingo: Apple presumably provides the TLA with a list of devices holding that file.
Except that there's a threshold involved. A single matching file doesn't trigger an investigation; it takes multiple (10+, maybe more?) matches to do that.
Either way, it's high enough that adding a single file to the set wouldn't be a useful way of finding people who have that file. One attack I can imagine would be to add a whole set of closely related photos (e.g. images posted online by a known dissident) to identify the person who took them, and even that would be a stretch.
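The threshold logic described above can be sketched in a few lines; the threshold value and function names here are illustrative (the comment above guesses 10+, and Apple has mentioned figures around 30):

```python
# Illustrative sketch of threshold-based flagging: individual hash
# matches do nothing on their own; only once the number of matches
# in a library crosses the threshold is anything surfaced for
# human review.
THRESHOLD = 10  # assumed value; the real figure is Apple's to choose

def flagged_for_review(library_hashes, blacklist):
    """Count matches against the blacklist; flag only past the threshold."""
    matches = sum(1 for h in library_hashes if h in blacklist)
    return matches >= THRESHOLD

blacklist = set(range(1000, 1100))
assert not flagged_for_review(range(1000, 1005), blacklist)  # 5 matches
assert flagged_for_review(range(1000, 1050), blacklist)      # 50 matches
```

This is why planting a single file in the database is a poor way to find one target: the attacker needs the victim to hold many distinct matching files, which is what the "set of closely related photos" attack described above tries to arrange.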
> If the CCP says "put these hashes in your database or we will halt all iPhone sales in China", what do you think Apple is going to do?
Or maybe China already said "put in this CSAM check or you can't make or sell phones in China".
Since Apple's position is contrary to their previous privacy policy and doesn't seem to make a lot of sense, it's quite possible extortion already happened (and not necessarily by China).
It wouldn't specifically be from domestic intelligence, it would be from a powerful member of Congress with a relationship to Apple (specifically the board/management), acting as a go-between that would try to politically lay out the situation for them.
Hey Apple, we can either turn up the anti-trust heat by a lot, or we can turn it down, which is it going to be? Except it would be couched in intellectually dishonest language meant to preserve a veneer that the US Government isn't a violent, quasi-psychotic bully ready to bash your face in if you don't do what they're asking.
The interactions with intelligence about the new program would begin after they acquiesce to going along.
It's too easy. There's an extraordinary amount of wilful naivety in the US about the nature of the government and its frequent power abuses (what it's willing to do), despite the rather comically massive demonstration of said abuses spanning the entire post WW2 era. Every time it happens the wilfully naive crowd feigns surprise.
What about giving a censored version of the appropriate image? Like put a big black rectangle covering whatever awful thing is the subject, and just (e.g.) show some feet and hands and a background.
Then you could provide a proof that an image which is the same as the "censored" one, except for the masked part, has the perceptual hash specified. I don't know if this is technically feasible (but I'd be happy for someone knowledgeable to opine). I also admit there are secondary concerns, like the possibility of someone recognising the background image and using it to identify the location, or tipping off a criminal.
Probably it would only be appropriate to do this in the case of someone being accused, and maybe then in a way where they couldn't relay the information, since apparently they don't want to make the hash database public.
Also, for the record, I'm spitballing here about infosec. This isn't me volunteering to draw black boxes or be called by anyone's defense.
Why does the CCP even need to talk to Apple? They have a database of CSAM, and can modify an image in this set to collide with a special image they're looking for on people's phones. They then share their new modified cache of CSAM with other countries ("hey, China is helping us! that's great!") and it gets added to Apple's database for the next iOS release, because it looks to humans like CSAM. Only the CCP knows that it collides with something special they're looking for.
Now that we know that collisions are not just easy but happen by themselves with no human input (as evidenced by the ImageNet collisions), we know this system can't work. Apple has two remaining safeguards: a second set of hashes (they say), and human reviewers.
The human reviewers are likely trained to prefer false-positives when unsure, and so while a thorough human review would clearly indicate "not CSAM" for the images the malicious collisions match, it doesn't feel like much of a safeguard to me. (Remember the leaked memo from the US's organization -- they called our objections "screeching voices". I'm sure the actual reviewers think similarly.)
I assume the people reviewing CSAM for the CCP will be in China, so they can be in on the whole scheme. (In the US, we have slightly better checks and balances. Eventually the image will be in front of a court and a jury of your peers, and it's not illegal to have photos that embarrass the CCP, so you'll personally be fine modulo the stress of a criminal investigation. But that dissident in China, probably not going to get a fair trial -- despite the image not being CSAM, it's still illegal. And Apple handed it directly to the government for you.)
I don't know, I just find this thing exceedingly worrying. When faced with a government-level adversary, this doesn't sound like a very good system. I think if we're okay with this, we might as well mandate CCTV in everyone's home, so we can catch real child abusers in the act.
Apple's CSAM detection has nothing to do with this. The vector for an authoritarian government getting blacklists into tech is that government telling the tech vendor "ban this content. We don't care how."
Look, if the government tells Apple to do something, then Apple can push back, but then has to do it or pull out of the country. That's the way it was, and is.
Now, what has actually changed? The two compelling push-back against a country's demands for more surveillance etc. are:
a) it is not technically feasible (eg, wiretapping E2EE chats), and
b) it is making the device insecure vis-a-vis hackers and criminals (eg, putting in a backdoor)
The interesting question is: Have these defences against government demands been weakened by this new technology? Maybe they have, that would be my gut feeling. But it is not enough to assert it, one must show it.
At this point, with all the easily producible collisions, the Gov't could just modify some CSAM images to match the hash of various leaked documents/etc they want to track. Then they don't even have to go thru special channels. Just submit the modified image for inclusion normally! (Not quite that simple, as they would still need to find out about the matches, but maybe that's where various NSA intercepts could help...)
Not quite: a CSAM hash match has to pass a second, independent hash check within Apple to weed out false positives, and then a human review. It wouldn't be trivial for them to extract matches out of that, and they'd only be able to track files whose contents they already know.
I would think they could more easily just make your phone carrier install a malware update on your phone, rather than jumping through all of these hoops to get them access they already have.
Plenty of data is leaking out of people's phones already as can be seen from, e.g. the Parler hack.
I tried to address the issue with finding out about the match at the end of my comment. I agree it's not exactly practical without other serious work to intercept the alerts, have 'spies' in the apple review process, etc. Much easier ways would exist at that point, but it's somewhat amusing (in a horrifying way) that some bad actor could in theory use modified CSAM as a way to detect the likely presence of non CSAM content using generated collisions.
All of the major tech companies already scan images uploaded to their services so isn't this already theoretically possible now? How is the situation changed by Apple using on-device scanning instead of cloud scanning (considering these images were going to be uploaded into iCloud anyway).
Of course Apple would do it. They’d look like fools for saying they want to stop CP, then refusing to listen to the government of well over a billion people when it says “Please ban this newly produced material”.
At best, they would look biased. At worst, they would be sending a signal that they don’t care about Chinese children.
You wouldn’t even need government strong arming, mobs of netizens would happily tear Apple down.
It seems like it wouldn't take that. If you can generate a colliding pair of images, you could probably create a pair where one of the images might get attention with child porn groups and thus, shared around enough to end up in the CSAM database. And where the other was innocuous.
The only way this matters [today] though is if apple turns it on for all pictures, not just icloud ones. Presumably "Chinese iCloud" already scans uploaded photos cloud side.
Unless the goal is to simplify the effort/expense of scanning by making it a client-side process.
From the post yesterday discussing collisions, it doesn't seem outside the realm of possibility to take an image of CSAM and modify it until its hash matches another wanted target, either.
Or why not "Hey Vietnam, Pakistan, Russia, etc put these hashes into your database please and thanks." I mean the CCP has allies that are also authoritarian. Why would they have to threaten Apple directly? This is also how you get past the Apple human verification. Just pay those Apple workers to click confirm.
They'd do it directly because it's expedient and useful. If you're operating such a sprawling authoritarian regime, it's important to occasionally make a show of your power and control, lest anyone forget. The CCP isn't afraid of Apple, Apple is afraid of the CCP. Lately the CCP has been on a rather showy demonstration of its total control. If you're them it's useful to remind Apple from time to time that they're basically a guest in China and can be removed at any time. You don't want them to forget, you want to be confrontational with Apple at times, you want to see their acknowledged subservience; you're not looking to avoid that power confrontation, the confrontation is part of the point.
And the threat generally isn't made, it's understood. The CCP doesn't have to threaten in most cases, Apple will understand ahead of time. What gets made initially is a dictate (do this), not the threat. If something unusual happens, such as with Didi's listing on the NYSE against Beijing's wishes (whereas ByteDance did the opposite and kowtowed, pulling their plans to IPO), then, given that Didi obviously understood the confrontation risk ahead of time and tested you anyway, then you punish them. If that still isn't enough, you take them apart (or in the case of Apple, remove them from the country).
I'm just saying that there is another avenue. To be clear, this isn't a "vs" situation. It means that they have multiple avenues.
To also clarify, the avenue of extortion isn't open to every country. But the avenue I presented is as long as that country has an ally. I'm not aware of any country not having an ally, so I presume that this avenue is pretty much open to any country.
Could you provide specific evidence that China has and would do this? I’ve a hard time recalling any specific cases. Maybe nation-states do this kind of thing, but I’m only aware of the countless times the United States has done this. What’s the recent history?
Assuming you are American — where do you think your iCloud keys are stored? You do know Apple cooperates with US LE and intelligence? This is a nothing hamburger.
I concede that there are overlapping issues there. But if you're saying there aren't places China goes with this sort of info that's different from the US, I don't think any debate would change your mind.
So far I have no specific reason to think China goes places that are as deeply consequential and chilling than the US. What’s Assange up to these days?
Apple has to store data of chinese citizens in china? And Apple has to adhere to chinese laws in china? How insane.
It's crazy how deluded the "CCP crowd" are. Apparently, the "CCP crowd" thinks companies are allowed to do business in another country and not abide by their laws.
Are you going to go insane since the EU requires tech companies to store EU citizens data within the EU?