> This allows WhatsApp to MITM. WhatsApp can rekey both Alice and Bob, decrypt both their messages from that point onward (including unsent messages), and forward them re-encrypted with their real keys. The only notification might be that rekeying warning, if the users have turned it on. In this scenario even the double checkmarks are present. This is contrary to WhatsApp's claim that even they cannot snoop.
You've just described a "man in the middle" attack. It is endemic to any public key cryptosystem, including Signal and PGP, not just WhatsApp. The notification that you see in WhatsApp, Signal, SSH, PGP, or whatever is the defense.
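The attack described above, and the fingerprint check that defends against it, can be sketched with a toy key exchange. This is a hedged illustration only: plain finite-field Diffie-Hellman over a deliberately tiny prime stands in for WhatsApp's actual X25519-based handshake, and the `fingerprint` function is a stand-in for a real safety-number display.

```python
import hashlib
import secrets

# Toy Diffie-Hellman over a small prime field -- illustration only.
# Real messengers use X25519 plus an authenticated ratchet.
P = 2**127 - 1  # a Mersenne prime; far too small for real use
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_secret(my_priv, their_pub):
    return hashlib.sha256(str(pow(their_pub, my_priv, P)).encode()).digest()

def fingerprint(pub):
    # What a "verify safety number" screen would show for this key.
    return hashlib.sha256(str(pub).encode()).hexdigest()[:12]

a_priv, a_pub = keypair()  # Alice
b_priv, b_pub = keypair()  # Bob
m_priv, m_pub = keypair()  # the server's substitute ("MITM") key

# Honest relay: both sides derive the same secret; the server sees nothing.
assert shared_secret(a_priv, b_pub) == shared_secret(b_priv, a_pub)

# MITM relay: the server hands each side its own key instead. It now
# shares one secret with Alice and another with Bob, and can decrypt and
# re-encrypt every message in between.
assert shared_secret(a_priv, m_pub) == shared_secret(m_priv, a_pub)
assert shared_secret(b_priv, m_pub) == shared_secret(m_priv, b_pub)

# The defense: the fingerprint Alice now sees for "Bob" is really the
# server's, so an out-of-band comparison catches the substitution.
assert fingerprint(m_pub) != fingerprint(b_pub)
```

The point the assertions make is exactly the one above: the substitution is invisible at the protocol level, and the fingerprint comparison is the only thing that exposes it.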
> PS: I just checked on my phone whether those notifications were turned on. They were not. And I'd never turn those off myself, which leads me to conclude that the rekeying notifications are off by default (in their Android app)
Key change notifications are off by default in WhatsApp. That's probably going to be a fundamental limit of any application that serves billions of people from many different demographics all over the world.
Even if they were on by default, a fact of life is that the majority of users will probably not verify keys. That is our reality. Given that reality, the most important thing is to design your product so that the server has no knowledge of who has verified keys or who has enabled a setting to see key change notifications. That way the server has no knowledge of who it can MITM without getting caught. I've been impressed with the level of care that WhatsApp has given to that requirement.
I think we should all remain open to ideas about how we can improve this UX within the limits a mass market product has to operate within, but that's very different from labeling this a "backdoor."
I think it's fair to say that you are the world thought leader on these matters right now.
One thing that the rest of us are wondering right now is:
> I've been impressed with the level of care that WhatsApp has given to that requirement.
To what degree do you really know that? Is there a place where we can read about your interactions with Facebook, the level of access they've given you, and the degree to which they have allowed your recommendations to shape the contours of their implementation?
The strength of dissent hangs in the balance of questions like these.
> I think we should all remain open to ideas about how we can improve this UX within the limits a mass market product has to operate within, but that's very different from labeling this a "backdoor."
I agree that the jump to scary terminology is dangerous.
However, at the end of the day, I think that many of us have been trying to make a simple point that shows that there is a sort of crossing of that line:
WhatsApp claimed that they were simply unable to intercept communications, and now we find out that, without any user interaction or approval, messages which haven't received the "double check" are re-transmitted when a new key is generated.
In some highly specific but easy-to-imagine scenarios (eg, a journalist on the ground in Tahrir Square using WhatsApp to report on conditions, receiving no replies), WhatsApp is hugely vulnerable in a way that most of us didn't think it was.
So look: nobody here is trying to diminish your tireless work and your accomplishments in bringing freedom into the information age.
But there are nuances here that are important, and fleshing them out is a big part of what this community is about.
> But there are nuances here that are important, and fleshing them out is a big part of what this community is about.
The entire point of the crypto community is to maintain as little trust as possible unless you can be highly certain about things.
The media reaction of "OMG WHATSAPP IS FOR SURE NOT SAFE" is a HUGE overreaction. But in an industry where audits and open source are huge factors in trust, WhatsApp doesn't do a whole lot. Phrased better, the article could have done a great job of explaining how to secure yourself and enable the notifications, rather than just fear-mongering.
Let's be honest: Facebook doesn't have a great privacy record. They're an advertising and data-harvesting company. I basically trust them zero. But I trust Moxie a lot (it's possible that he's been bought out by Facebook/the Egyptian government for billions of dollars, but I'm just gonna keep trusting him).
Honestly, Moxie saying that WhatsApp has a decent implementation of Signal does a lot more for my concerns than Facebook saying the exact same thing (though I too would love to know more about how much Moxie knows about WhatsApp). I don't use WhatsApp, but I'm less prone to go "oh yeah, you definitely don't want to use that, it's a Facebook product!" like I would for Skype/MS.
It's reassuring to know that if someone tried this, I could be notified of it, which means it seems like no one would really try this unless it was SUPER worth it (I don't think Facebook is going to MITM me and expose themselves just so they can hear about my weekend drinking plans). So for common folk, I think it would be pretty safe. And if you are talking about things that require serious opsec, definitely turn the notifications on and verify those numbers.
I think you've made a great point here. For many users, the level of privacy that WhatsApp gives is unnecessary, but if you are a person who needs to discuss mission-critical matters over WhatsApp, they give you the possibility to do that safely.
The only problem would then be that they can MITM one message, even if they'd be caught that way. I doubt they'd do that for anything less than world-changing messages, but still, that's the only remaining problem if you enabled the notifications and checked the numbers.
What does trust have to do with this? The trade-off has been clearly explained. As it stands, WhatsApp is great for protecting sexts and low-value conversations if you're not famous (99.99% of everyone), but if you're Snowden, or Hillary, there is no protection - contrary to what has been advertised.
To my understanding, that's simply not true. What you can accurately say is that with key change notifications turned on, any one* message could be exposed without any means of recourse, but subsequent exposures would require user error.
*Question for anyone: could this apply to a "batch" of messages? That is, could servers hold back the delivery of some number of messages and then the attack could be applied to all such undelivered messages? But once the attack took place, the double check would be displayed on the sender's phone and the notification of key change would appear. My understanding is that the answer to the question is 'Yes'.
Very good question, and I haven't seen a definitive answer to it yet.
The responses by Bob are presumably numbered, and some might be delivery receipts, or contain delivery receipts (e.g. a cumulative ACK, as in TCP). Could the server selectively suppress the read receipts, or manipulate the cumulative ACK? If it simultaneously triggered rekeying on Bob's side, presumably yes. But I haven't seen a definitive statement on that.
I've little to add to this, other than the point that the UK's IP Act allows GCHQ (and other UK government agencies) to abuse this issue individually or en-masse against anyone, anywhere, more or less at will.
That's the world we're in now. I respectfully disagree with Moxie's point about key verification. I think the point you raise about easy-to-imagine scenarios would've been laughed away years ago, but it is now not just possible but realistic.
WhatsApp told the original reporter that they had no plans to fix the issue. The question is, in light of mass spying by the intelligence services, what else will WhatsApp not fix?
> The notification that you see in WhatsApp, Signal, SSH, PGP, or whatever is the defense.
That defense, which happens to be the only defense, is turned off by default in WhatsApp.
You seem to argue they do so because it's bad UX to present such notification by default. That's - in my humble opinion - like suggesting browsers should turn off TLS chain errors by default because it's bad UX and just proceed with the connection as if nothing happened...
> That defense, which happens to be the only defense, is turned off by default in WhatsApp.
> You seem to argue they do so because it's bad UX to present such notification by default. That's - in my humble opinion - like suggesting browsers should turn off TLS chain errors by default because it's bad UX and just proceed with the connection as if nothing happened...
One thing we've learned over the years is that security warnings should not be displayed to consumers under "normal" (eg. non-critical) circumstances, otherwise it creates a condition of "warning fatigue."
TLS certificate errors are not something that should happen under normal circumstances. When a TLS certificate fails to validate, something is really wrong. As we've gotten better about ensuring those conditions, browsers have made it harder and harder to get past the warnings, because they're not warnings anymore -- they're error conditions.
Key changes in a messenger are totally different. They happen under normal conditions, so putting them in people's faces by default has the potential to do more harm than good. If we can make them workable, systems like CONIKS or Key Transparency might be in our collective future, but if you don't like systems that are fundamentally "advisory" (don't tell you until after the fact), you're not going to like those new systems at all either.
For now, I think a fact of life is that most people will not verify keys whether the warnings are there or not, so I think what's most important is that the server can't tell who is and who isn't.
I'd love to hear other ideas about how to improve the UX of interactions like this, but I think they have to include a basis in the assumption that we can't fundamentally change human behavior and that we can't just teach everyone in the world to be like us.
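The "advisory" systems mentioned above (CONIKS, Key Transparency) are built around auditable, append-only logs of key changes. A minimal sketch of that idea, under the assumption that a simple hash chain stands in for the Merkle-tree structures those systems actually use:

```python
import hashlib
import json

class KeyLog:
    """Toy append-only, hash-chained log of identity-key changes.

    Sketch of the 'advisory' model: misuse is only detectable after
    the fact, but any tampering with history breaks the hash chain
    that auditors can check.
    """

    def __init__(self):
        self.entries = []

    def append(self, user, key_fingerprint):
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"user": user, "fp": key_fingerprint, "prev": prev}
        body["hash"] = hashlib.sha256(
            json.dumps({k: body[k] for k in ("user", "fp", "prev")},
                       sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)

    def verify_chain(self):
        prev = "0" * 64
        for e in self.entries:
            expected = hashlib.sha256(
                json.dumps({k: e[k] for k in ("user", "fp", "prev")},
                           sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

This captures the trade-off in the comment above: such a log cannot stop a key substitution in real time, but it makes quietly rewriting the record of which keys were served much harder.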
Why not phrase the message differently, e.g. "It looks like (user) is chatting from a new device. Is this correct?"
Warnings about unusual account activity seem to be very common these days, so why not use them here?
The way the warnings are presented as part of the chat history (a very good idea) also means they could be used after the fact to figure out when an account was taken over, even if the warning was initially ignored. I figure even non-technical users would like to know that, after one of their contacts tells them their account was hacked.
Additionally, why is an ignored warning worse than a warning that is suppressed to begin with? That seems to me like a landlord that decides not to install smoke alarms because "the tenants could get used to the sound" - when most of the tenants are not even aware of the concept of "fire".
Finally, I don't find the "it's important the server doesn't know" argument convincing. If you conclude that the vast majority of people don't have the warnings enabled and the cost of hitting someone with warnings is low, snooping remains a very low-risk activity.
Summing up, I think the very least consequence Facebook should take from this is to make the warnings on by default instead of off by default.
> Why not phase the message differently, e.g. "It looks like (user) is chatting from a new device. Is this correct?"
Because of exactly what Moxie said in his post. This is a relatively common occurrence in practice. Someone gets a new device. Or uninstalls/reinstalls the WhatsApp app. Or wants to read messages on their laptop, too. And so on.
Warning everyone about this all the time leads to people becoming subconsciously blind to these notifications — even people who should care about them. The solution WhatsApp has taken is a great compromise in this situation. Not everyone will have the notifications on, but the odds are good that someone whose messages are worth intercepting will. And since the server can't know who has the notifications enabled and who doesn't, it runs the risk of tipping its hand that it's doing this at all.
That's why you include a checkbox underneath with the label "Do not show me this warning in the future (insecure)". And then a setting to turn it back on. It's not rocket science.
This shit is really easy to armchair-quarterback over the Internet, where nobody wins and the points don't matter, but the reality is that figuring out how to design crypto applications in a way that keeps users secure, without users disabling or ignoring sometimes-important security warnings, is a very hard problem. In fact, it may well be the current hardest practical problem in information security.
So yeah, it is actually kind of like rocket science, and I guarantee you that Moxie has spent orders of magnitude more time thinking with, dealing with, and collecting data on this kind of problem than you or I combined.
And we're not Moxie's investor meeting or a Senate hearing committee. This is a layman's discussion thread that he decided to join and answer questions in (big respect to him for doing that). So I believe even "stupid" questions should be allowed if they increase understanding or bring up new points.
Furthermore, this is an argument from authority[1]. Of course there are experts, but even an expert should explain and discuss his rationale in the interest of sharing knowledge (which Moxie is doing here) - otherwise problems like this will stay "hard" for a long time.
I did not chastise GP for asking questions. I chastised GP for his hubris in looking at this problem for all of five minutes and confidently asserting that he has a simple, obvious solution that somehow a literal expert in the field completely missed, then claiming offhand that the problem isn't "rocket science" when in fact it's, in my estimation, one of the hardest practical problems in the entire field. We know far, far more about building secure theoretical cryptosystems than we do about ensuring actual humans use them in a way that doesn't break the seal and void the warranty, so to speak.
And Moxie has explained his rationale in this thread. Argument from authority isn't always wrong — particularly when the other side has no data or theory to back up their claims. For instance, I personally know only a little about the actual mechanisms behind anthropogenic climate change. What I do know supports the notion. But I'd be lying if I didn't acknowledge that the most compelling argument is the near-absolute agreement of 99.9%+ of the actual experts in the matter.
Likewise, in the absence of any obviously compelling evidence validating GP's approach, combined with Moxie's explanation above and my own experience as a security engineer, I'm going to go with the guy with literally decades of both theoretical and operational experience here.
The people who are most likely to be snooped on are also more likely to have the notifications turned on, so I don't think it's such an easy choice for an attacker.
The entities this is designed to thwart are not going to want to risk leaving behind a trail of evidence, even if the risk is small.
It also prevents fishing expeditions, since the risk would quickly add up as more targets were added.
All that said, a one-time prompt to turn on the notifications for users that care about extra-strong security seems like a good idea to me.
The fact of the matter is that when you disable the only defense against MITM by default, you should not claim your stuff is secure and end-to-end encrypted, because it is not. It's really as easy as that.
Warning fatigue, "most" users not knowing how to verify keys or doing it wrong, etc. are indeed hard problems to solve. There are no easy answers here, or somebody would have come up with one already. But just because it's not easy does not mean you're entitled to lie about the security properties of your system to your users.
>WhatsApp's end-to-end encryption ensures only you and the person you're communicating with can read what is sent, and nobody in between, not even WhatsApp. [...] All of this happens automatically: no need to turn on settings or set up special secret chats to secure your messages.
Given that the only defense against a WhatsApp MITM is turned off by default, the "not even WhatsApp"/"automatically: no need to turn on settings" part is just not true.
At one of my jobs the network team uses a thing called "Forcepoint TLS inspection" (aka Websense, aka Raytheon). My browser happily lets that network team MITM me all day long without a peep, and logs and archives all my TLS traffic for who knows how long.
The funny thing is a VM I setup from my same laptop tried to make an https:// connection and the browser outright refused, without any possible workaround until I imported the Forcepoint CA cert.
Security people must love us users so bad. Love you, too! xox
(Note: the same network team imaged the laptop in the first place, and it's against my contract to re-image it. Hence the Forcepoint CA cert's presence in my browser's root chain. I prefer to call this LAN-In-The-Middle.)
This is absolutely standard in the UK financial services industry, and ultimately required for compliance with financial regulators.
The alternatives are running agents on your machine that capture everything you do (which most shops I've been at do as well) and removing local administrative rights to prevent users from removing auditing software and deploying workarounds like your VM (also the norm now).
This has absolutely no bearing on the security of HTTPS/TLS as a whole, the chain of trust is working exactly as it's supposed to in this instance. It's distasteful as an end-user (and even more distasteful as one of the network engineers deploying it, wondering why it's not Information Security's job instead), but you can always quit that job and find another one (yep, that's what I did).
If you are in Europe (or at least some countries in Europe), it's illegal to read in-transit messages even if the recipient is at work and the interceptor is their employer.
Reference? I've worked at several companies claiming they are allowed to do this (which I don't necessarily believe, of course). Has it been tested in court?
Great link, thanks. However, it doesn't back up the claim you made. A few quotes:
"In Europe, there is technically no uniform body of “European law” that directly applies between employers and employees"
"Courts and scholars increasingly reference EU law, usually without clarifying whether the existence of a particular civil right protection in the EU Charter actually changed the legal situation as a matter of law, rather than as a matter of public policy."
There's a lot of fuzziness around implementation of a very loosely worded human rights clause, combined with prior national laws. Mostly aimed at protection from Government. Previous tests have mostly been cases where the individual did not consent or some such thing.
More directly, EC data protection directive hinges on: 1) contractual obligation; 2) consent; 3) statutory obligations; 4) balancing test. It seems highly likely that most business can legally MITM me if I sign the contract they want me to sign.
Most - but not all - of the private sector examples given (including Germany and France) hinge on the employer not following the correct process: either not notifying the employees, not gaining consent, or opting to allow private communications at work which are strictly forbidden from being monitored (in some countries).
That said, there is also:
"A number of EC member states, including Germany, Italy, the Netherlands, Spain, and the United Kingdom, strictly prohibit ongoing monitoring of employee communications and permit electronic monitoring only in very limited circumstances (e.g., where an employer already has concrete suspicions of wrong-doing against particular employees),265 subject to significant restrictions with respect to the duration, mode, and subjects of the monitoring activities"
It's not immediately clear if this applies to specific, targeted monitoring. The footnote gives an example where informing the employee of valid reasons for investigating is sufficient.
(Note: I made no claims, just jumped in to provide references about the state of affairs in some European countries)
The pages I gave are specific case studies of the law in Germany & France. You are right that there is not too much overarching EU level legislation about these things, it's generally in national legislation and up to each country.
Less than 2% of the total staff probably realize that all their https traffic is being intercepted. I find it odd that we try to teach everyone the difference between http and https, and then we do this.
Having started originally with Threema before I gave in to WhatsApp, I kind of like the trust levels it established in the UI. It might be an improvement for the WhatsApp UI to visually downgrade the trust level in case of unexpected key changes.
Besides that, and thinking through this comment by Moxie, I fear he is right. I have a bunch of dead keys listed in my Threema contact list, all from people who are in general quite tech-savvy but were still too lazy to transfer their keys on phone changes. And I've already had to rescan (the QR code) quite a few people whom I meet maybe once a year.
That's for my modest twenty-something Threema contacts. Now think about the not-very-tech-savvy average WhatsApp user with 150+ contacts. Maybe a third of them will change their phone or MSISDN in a year. If you see 50 alerts per year in your chats that something changed, how long will you bother to verify that those changes are valid?
I don't like the defaults chosen by WhatsApp, and once I knew about them I changed the setting. But at WhatsApp's scale I understand the decision they made. You might also add the common argument that in the real world close to nobody gives a shit about the encryption. Since Snowden a few percent more care, but it's still a small minority. So bringing at least some security to the majority that does not care is still a win. Everyone else has to make informed decisions about their own configuration.
> Key changes in a messenger are totally different. They happen under normal conditions
This doesn't have to be the case. If you stop coupling a key to a device and instead couple it to a person (for example, by generating the key deterministically from a password), keys can change far more rarely.
That was just an example. You could also pair the key to a person by some other method, such as storing a copy of it on a storage medium other than their phone.
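The password-based variant suggested above can be sketched as follows. This is a hedged illustration, not anything WhatsApp actually does: the domain tag and the use of the phone number as a salt are assumptions, and a real design would use a memory-hard KDF such as Argon2id and feed the derived seed into an Ed25519/X25519 keypair generator.

```python
import hashlib

def derive_identity_key(password: str, user_id: str) -> bytes:
    """Deterministically derive a 32-byte key seed from a password.

    Sketch only. The salt is the (public) user identifier plus a
    hypothetical domain tag, so the same password always yields the
    same key seed on any device.
    """
    return hashlib.pbkdf2_hmac(
        "sha256",
        password.encode(),
        ("messenger-identity:" + user_id).encode(),  # hypothetical tag
        200_000,  # iteration count; tune upward in practice
        dklen=32,
    )

# The same password reproduces the same key seed on a brand-new phone,
# so reinstalling or switching devices need not trigger a key-change
# warning for the user's contacts.
k1 = derive_identity_key("correct horse battery staple", "+15551234567")
k2 = derive_identity_key("correct horse battery staple", "+15551234567")
assert k1 == k2 and len(k1) == 32
```

The obvious trade-off, which the replies below get at, is that the key is now only as strong as the password, and that users must remember yet another secret.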
Requiring an external storage medium would kill the service. I think you have to separate a service made for the masses from a service focused on security/encryption. For WhatsApp there will be instances where you have to choose between security and convenience, and they have chosen the latter, which is only natural.
There is one passphrase I remember, 5 passwords, 2 PINs, and 2 phone numbers. My password manager and address book remember hundreds of passwords, phone numbers, and emails each.
For some reason everybody uses an address book, and many people let browsers remember passwords, but almost everybody resists the idea of using a password manager and ends up with low-entropy passwords.
> How would these 'trivial' steps look if a telephone gets stolen
Just as 'trivial' as it is for Facebook to swap your key at the request of a government. You should have to start from a blank slate (zero trust) in that situation.
Getting your phone stolen is an extraordinary event that warrants requesting some attention from your contacts, even if only to inform them of the old identity being compromised. And then you might as well have them verify a new key.
Buying a new phone and switching to it
Reinstalling your phone OS because "it's slow"
Reinstalling WhatsApp because "it crashes" or "it's slow"
Swapping a phone because the screen broke or I dropped it into the toilet
I think it's romantic to believe that a billion WhatsApp users can be taught about the risks of MITM attacks and how to do a key check.
This is what I do: I have the warnings turned on. When a key change warning appears, and if I care enough about the person and the discussions we have, I try to match the warning with a real-world event: either I already know that something happened, or I try to remember to ask whether the person repaired or changed their phone. If I can match the warning with such an event, I'm satisfied. Otherwise, I ask for a key check when I meet that person in real life.
It would help if WhatsApp provided a UI to show whether I have verified the current key of each user (something like a green check-mark next to the name) because it's hard to remember.
That is basically how 2FA works with Apple devices. You use an old device to approve new ones. Sure, if you lose your cloud account, laptop and phone all at once you'll need to start from scratch. But under normal circumstances it reduces the amount of blind trust.
Moxie, what about showing a positive UI for users you have verified keys with? Something like the verified checkmark on twitter. I'm ok with this being client side of course, and possibly even lost if you reinstall the app (better than nothing).
I like to verify keys of my main professional contacts on WhatsApp, but it's hard to remember who you verified keys with, and then whether the key was changed since last time you verified it.
Threema offers that. You do a QR code scan, after which a contact is marked as verified (3 green dots). Since Threema has a fixed key per user, these verifications are persistent and most people transfer their keys when switching to a new phone.
It would be good to be able to check how long the present safety number has been in place. That would allow people who have become concerned about snooping to rule out snooping back to that point ("hey, when did you last change your phone?").
>TLS certificate errors are not something that should happen under normal circumstances. When a TLS certificate fails to validate, something is really wrong. As we've gotten better about ensuring those conditions, browsers have made it harder and harder to get past the warnings, because they're not warnings anymore -- they're error conditions.
Not paying Verisign your rent? That's an "error condition".
(Here of course referring to the choice of browser vendors to block access to web sites that offer secure end-to-end crypto via TLS, but merely haven't paid a browser-trusted CA to issue a new cert with a future expiration date.)
That would have been a fair statement a couple of years ago, but we live in a day when you can get free annual certs manually (StartSSL) and free 90-day certs automatically (Let's Encrypt).
The StartSSL CA is in the process of being blacklisted by major browser vendors because it issued a certificate for github.com to someone who clearly does not run github.com. [0]
Let's Encrypt just barely left beta (also this summer), and I'll admit that I haven't investigated it thoroughly, but it appears that some widespread devices are still incompatible (also consider the versions that accept Let's Encrypt certificates; some of those are fairly recent, like CM 10). [1]
While some noble souls like LetsEncrypt have sought to remedy this rent-seeking behavior, it remains the fact that in most cases, a traditional CA is going to be required for a couple more years at least.
No, but they don't have to, because (the vast majority of) users don't establish trust in a website's TLS certificate themselves; instead, they rely on a trusted third party: the set of certificate authorities in their browser or operating system's root store. End-to-end encrypted messengers like Signal and WhatsApp don't rely on a trusted third party to establish trust, instead (rightly) leaving it to users to establish trust between each other.
> That's probably going to be a fundamental limit of any application that serves billions of people from many different demographics all over the world.
Moxie, some of us are of the opinion that [that] (implied) goal is certainly noble but ill-considered.
Modern state surveillance has 2 general unstated goals:
1) Create an atmosphere of fear to effect self-censorship. Some states (such as China) announce this as a matter of state policy. Others (such as the US) drop hints. The UK is somewhere in between.
2) Identify emerging memes, clusters, and thought leaders. This information is then used to counter, disrupt, and discredit/isolate (respectively).
(And yes, the stated public goals are to prevent terrorism, child pornography, and crimes.)
From the political angle -- the activist angle, if you will -- the goal of "serving billions of people from many different demographics all over the world" is at minimum misguided and counterproductive, and at maximum a hazard.
I think you are wrong. When only a small portion of the population can use end-to-end encryption in their day-to-day communications, a state can declare it (e2e encryption) "suspicious" and achieve both goals far more easily.
I don't understand. How is it misguided, and who is it a hazard to? Are you saying the unstated goals of state surveillance are good ones which conflict with popular use of crypto, and therefore popular use of crypto is bad?
I THINK the commenter was saying that "serving billions of people from many different demographics all over the world" is inviting all of those different people together so you can betray them all at once.
> Key change notifications are off by default in WhatsApp. That's probably going to be a fundamental limit of any application that serves billions of people from many different demographics all over the world.
I'm not sure what exactly the reason for that is. Is it UX? Like, if someone gets a new phone and creates a new key pair, will their friends get scared by the warnings?
> Even if they were on by default, a fact of life is that the majority of users will probably not verify keys. That is our reality.
Another fact of life is bad password choices, which is why Gmail doesn't let you use "love", "sex", or "secret" as a password :)
Browsers, for instance, throw warnings when something is wrong with a cert. Even when 99% of the time it's some domain-name issue or an expired certificate, I think it's a nice default. By letting Facebook rekey at any time, you are effectively making them a kind of CA. I don't think there is a good reason for that, especially not when WhatsApp claims that even they can't read your messages; it feels dishonest to me. But then again, this is just a messaging app downloaded from Google Play running on Android, so my expectations aren't too high...
The problem with key notifications being off is for those users who really want to be secure and downloaded WhatsApp because they wanted E2E, but didn't know they had to go into the settings and turn the notifications on.
The problem with key notifications on-by-default is that regular users see warnings they don't understand and get warning-fatigue.
So how about making a default-on notification that is understandable for all users? Like:
::: It seems like Alice switched to a new phone (i)
where Bob can click the (i) for more info, or just ignore the notification. If Bob was security-conscious, he'd perk up at that message, while the majority would just go "meh" or congratulate them on their new phone.
Which is OK, as the people you know would understand that you are that type of person.
It would also pop up when someone reinstalled the app.
It would be rather annoying, but it should be on by default, and the first time it pops up it should give clear information:
"WhatsApp is set to notify you when a key changes; this could be due to a change of phone or a reinstall of the app. If you do not wish to receive these notifications, click here."
This would allow users who want the notifications to keep them, and those who don't to turn them off after the first change.
Is there at least a record of what keys were used on both sides, so that I could verify later whether or not this has taken place?
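One way a client could make that possible is a trust-on-first-use (TOFU) audit log: record the fingerprint of every key ever seen for a contact, with a timestamp, so key changes can be reviewed after the fact. A minimal sketch of the idea (the storage shape and function names are my own illustration, not anything WhatsApp actually ships):

```python
import hashlib
import time

def fingerprint(public_key: bytes) -> str:
    """Short hex fingerprint of a contact's public key."""
    return hashlib.sha256(public_key).hexdigest()[:16]

def record_key(log: dict, contact: str, public_key: bytes) -> bool:
    """Append this key's fingerprint to the contact's history.

    Returns True if this is a new (i.e. changed) key for the contact,
    False if it matches the most recently seen key."""
    fp = fingerprint(public_key)
    history = log.setdefault(contact, [])
    if history and history[-1]["fp"] == fp:
        return False  # same key as last time, nothing to audit
    history.append({"fp": fp, "first_seen": time.time()})
    return True

# Usage: a key change shows up as a new entry in the contact's history.
log = {}
record_key(log, "alice", b"alice-key-v1")
changed = record_key(log, "alice", b"alice-key-v2")
print(changed, [entry["fp"] for entry in log["alice"]])
```

With a log like this, even a user who had notifications off at the time could later check whether a contact's key changed during a given period.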
> You've just described a "man in the middle" attack. It is endemic to any public key cryptosystem, including Signal and PGP, not just WhatsApp. The notification that you see in WhatsApp, Signal, SSH, PGP, or whatever is the defense.
I think it's still completely valid to say that WA should not claim to be unable to snoop. They can, and appear to be able to do so undetected with the default settings. Does the setup at least ask users if they want this feature on or off?
What would prevent WhatsApp from shipping a client where they control the rekeying notification setting remotely?
I suppose there must ultimately be some level of trust in WhatsApp that the client is doing what it says it is? Unless we're willing to sniff every piece of network traffic from it.
> That way the server has no knowledge of who it can MITM without getting caught.
What exactly do you think is the worst thing that could happen if you "catch" them doing this?
Now what do you think is the worst thing that could happen if they receive a subpoena or NSL or whatever that tells them to do this regardless of whether the user finds out or not (because the government wants the message contents that badly)?
> What exactly do you think is the worst thing that could happen if you "catch" them doing this?
[I'm not the OP, but my 0.02]:
Hopefully there would be an outcry, initially started by technically sophisticated communities like this, and credible articles in the Guardian, eventually causing significant user anger, and letting competitors gain against them. People running social networks care about mass user anger.
Hopefully that possibility keeps them honest.
Hopefully people don't cry wolf too many times, like today - slowly poisoning the watchdog!
> Now what do you think is the worst thing that could happen if they receive a subpoena or NSL or whatever that tells them to do this regardless of whether the user finds out or not (because the government wants the message contents that badly)?
This has got to primarily be a defense against ongoing mass surveillance.
If the government can compel them (via NSL or force or whatever) to change the service so that it just spies on a few targeted individuals, wouldn't it be easier to push those individuals a malicious client update, rather than MITM the encryption and hope they have notifications off?
Does anyone know how to build a massively adopted network that resists targeted NSLs? I'm grateful we appear to have one that is resistant to pervasive monitoring.
It's rather disappointing UX, and something that trains users to accept key changes from their contacts, that Signal doesn't support affirmations of key continuity.
If you get a new phone without having lost the old one, it would be good to have a feature where Signal on the new phone shows its public key as a QR code. You scan it with Signal on the old phone, and the old phone generates a protocol message to your contacts indicating a legitimate key roll-over, instead of the "key changed but you don't know why" UX.
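The roll-over message described above amounts to the old key vouching for the new one: a statement, signed with the old identity key, naming the new key's fingerprint, which contacts who already trust the old key can verify. A toy sketch of the idea (note: this uses an HMAC keyed with the old key purely as a stand-in for a signature; a real protocol would use a public-key signature such as Ed25519, so contacts could verify with the old *public* key rather than a shared secret):

```python
import hashlib
import hmac

def rollover_message(old_key: bytes, new_key: bytes) -> dict:
    """Old device attests to the new key.

    Toy model: the 'signature' is an HMAC under the old key. A real
    protocol would use a digital signature with the old identity key."""
    new_fp = hashlib.sha256(new_key).hexdigest()
    sig = hmac.new(old_key, new_fp.encode(), hashlib.sha256).hexdigest()
    return {"new_key_fp": new_fp, "sig": sig}

def verify_rollover(old_key: bytes, msg: dict) -> bool:
    """A contact who already trusts old_key checks the attestation."""
    expected = hmac.new(old_key, msg["new_key_fp"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["sig"])

# Usage: a valid roll-over verifies; a forged one does not.
old, new = b"old-identity-key", b"new-identity-key"
msg = rollover_message(old, new)
print(verify_rollover(old, msg))
```

The point of the design is that a verified roll-over can be rendered as "Alice moved to a new phone" rather than as a scary unexplained key change, while an *unattested* key change (the MITM case) still triggers the full warning.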
>I think we should all remain open to ideas about how we can improve this UX within the limits a mass market product has to operate within, but that's very different from labeling this a "backdoor."
Then release your product in a manner that lets people improve the UX and correctly label what is and isn't a backdoor.