So RMS was right after all: open source gives you visible security, where proprietary products are encumbered with all sorts of unwanted and even dangerous "features".
Yeah, but you'd presumably need to have open source hardware shipped to you, right? And then you'd still need to inspect the hardware / software as shipped, right? RMS was right about a lot of things, but remember that his most important ideas are still waiting on lots of other things to happen first before they become practical.
Not perfect, but the ALIX boards[1] ship with a full board schematic. They also ship with the source code for the BIOS and some information that would help you write your own. All the major open source router projects work on them. The 6f2 even supports 3G modems, so you can multi-home your connection.
I know it's not perfect (you still have to trust the AMD chips and so forth), but it seems a lot better than just buying a Linksys and hoping it works with OpenWRT.
Unless you're willing to 10x your integration costs by building a reliable verification process --- something that even specialists in hardware verification don't actually have --- and then painstakingly applying it to every piece of equipment you receive, full board schematics don't help you. For that matter, "free software" and "open source" are probably providing a false sense of security.
For some hardware attacks, like transistor-level dopant mask swaps, there isn’t any reliable way to detect them, not even optical inspection (because the layout is unchanged) nor functional testing (because passing BIST and external benchmark results can be faked). See the paper from UMass: https://people.umass.edu/gbecker/BeckerChes13.pdf
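To make the "functional testing can be faked" point concrete, here's a toy sketch in Python (purely illustrative; the paper's real attack works at the dopant level on an Ivy-Bridge-class RNG): a generator whose effective entropy has been quietly cut to 32 bits still sails through a naive BIST-style statistical check.

    import os
    import random

    def honest_rng(n_bytes):
        # Full-entropy output, e.g. from a hardware source.
        return os.urandom(n_bytes)

    def trojaned_rng(n_bytes):
        # Only 32 bits of real entropy; the rest is deterministic
        # expansion, so an attacker can brute-force the seed while
        # the output still *looks* statistically random.
        seed = int.from_bytes(os.urandom(4), "big")
        prng = random.Random(seed)
        return bytes(prng.getrandbits(8) for _ in range(n_bytes))

    def monobit_test(data):
        # Naive BIST-style check: are roughly half the bits set?
        ones = sum(bin(b).count("1") for b in data)
        return abs(ones / (8 * len(data)) - 0.5) < 0.01

    for rng in (honest_rng, trojaned_rng):
        print(rng.__name__, monobit_test(rng(100_000)))  # both pass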
Since the “detection” I’m referring to is already extremely difficult before the chip even leaves the legitimate chip manufacturer’s facility, what hope could someone have of opening a modern IXP-scale router and determining if any of the zillion chips inside has been trojaned by double-0-mailman?
I think we're operating from the assumption that the fab is itself not compromised; if you think it might be, you're right, all is lost. But I think we're converging on the same point: all is lost anyways.
I know you're a noted security researcher, but it seems like you write about security as if it were a binary: "secure or insecure".
How do your comments in this thread square with the fact that nothing can ever be perfect, and that added sophistication in security can only ever reduce the probability of an attacker's success, or the percentage of attackers who make it through everything?
This raises an interesting point about "tamper-revealing" methods of security. Open source, publicized hashes of executables, even key exchange protocols: all of these deter attacker tampering by making it more likely that attacks (or even just bad code) will get noticed. They don't strictly prevent tampering, they discourage it.
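The executable-hash case is the simplest to sketch (the digest below is a placeholder, not any real project's): anyone can recompute it, and a widely published value is something a tamperer can't quietly rewrite everywhere at once.

    import hashlib

    # Hypothetical digest published out-of-band (project website, mailing
    # list, keyserver): the deterrent is that anyone may check it, and the
    # attacker can't know who will.
    PUBLISHED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

    def matches_published(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest() == PUBLISHED_SHA256

    # Note: a match doesn't *prevent* tampering with your copy in transit;
    # it makes tampering detectable, which is the deterrent.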
Take DHKE. It "defends" against MITM. But it doesn't strictly prevent it. An attacker could perform the protocol with both Alice and Bob separately, then guard the line and tamper with any communications on that channel that attempt to confirm the shared secret. The attacker couldn't win against a determined Alice and Bob, though, because A&B could theoretically use other channels, or even publicly broadcast some confirmation of the shared secret. So the smart attacker is "probably discouraged" from the noisy MITM.
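Here's that attack in toy form (deliberately tiny textbook DH, nowhere near real parameter sizes): Mallory completes the protocol separately with each side, and only a fingerprint comparison over a channel she can't rewrite exposes the split.

    import hashlib
    import secrets

    p, g = 2**127 - 1, 3  # toy group; real DH uses much larger, vetted primes

    def keypair():
        x = secrets.randbelow(p - 2) + 1
        return x, pow(g, x, p)

    def shared_secret(my_priv, their_pub):
        return pow(their_pub, my_priv, p)

    def fingerprint(s):
        return hashlib.sha256(str(s).encode()).hexdigest()[:8]

    a_priv, a_pub = keypair()    # Alice
    b_priv, b_pub = keypair()    # Bob
    ma_priv, ma_pub = keypair()  # Mallory's keys facing Alice
    mb_priv, mb_pub = keypair()  # Mallory's keys facing Bob

    # Mallory swaps her public values into the exchange in transit:
    alice_side = shared_secret(a_priv, ma_pub)  # Alice <-> Mallory
    bob_side = shared_secret(b_priv, mb_pub)    # Mallory <-> Bob

    # On the compromised channel Mallory re-encrypts and rewrites any
    # confirmation message. Fingerprints read aloud over a second channel
    # (or broadcast publicly) are what she can't patch:
    print(fingerprint(alice_side), fingerprint(bob_side))  # they differ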
But how many key exchange implementations actually use separate (availability ensuring) channels to confirm no one is in the middle? It's prohibitively expensive to verify no one's attacking something that no one ever attacks.
But then again, things no defender verifies are attractive to attackers. And some attackers are willing to get noticed 9 times out of 10 to land the one payout. "Tolerance for getting caught" is not always zero, that's another variable that complicates our Nash equilibrium here...
Do we even need to trust a router? Seems better to just use encryption for everything and trust nothing in the middle, only the endpoints. Serious question.
It's a good question, but also a question-begging one.
On the one hand, yes: keeping security out of the middle of the network and pushing it to the endpoints is something that "End-to-End Arguments in System Design" anticipated several decades ago, and is (to my eyes) clearly the right design principle for this problem.
On the other hand, the endpoints doing the encryption are also going to be COTS equipment sourced from major industrial centers and acquired at a scale that probably precludes individual hardware verification, at least at the price point that enables their widespread deployment today.
There are things you can do in an evil router that encryption and integrity protection don't mitigate.
Adding unique delay-variance patterns to a specific set of packets (matched on source address, payload type, etc.). This makes it easier to detect and follow interesting traffic.
You can also copy certain packets, divert packets, or induce errors (to cause resends that in turn trigger repetitions in higher layers, which might be usable for information leakage from the protection mechanism).
The router really is your friendly, silent mitm helper gnome.
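A quick simulation of the delay-variance trick above (hypothetical numbers, not a real attack tool): the router stamps a fixed jitter pattern onto one flow's packets, and a downstream observer correlates inter-arrival noise against that known pattern to re-identify the flow.

    import random

    PATTERN = [0.002, -0.002, 0.002, 0.002, -0.002, -0.002, 0.002, -0.002]

    def forward(send_times, watermark):
        # Simulated router: baseline queueing delay plus Gaussian noise,
        # with the pattern added only to the targeted flow.
        out = []
        for i, t in enumerate(send_times):
            d = 0.005 + random.gauss(0, 0.0005)
            if watermark:
                d += PATTERN[i % len(PATTERN)]
            out.append(t + d)
        return out

    def score(arrivals, interval=0.010):
        # Strip the known send schedule, then correlate residual delay
        # with the pattern; an innocent flow scores near zero.
        resid = [a - i * interval for i, a in enumerate(arrivals)]
        return sum(r * PATTERN[i % len(PATTERN)] for i, r in enumerate(resid))

    sends = [i * 0.010 for i in range(400)]  # one packet every 10 ms
    print(score(forward(sends, watermark=True)))   # ~0.0016: flagged
    print(score(forward(sends, watermark=False)))  # ~0: ignored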
You have to trust that the router delivers your packets to the endpoint at all; encryption doesn't prevent it from dropping them. And unless you're delivering extra packets to mask your control packets, it's trivial for the router to make drop decisions based on packet flow data.
But dropping packets isn't supposed to cause harm, because it can happen anyway for various reasons. And a malicious router that drops packets is guaranteed to get some attention sooner or later.
It depends on which drops it causes. If it only increases the drop rate for certain streams, it will be harder to notice. And of course the perf counters can't be trusted, which makes detection harder still.
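A sketch of how cheap that is from the router's side (hypothetical forwarding logic, not anyone's actual firmware): key the decision on a flow tuple and simply leave the visible counter alone.

    # Hypothetical watch list: (src, dst, dst_port)
    TARGET_FLOWS = {("10.0.0.7", "203.0.113.9", 443)}

    reported_drops = 0   # the counter operators can query
    actual_drops = 0     # what really happened; never exposed

    def forward(pkt, seq):
        global reported_drops, actual_drops
        key = (pkt["src"], pkt["dst"], pkt["dport"])
        if key in TARGET_FLOWS and seq % 10 == 0:
            actual_drops += 1   # 10% loss on one stream only,
            return None         # and reported_drops never moves
        return pkt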
"And then you'd still need to inspect the hardware / software as shipped, right?"
Of course not. Just a small handful of people would have to scrutinize it and keep up a credible threat of catching out any skullduggery. These things are mass produced. The efficiency benefits of design-once/copy-many also translate into audit-once/benefit-many.
NSA has a program of intercepting shipments to targets and silently replacing the gear with identical (but backdoored) equipment. There's even a catalog of the equipment they have ready-made replacements for and a price sheet (presumably for internal cost accounting) that leaked a few months ago.
Like many private-sector security consulting companies, they also do security research, looking for vulnerabilities accidentally left behind by naive programmers. A catalog also leaked of exploits written for vulnerabilities they discovered (or bought) but never disclosed. Failing to disclose these violates the NSA's mission to protect the security of US infrastructure, but I can't say I'm surprised that an intelligence agency that pays hackers has some exploits to show for it.
Aside from the doubt cast on the validity of the NSA's advice on cryptography standards, there is no evidence that the NSA actually introduces backdoors into the design of mass-produced products.
If you're not interesting enough for the NSA to physically intercept your package from, say, Cisco (or, for the more cynical, to ask Cisco to put the "special" version of IOS on your router), your inspection of the gear says nothing about what's running on an apparently identical unit headed for a foreign government.
Granted, "only" targeting a handful of communications companies may well give them access to most of the world's communications, but this "NSA is deliberately backdooring everything" business is vastly exaggerated relative to the evidence.
Like I said below, you can improve a situation without outright fixing it.
I'm well aware of the program you're referring to. Have you seen some of the unit costs? And that doesn't even include operating costs. The US is already near-bankrupt! Intercepting shipments and swapping in look-alike models doesn't scale to mass surveillance, which is kind of the key point.
Tell me how that question relates in any way to the statement I was making.
Stating that "...you'd still need to inspect the hardware/software as shipped..." seems to suggest that it's necessary to check each and every unit of a given design. I was just addressing that part of the parent comment directly, along with its general bias towards what can't be done as opposed to what can.
I wasn't trying to suggest that open codebases are a silver bullet -- but you can improve something without having a complete and comprehensive solution.
Open-source designs (for software or hardware) aren't a complete solution. You need to regularly audit the end-result, lest the compiler/fab add unwanted nasties to your design.
This doesn't do anything. NSA is intercepting specific shipments. Unless you happen to be a target of the same program, what you're auditing is not the same as what the target received.
What bitstream is the FPGA currently configured with? The one in flash? Really? What's to say the bitstream made it to the FPGA correctly? Can you tell? What if it reconfigured itself [1]?
There's some research [2] into using authenticated, encrypted bitstreams, but even if the implementation matches the theory (and after all, it's crypto, we know how that goes...) this only reaches the same level of security as a fixed-configuration ASIC, since FPGAs are vulnerable to the same nefarious fab attacks as ASICs.
FPGA bitstreams are big, sometimes even big enough that you couldn't fit another bitstream in hardware anywhere. What's more, making even a small change to the bitstream and re-synthesising tends to completely change how the design is laid out in hardware - so launching an attack that targets a specific bitstream, like most of the obvious nefarious-fab attacks, isn't much good.
You have to trust someone, but we can move in either direction along the spectrum from "I have to trust all of these many people and if any of them is untrustworthy I'm boned" to "I have to trust some of these many people and if all of them are untrustworthy I'm boned". There are both technological and political elements to any solution here.
Regularly? Almost any major software/hardware project changes iteratively over time. You just have to audit the changes between major releases, which, for firmware, are typically quite infrequent.
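In practice that delta audit can start as mechanically as this (the directory names are hypothetical, and filecmp's comparison is shallow, so you'd pair it with content hashing for anything serious):

    import filecmp

    def walk(cmp, changed, added, removed):
        # Collect per-directory differences, then recurse into subdirs.
        changed += [f"{cmp.left}/{f}" for f in cmp.diff_files]
        removed += [f"{cmp.left}/{f}" for f in cmp.left_only]
        added += [f"{cmp.right}/{f}" for f in cmp.right_only]
        for sub in cmp.subdirs.values():
            walk(sub, changed, added, removed)

    changed, added, removed = [], [], []
    walk(filecmp.dircmp("firmware-1.0", "firmware-2.0"), changed, added, removed)
    print(len(changed), "changed,", len(added), "added,", len(removed), "removed")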
Also, just transitioning to a more open process with an open codebase has the effect of "keeping them honest". Once the code is out there, they have no way of knowing who might be scrutinizing it and have no chance to retroactively cover something up.
This is why it's such a big deal when companies make a commitment to openness (the real kind -- not the buzzword kind). It's a statement of being willing to forego most of the dirty tactics that being closed source allows and to play a fair(er) game.
Open source is not the whole story either. A decade ago I felt pretty smug with GNU/Linux; these days it has become such a large target that you can no longer ignore the countless non-open components in every system. And with things like Heartbleed, it's not clear that we're even defended against stochastic attackers, never mind intelligent ones. (Apologies for beating the freshest dead horse.)
The topic keeps coming up from time to time, since it's not a straightforward thing to fix and only manifests itself with publicity. I've written a few comments before about what I see as some of its ultimate contours:
Open source depends on someone looking at the code. If Heartbleed proved anything, it is that most open source projects are pretty thinly funded and manned, and very few users ever bother to look at the source.
The same goes for projects with broken architecture and bad attitudes toward end-users. OpenSSL fits this label, but so do KDE and GNOME, IMO. Systemd as well, which is among the reasons I regard it so critically.
RMS reminds me of Ignaz Semmelweis. He is correct, but people stick their fingers in their ears and try to reject what he is saying because of how he delivers his message and because they are not comfortable with the implications of what he is saying.
The NSA is intercepting shipments of routers, and installing their own... stuff in them. I don't know if the stuff is hardware or just software/firmware.
How would 'open source' protect you from this? If the 'stuff' is firmware, then reflashing your own firmware after you get the device would protect you from it, but it doesn't particularly matter if the firmware you flash is open source or not. If the 'stuff' is hardware, then only an inspection of the insides by someone qualified to detect such hardware would protect you, and that still has little to do with open source.
Not unless you are only downloading pre-compiled binaries.
Some Linux distributions, for instance, try very hard to only ever download source from the internet and compile it locally. This is very slow (some things can take a very long time to compile).
What downloaded binaries? The question was, is it really useful to have the source and schematics for the chips on your hardware, if NSA (as described in the article) is modifying the hardware during shipping, so you don't really know if the chip you have does what it's supposed to.
Hardware-based rootkits can do pretty much anything to an OS running on that hardware, and be undetectable from within that OS.
my 2c