Yeah, but you'd presumably need to have open source hardware shipped to you, right? And then you'd still need to inspect the hardware / software as shipped, right? RMS was right about a lot of things, but remember that his most important ideas are still waiting on lots of other things to happen first before they become practical.
Not perfect, but the ALIX boards[1] ship with a full board schematic. They also ship with the source code for the BIOS and some information that would help you write your own. All the major open-source router projects work on them. The 6f2 model even supports 3G modems, so you can multi-home your connection.
I know it is not perfect (you still have to trust the AMD chips and so forth), but it seems a lot better than just buying a Linksys and hoping it works with OpenWRT.
Unless you're willing to 10x your integration costs by building a reliable verification process --- something that even specialists in hardware verification don't actually have --- and then painstakingly applying it to every piece of equipment you receive, full board schematics don't help you. For that matter, "free software" and "open source" are probably providing a false sense of security.
For some hardware attacks, like transistor-level dopant mask swaps, there isn't any reliable way to detect them: neither optical inspection (the layout is unchanged) nor functional testing (passing BIST and external benchmark results can be faked). See the paper from UMass: https://people.umass.edu/gbecker/BeckerChes13.pdf
Since the “detection” I’m referring to is already extremely difficult before the chip even leaves the legitimate chip manufacturer’s facility, what hope could someone have of opening a modern IXP-scale router and determining if any of the zillion chips inside has been trojaned by double-0-mailman?
I think we're operating from the assumption that the fab is itself not compromised; if you think it might be, you're right, all is lost. But I think we're converging on the same point: all is lost anyways.
I know you're a noted security researcher, but it seems like you're writing about security as if it were a binary "secure or insecure".
How do your comments in this thread square with the fact that nothing can ever be perfect, and that different degrees of sophistication in security can only ever reduce the probability of an attacker's success, or the percentage of attackers that make it through everything?
This raises an interesting point about "tamper revealing" methods of security. Open source, publicized hashes of executables, even key exchange protocols, all of these prevent attacker tampering by making it more likely attacks (or even just bad code) will get noticed. They don't strictly prevent tampering, they discourage it.
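To make the "tamper revealing" point concrete, here's a minimal sketch of the publicized-hash idea (Python; the "firmware" bytes and the function name are made up for illustration). The key assumption is that the published digest travels over a channel the attacker doesn't control:

```python
import hashlib

def verify_download(data: bytes, published_hex: str) -> bool:
    """Compare a downloaded artifact against a hash published out of band.

    Tampering isn't prevented; a mismatch just makes it *noticeable*,
    which is the whole point of tamper-revealing measures.
    """
    return hashlib.sha256(data).hexdigest() == published_hex

# Toy demonstration with made-up "firmware" bytes:
firmware = b"original firmware image"
published = hashlib.sha256(firmware).hexdigest()  # obtained out of band

assert verify_download(firmware, published)                  # untampered: matches
assert not verify_download(b"backdoored image", published)   # tampered: noticed
```

Nothing here stops the attacker from swapping the image; it only raises the odds that someone notices, which (per the argument above) is often enough to discourage the attempt.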
Take DHKE. It "defends" against MITM. But it doesn't strictly prevent it. An attacker could perform the protocol with both Alice and Bob separately, then guard the line and tamper with any communications on that channel that attempt to confirm the shared secret. The attacker couldn't win against a determined Alice and Bob, though, because A&B could theoretically use other channels, or even publicly broadcast some confirmation of the shared secret. So the smart attacker is "probably discouraged" from the noisy MITM.
But how many key exchange implementations actually use separate (availability ensuring) channels to confirm no one is in the middle? It's prohibitively expensive to verify no one's attacking something that no one ever attacks.
But then again, things no defender verifies are attractive to attackers. And some attackers are willing to get noticed 9 times out of 10 to land the one payout. "Tolerance for getting caught" is not always zero, that's another variable that complicates our Nash equilibrium here...
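For what it's worth, the MITM scenario above is easy to sketch with toy numbers (purely illustrative: real DH uses large primes or elliptic curves, and the secrets here are made up). Each leg of the exchange "succeeds", but Alice and Bob end up holding different keys, which is exactly what an out-of-band confirmation would reveal:

```python
# Toy Diffie-Hellman over a tiny group; illustration only.
p, g = 23, 5

def dh_public(secret: int) -> int:
    return pow(g, secret, p)

def dh_shared(their_public: int, my_secret: int) -> int:
    return pow(their_public, my_secret, p)

a, b, m = 6, 15, 13          # Alice's, Bob's, and Mallory's secrets (made up)

# Mallory intercepts and substitutes her own public value in both directions.
key_alice_mallory = dh_shared(dh_public(m), a)   # what Alice computes
key_mallory_alice = dh_shared(dh_public(a), m)   # Mallory's copy of it
key_bob_mallory   = dh_shared(dh_public(m), b)   # what Bob computes
key_mallory_bob   = dh_shared(dh_public(b), m)   # Mallory's copy of it

assert key_alice_mallory == key_mallory_alice    # each leg of DH "works"
assert key_bob_mallory == key_mallory_bob
# ...but Alice and Bob hold *different* keys, so comparing them over a
# channel Mallory can't tamper with exposes the attack:
assert key_alice_mallory != key_bob_mallory
```

If the only place A&B ever compare keys is the channel Mallory sits on, she can suppress the comparison; hence the "separate channel" requirement.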
Do we even need to trust a router? Seems better to just use encryption for everything and trust nothing in the middle, only the endpoints. Serious question.
It's a good question, but also one that begs the question.
On the one hand, yes: keeping security out of the middle of the network and pushing it to the endpoints is something that "End-to-End Arguments in System Design" predicted several decades ago, and is (to my eyes) clearly the right design principle for this problem.
On the other hand, the endpoints doing the encryption are also going to be COTS equipment sourced from major industrial centers and acquired at a scale that probably precludes individual hardware verification, at least at the price point that enables their widespread deployment today.
There are things you can do in an evil router that encryption and integrity protection don't mitigate.
Adding unique delay-variance patterns to packets matching a specific profile (source address, payload type, etc.) makes it easier to detect and follow interesting traffic.
You can also copy certain packets, divert packets, or induce errors (causing resends that in turn trigger repetitions in higher layers, which might be usable for information leakage from the protection mechanism).
The router really is your friendly, silent mitm helper gnome.
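A hypothetical sketch of the delay-variance trick, with made-up timing numbers and bit pattern, just to show how little the router has to do. The router stamps a per-packet delay pattern onto one flow; a downstream observer recovers the mark from timing alone, never touching the (encrypted) payload:

```python
import random

# Made-up watermark and timing parameters, illustration only.
WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical 8-bit flow marker
BASE_MS, BUMP_MS = 10.0, 5.0           # normal vs. "marked" added delay

def router_delays(flow_is_target: bool, jitter=1.0, rng=random.Random(0)):
    """Per-packet delay (ms) the router adds while forwarding 8 packets."""
    out = []
    for bit in WATERMARK:
        d = BASE_MS + rng.uniform(-jitter, jitter)  # normal forwarding noise
        if flow_is_target and bit:
            d += BUMP_MS                            # encode a 1-bit as extra delay
        out.append(d)
    return out

def observer_decodes(delays, threshold=BASE_MS + BUMP_MS / 2):
    """A downstream tap recovers the bit pattern from timing alone."""
    return [1 if d > threshold else 0 for d in delays]

assert observer_decodes(router_delays(flow_is_target=True)) == WATERMARK
assert observer_decodes(router_delays(flow_is_target=False)) == [0] * 8
```

Encryption and integrity checks are useless against this: nothing in the packet changed, only when it arrived.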
You have to trust that the router delivers your packets to the endpoint at all; encryption doesn't guarantee that. And unless you're sending extra packets to mask your control packets, it's trivial for the router to make drop decisions based on packet-flow metadata.
But dropping packets isn't supposed to cause harm, because drops can happen anyway for various reasons. And a malicious router dropping packets is guaranteed to attract some attention sooner or later.
It depends on the drops it causes. If it only increases the drop rate for certain streams, the difference will be harder to notice. And of course the router's perf counters can't be trusted either, which makes detection harder still.
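Rough back-of-the-envelope, with made-up loss rates: a modest per-flow bump in drop rate hides inside normal loss variance over a short observation window, and only stands out once you've watched a lot of packets:

```python
import random

rng = random.Random(42)

def observed_drop_rate(true_rate: float, n_packets: int) -> float:
    """Fraction of n_packets dropped, each dropped independently at true_rate."""
    drops = sum(rng.random() < true_rate for _ in range(n_packets))
    return drops / n_packets

BASELINE, TARGETED = 0.01, 0.03    # hypothetical loss rates, illustration only

# Over a short window the estimate is very noisy, so a 3% targeted flow is
# hard to distinguish from a 1% baseline flow; over a long window the
# elevated rate converges and stands out.
short_window = observed_drop_rate(TARGETED, 100)      # noisy estimate
long_window = observed_drop_rate(TARGETED, 100_000)   # converges near 3%
assert abs(long_window - TARGETED) < 0.005
```

And as noted above, the device whose counters you'd use to measure this is the very device you suspect, so the long-window measurement has to happen at the endpoints.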
"And then you'd still need to inspect the hardware / software as shipped, right?"
Of course not. Just a small handful of people would have to scrutinize it and keep up a credible threat of catching out any skullduggery. These things are mass produced. The efficiency benefits of design-once/copy-many also translate into audit-once/benefit-many.
NSA has a program of intercepting shipments to targets and silently replacing the gear with identical (but backdoored) equipment. There's even a catalog of the equipment they have ready-made replacements for and a price sheet (presumably for internal cost accounting) that leaked a few months ago.
Like many private-sector security consulting companies, they also do security research, looking for vulnerabilities accidentally left behind by naive programmers. A catalog of exploits written for vulnerabilities they discovered (or bought) but did not disclose has also leaked. Failing to disclose these violates the agency's mission to protect the security of US infrastructure, but I can't say I'm surprised that an intelligence agency that pays hackers has some exploits to show for it.
Aside from doubt cast onto the validity of NSA's advice on cryptography standards, there is no evidence that NSA actually introduces backdoors into the design of mass-produced products.
If you're not interesting enough for the NSA to physically intercept your package from, say, Cisco, (or, for the more cynical, ask Cisco to put the "special" version of IOS on your router) your inspection of the gear says nothing about what's running on an apparently identical unit headed for a foreign government.
Granted, "only" targeting a handful of communications companies may well give them access to most of the world's communications, but this "NSA is deliberately backdooring everything" business is vastly exaggerated relative to the evidence.
Like I said below, you can improve a situation without outright fixing it.
I'm well aware of the program you're referring to. Have you seen some of the unit costs? That doesn't even include operating costs. The US is already near-bankrupt! Intercepting shipments with look-alike models doesn't really scale to mass surveillance, which is kind of a key point.
Tell me how that question relates in any way to the statement I was making.
Stating that "...you'd still need to inspect the hardware/software as shipped..." seems to suggest that it's necessary to check each and every unit of a given design. I was just addressing that part of the parent comment directly, along with its general bias towards what can't be done as opposed to what can.
I wasn't trying to suggest that open codebases are a silver bullet -- but you can improve something without having a complete and comprehensive solution.