The statement itself is basically 98% false. I've been a Coverity user since very early days, and have used a few other static-analysis tools as well. Every such tool that I've seen runs multiple separate kinds of checks. Yes, the false positive rate for some of those checks can be alarmingly/annoyingly high. OTOH, any software developer with half a brain can see that other checks are much more accurate. Some are darn near impossible to fool. If you focus on those, you can find and fix a whole bunch of real bugs without too much distraction from false positives.
Her statement gains 1% truth because Oracle might already have picked the low-hanging fruit, so any further reports they get really are mostly chaff. I find this unlikely, but it's possible.
> A customer can’t analyze the code to see whether there is a control that prevents the attack
That's actually a pretty decent point. Anyone who has studied static-analysis reports for any length of time has probably encountered this phenomenon. For example, you might find a potential buffer overflow that's real in the context of the code you analyzed, but the offending index can't actually be produced because of other code that you didn't analyze. Or maybe a certain combination of conditions is impossible for reasons related to a mathematical property that has been thoroughly vetted but that the analysis software couldn't reasonably be expected to "know" about. Ironically, these kinds of "reasonable false positives" tend to show up more in good programmers' code, because they're diligent about adding defensive code for every condition - including conditions that aren't (currently) possible. In any case, while it's a good point, it applies rarely enough that it doesn't really support the author's broader position. She gets another 1% for this.
This is diametrically the opposite of my experience with source code scanners.
I think the impedance mismatch here might be that you're a software developer, and we're talking about security teams.
I don't know that anyone is arguing that static analysis is useless for developers. If you're intimately familiar with the code you're working on, there are probably a lot of ways to make static analysis results both valuable in every edit/compile/debug cycle, and an important part of your team's release process.
But when you're close to the code, it's easy to forget how much of the tool's output you're ignoring (either literally, by just skimming past findings you know don't matter, or implicitly, by configuring the tool to match your environment or subtly changing your coding style to conform to Coverity's expectations).
Security teams generally can't do this. They're stuck with the raw output of a barely-configured tool. The results of static analysis in these circumstances are nonsensical: memory leaks, uninitialized variables, race conditions, tainted inputs reaching SQL queries, improper cleanup of sensitive variables - 99.9% of which aren't valid findings, but all of which look super important, especially if you're a consultant with 6 months of experience charging $150/hr to run Fortify on someone else's code, then petulantly demanding a response for every fucking issue the scanner generates.
They're fine dev tools, but they are terrible tools for adversarial inspection, which is what Davidson is talking about.
If somebody's paying a consultant hundreds of dollars an hour to run a static analysis tool and forward the output, without applying a developer's skills in between, they've been defrauded. Static analyzers are coding tools, much like compilers. Their input is code. Their output is pointers to code. True adversarial analysis, or any other endeavor involving static analysis, requires something extremely close to a coder's skill set. I guess if I believed otherwise then I might be tempted to take Davidson's side too, but that's not the case.