I think the most practical reason not to flag which bugs are security bugs is to avoid helping black-hat hackers by painting a giant neon sign, and that should be more than enough.
I think all the other explanations are just doublethink. Why? If "bugs are just bugs" is really a true sentiment, why is there a separate disclosure process for security bugs? What does it even mean to classify a bug as a security bug during reporting if it's no different from any other bug report? Why are fixes developed in secret, and why are embargoes sometimes invoked? I guess some bugs are more equal than others?
As mentioned in the article, every bug is potentially a security problem to someone.
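To make that concrete, here is a hypothetical C sketch (my own example, not from the article; the struct and function names are made up) of an ordinary-looking bug whose security relevance depends entirely on who controls the input:

    #include <string.h>

    /* Hypothetical example: a "boring" string-handling bug. If `name`
     * comes from a trusted config file, fixing this is a robustness fix;
     * if it can come from the network, the same missing bounds check is
     * a stack buffer overflow, i.e. a textbook security vulnerability. */
    struct session {
        char user[32];
    };

    void set_user(struct session *s, const char *name)
    {
        /* Bug: no length check; a long name overflows s->user. A commit
         * fixing it might just say "fix string handling in set_user". */
        strcpy(s->user, name);
    }

    void set_user_fixed(struct session *s, const char *name)
    {
        /* Fix: bounded copy with guaranteed NUL termination. */
        strncpy(s->user, name, sizeof(s->user) - 1);
        s->user[sizeof(s->user) - 1] = '\0';
    }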
If you know that something is a security issue to your organization, you definitely don't want to paint a target on your back by reporting the bug publicly with an email address <your_name>@<your_org>.com. In the end, it is actually quite rare (given the size of the code base and the popularity of Linux) that a bug has a very wide security impact.
The vast majority of security issues don't affect organizations that are serious about security (yes really, SELinux eliminates or seriously reduces the impact of the vast majority of security bugs).
The problem with that argument is that the reports don’t necessarily come from the organization for whom it’s an issue. Security researchers unaffiliated with and not impacted by any such issue still report it this way (e.g. Project Zero reporting issues that don’t impact Google at all).
Also, Android uses SELinux and still has lots of kernel exploits. Believing SELinux solves the vast majority of security issues is fallacious, especially since it’s primarily about securing userspace, not the kernel itself.
> The problem with that argument is that the reports don’t necessarily come from the organization for whom it’s an issue.
You can already say that for the majority of the bugs being fixed, and I think that's one of the points: tagging certain bugs as exploitable makes it seem like the others aren't.
More generally, someone's minor issue might be a major one for someone else, and not just in security. It could be anything the user cares about: data, hardware, energy, time.
Perhaps the real problem is that security is just one view of the bigger picture. Security is important, I'm not saying the opposite, but if it's only one aspect of development, why focus on it in the development logs? Shouldn't it instead be discussed on its own, in separate documents, mailing lists, etc., by those who are primarily concerned with it?
Are memory leak fixes described as memory leak fixes in the logs or intentionally omitted as such? Are kernel panics or hangs not described in the commit logs even if they only happen in weird scenarios? That’s clearly not what’s happening, which means security bugs are still recorded and described differently, through omission.
However you look at it, the only real justification that’s consistent with observed behaviors is that pointing out security vulnerabilities in the development log helps attackers. That explains why known-exploitable bugs are reported differently beforehand and described differently after the fact in the commit logs. That wouldn’t happen if “a bug is a bug” were actually a genuinely held position.
> However you look at it, the only real justification that’s consistent with observed behaviors is that pointing out security vulnerabilities in the development log helps attackers.
And on top of your other concerns, this quoted bit smells an awful lot like 'security through obscurity' to me.
The people we really need to worry about today, state actors, have plenty of manpower available to watch every commit going into the kernel and figure out which ones correct an exploitable flaw, and how. They also have the resources to move quickly and take advantage of those flaws before downstream distros finish testing and integrating upstream changes into their kernels, and before responsible organizations finish their regression testing and let the kernel updates into their deployments. The window is especially wide because distro maintainers and sysadmins aren't going to move with any urgency to roll out a kernel containing a security-critical fix: they don't know they need to, because *nobody's warned them*.
Obscuring which fixes have security impact isn't a step toward avoiding helping the bad guys, because they don't need the help. Being loud and clear about them helps the good guys: it allows them to fast-track (or even skip) testing and deploying fixes, or to take more immediate mitigations like disabling vulnerable features pending a tested fix rollout.
There are channels in place to discuss security matters in open source. I am by no means an expert, nor very interested in that topic, but just searching a bit led me to
There are lots of different kinds of bad guys. This probably has marginal impact on state actors. But organized crime or malicious individuals? It probably raises the bar a bit, and part of defense in depth is employing a collection of mitigations to increase the cost of creating an exploit.
> Are memory leak fixes described as memory leak fixes in the logs or intentionally omitted as such? Are kernel panics or hangs not described in the commit logs even if they only happen in weird scenarios?
I don't know or follow kernel development well enough to answer these questions. My point was just a general reflection, and admittedly a reformulation of Linus's argument, which I think is genuinely valid.
If you'll allow me, one could frame this differently, though: is the memory leak the symptom or the problem?
Indeed nobody does that, because it would just be pointless; it doesn't expose the real issue. Is a security vulnerability a symptom or the real issue, though? Doesn't it depend on the purpose of the code containing the bug?
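To illustrate what I mean by "depends on the purpose of the code", here is a hypothetical C sketch (the names and scenario are mine):

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical parser stub: fails on empty input, standing in for
     * any routine that can fail on attacker-supplied data. */
    static int parse(const char *payload, char *out)
    {
        if (payload == NULL || payload[0] == '\0')
            return -1;
        strncpy(out, payload, 4095);
        out[4095] = '\0';
        return 0;
    }

    /* A leak on an error path. In a short-lived CLI tool it's cosmetic;
     * in a long-running daemon where an attacker can trigger the failing
     * path at will, the exact same bug is a remotely drivable
     * memory-exhaustion DoS. Same defect, same one-line fix, very
     * different security impact depending on the code's purpose. */
    int handle_request(const char *payload)
    {
        char *buf = malloc(4096);
        if (buf == NULL)
            return -1;

        if (parse(payload, buf) != 0)
            return -1;   /* bug: leaks buf; fix is free(buf) before returning */

        free(buf);
        return 0;
    }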
> I think the most practical reason not to flag which bugs are security bugs is to avoid helping black-hat hackers by painting a giant neon sign, and that should be more than enough.
It doesn't work. I've looked at the kernel commit log and found vulnerabilities that aren't announced/marked. Attackers know how to do this. Not announcing is a pure negative.
Linus's argument against labeling some bugs, or even lack of features, as security vulnerabilities is that all bugs can, with enough work and together with other circumstances, be a security vulnerability. Essentially every commit would need to be labeled as a CVE fix, and then it’s just extra work for nothing.
> Linus's argument against labeling some bugs, or even lack of features, as security vulnerabilities is that all bugs can, with enough work and together with other circumstances, be a security vulnerability.
This isn't true, though. Some bugs are not exploitable; some are trivial to exploit. Even if we'd sometimes end up with a DoS that was actually a privesc, how does that make it pointless to label the ones we know are privescs as such?
You can argue "oh no, sometimes we mislabeled a DoS", but most of the time you can tell ahead of time whether something is going to be a powerful vuln or not; I think this is a red herring to optimize around.
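To make the contrast concrete, a hypothetical C example (mine, not actual kernel code):

    #include <stdlib.h>

    /* (1) NULL dereference: on a modern kernel this typically means an
     * oops and a killed task, i.e. at worst a denial of service. */
    void bug_null(int *p)
    {
        if (p == NULL) {
            /* bug: should return here, but falls through */
        }
        *p = 0;   /* crashes when p is NULL */
    }

    /* (2) Use-after-free: the freed slot can be reallocated and filled
     * with attacker-controlled data before the stale pointer is used,
     * the classic starting point for privilege escalation. */
    struct obj {
        void (*callback)(void);
    };

    void bug_uaf(struct obj *o)
    {
        free(o);
        o->callback();   /* bug: call through freed, possibly reallocated memory */
    }

Both get one-line fixes and could share a near-identical commit message; only one of them is likely to end up in an exploit chain.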
> Essentially every commit would need to be labeled as a CVE fix, and then it’s just extra work for nothing.
This isn't true and has never been true for any other project. There are issues with the CVE system; this is not one of them. Note that the Linux kernel is the standout here: we don't have to guess about issues in the CVE system, we observe them all the time, and "we need a CVE for every commit" is not among them.