Not really. Everyone knows this flaw exists; the interesting part is how to fix it. Did you read the "suggestions" the researchers made in their paper[1]? They're clueless.
Fair enough. The kernel maintainers are probably much more aware of this than the average open source project. Maybe for some projects it would change their mindset from knowing that this could be happening, to knowing that this will be happening.
Maybe it's a pipe dream, but I have a feeling it could lead to discussions of "what could we have done to catch this automatically," which in turn would lead to better static analysis tools.
Edit: It would be about as useful as pen testing that includes social engineering. That is to say, everyone knows there are dishonest people, but they may not be aware of some of the techniques they use.
> Maybe it's a pipe dream, but I have a feeling it could lead to discussions of "what could we have done to catch this automatically," which in turn would lead to better static analysis tools.
It did do that, at least twenty years ago. Static analysis tooling is a huge, active area of research and the kernel is frequently a target of that research. Ditto for other areas like language development (see the recent work on getting Rust into the kernel). If these students had tried making real contributions to those areas, I'm sure they would have been welcome. But that kind of work is difficult and requires real research and development, which these students aren't interested in and/or capable of. So we got this trash instead, and now hopefully you understand the harsh reaction to it.
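To make that concrete, here is a toy sketch of the kind of pattern such tooling hunts for automatically. It is purely illustrative: it is written in Python, while the kernel's real tools (Sparse, smatch, Coccinelle) work on C and actually track control flow rather than just line order.

    import ast

    # Toy "use-after-free" checker: flag any variable that is read after
    # being passed to a call named `free` earlier in the same function.
    # Real analyzers model control flow and aliasing; this one only
    # compares line numbers, so it is a sketch, not a real tool.
    def check_use_after_free(tree):
        for func in ast.walk(tree):
            if not isinstance(func, ast.FunctionDef):
                continue
            # pass 1: record the line on which each name is "freed"
            freed = {}
            for node in ast.walk(func):
                if (isinstance(node, ast.Call)
                        and isinstance(node.func, ast.Name)
                        and node.func.id == "free"):
                    for arg in node.args:
                        if isinstance(arg, ast.Name):
                            freed.setdefault(arg.id, node.lineno)
            # pass 2: flag any read of a freed name on a later line
            for node in ast.walk(func):
                if (isinstance(node, ast.Name)
                        and isinstance(node.ctx, ast.Load)
                        and node.id in freed
                        and node.lineno > freed[node.id]):
                    print(f"{func.name}:{node.lineno}: use of {node.id!r} "
                          f"after free() on line {freed[node.id]}")

    source = """
    def handler(buf):
        free(buf)
        return buf.size   # classic use-after-free shape
    """
    check_use_after_free(ast.parse(source))

Building checks like this that scale to the kernel's millions of lines of C, with a false-positive rate reviewers will tolerate, is exactly the hard research and development work the students skipped.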
Yeah. I'm not trying to come down hard on you or anything; I just feel like a common reaction to this research is, "ethics aside, didn't they point out a real vulnerability?" And I want to make it crystal clear that, no, they didn't. Their research was entirely without value.
If you put ethics aside, then yes, there was value. Failed research is most valuable, while successful research is usually quite worthless due to the bias toward finding whatever researchers already believe.
I do wonder if they would have published had they not expected disclosure by angry Linux maintainers, i.e. if they really believed they were weakly successful. I think this is the type of finding that normally gets lost when there's no pre-disclosure.
> If you put ethics aside, then yes, there was value. Failed research is most valuable, while successful research is usually quite worthless due to the bias toward finding whatever researchers already believe.
True as far as it goes, but the most valuable failing research is that which fails to produce an expected positive result.
Next most valuable is failing to corroborate a novel hypothesis (which is probably what you meant).
When you get an expected result, you haven't learned much, regardless of whether you were (or should have been) expecting success OR failure (with the exception being getting more, or more accurate, data that narrows error bars).
The ideas in the paper were perhaps novel (I didn't check), but there was far more to learn from looking at past reviews where bugs did slip by than from running their experiment. You could probably calculate the bug acceptance rate by category of bug; manufacturing a few random data points does not help science here.
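For what that mining could look like, a minimal sketch, assuming the historical reviews have already been labeled by hand (the file name and column names here are made up):

    import csv
    from collections import defaultdict

    # Sketch of the alternative study: given patches later found to
    # contain a bug, labeled by bug category and by whether they passed
    # initial review, compute the acceptance rate per category.
    # "reviews.csv" and its columns are hypothetical.
    def acceptance_rate_by_category(path):
        accepted = defaultdict(int)
        total = defaultdict(int)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):  # columns: category, accepted
                cat = row["category"]          # e.g. "use-after-free"
                total[cat] += 1
                if row["accepted"] == "yes":   # slipped past initial review
                    accepted[cat] += 1
        return {cat: accepted[cat] / total[cat] for cat in total}

    for cat, rate in sorted(acceptance_rate_by_category("reviews.csv").items()):
        print(f"{cat:>20}: {rate:.0%} slipped through initial review")

That gives you rates over the full history of real reviews instead of a handful of planted data points.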
I think the prevailing belief, and the one they must have held to even attempt this research, was that a good percentage of attempts would slip through initial review, only to be caught at a later stage. I fail to see how your alternative research would do anything to that apparently false belief other than reinforce it.