One thing that strikes me (as an independent bystander who isn't doing any of the work!) is that it often feels like, in order to 'win', people are expected to find a full chain that gives them remote access, rather than just finding one issue and getting it fixed / getting paid for it.
It feels to me like finding a single hole should be sufficient -- one memory corruption, one sandbox escape. Maybe at the moment there are just too many little issues, so you need a full end-to-end hack to really convince people to take you seriously, or to pay out bounties?
There are many wannabe security researchers who find issues that are definitely not exploitable, and then demand CVE numbers and other forms of recognition, or even a bounty. For example, an app might crash when fed malformed input, but the nature of the app is that its input is trusted: it's never intended to be, and realistically never will be, exposed to an adversary. In most people's eyes these are simply bugs, not security bugs, and while they're nice to fix, they aren't on the same level. It's not very difficult to find one of these!
So there is a need to differentiate between "real" security bugs [like this one] and non-security-impacting bugs, and demonstrating how an issue is exploitable is therefore very important.
I don't see the need to demonstrate this going away any time soon, because there will always be no end of non-security-impacting bugs.
So many "Security Researchers" are just throwing ZAP at websites and dumping the result into the security@ mail, because there might be minor security improvements by setting yet another obscure browser security header for cases that might not even be applicable.
Or there is no real consideration of whether something is actually an escalation. Like, "Oh, if I can change these Postgres configuration parameters, I can cause a problem", or "Oh, if I can change values in this file, I can cause huge trouble". Except modifying that file or that config parameter requires root/superuser access, so there is no escalation, because you already have full access anyhow.
I probably wouldn't have to look at the documentation much to get Postgres to load arbitrary code from disk if I already have superuser access to it. Some COPY into some preload library, some COPY / ALTER SYSTEM, some query to crash the node, and off we probably go.
But yeah, I'm frustrated that we were forced to route our security@ address through support to filter out this nonsense. I wouldn't be surprised if we miss some genuinely important issue that isn't demonstrated like this, but handling it any other way costs too much time.
> There are many wannabe security researchers who find issues that are definitely not exploitable, and then demand CVE numbers and other forms of recognition or even a bounty
I believe this has happened to curl several times recently.
This shows kind of a naive view of security. Many soft targets are believed to be safe from adversaries but absolutely are not, and get used in escalation chains or for access to data.
Hospitals often try to make this argument about insecure MySQL connections inside their network, for example. Then something like Heartbleed happens and, lo and behold, all the "never sees an adversary" soft targets get exfiltrated.
> Maybe at the moment there are just too many little issues, that you need a full end-to-end hack to really convince people to take you seriously, or pay out bounties?
Let me give you a different perspective.
Imagine I make a serialisation/deserialisation library which would be vulnerable if you fed it untrusted data. This is by design: users can serialise and deserialise anything, including lambda functions. My library is only intended for processing data from trusted sources.
To my knowledge, nobody uses my library to process data from untrusted sources. One popular library does use mine to load configuration files; they consider those a trusted data source. And it's not my job to police other people's use of my library anyway.
Is it correct to file a CVE of the highest priority against my project, saying my code has a Remote Code Execution vulnerability?
I think that if the documented interface of your library is "trusted data only", then one shouldn't even file a bug report against your library if somebody passes it untrusted data.
However, if you (or anybody else) catch a program passing untrusted data to any library that says "trusted data only", that's definitely CVE-worthy in my books, even if you cannot demonstrate a full attack chain. But that CVE should be targeted at the program that passes untrusted data to the trusted interface.
That said, if you're looking for a bounty instead of just some publicity as a reward for publishing the vulnerability, you must fulfil the requirements of the bounty, and those typically say that the bounty will be paid for a complete attack chain only.
I guess that's because companies paying bounties are typically interested in real-world attacks and are not willing to pay for theoretical vulnerabilities.
I think this is problematic because it pushes bounty hunters to keep theoretical vulnerabilities secret and wait for some future combination of new code that can be used to attack the currently-theoretical vulnerability.
I would argue that it's much better to fix issues while they are still only theoretical. Maybe pay a lesser bounty for theoretical vulnerabilities, and a reduced payment for a full attack chain if it's based on a publicly known theoretical vulnerability. Just make sure that the combination pays at least as well as publishing a full attack chain for a 0day vulnerability. That way there would be an incentive to publish theoretical vulnerabilities immediately for maximum pay, because otherwise somebody else might catch the theoretical part and publish faster than you can.
> Imagine I make a serialisation/deserialisation library which would be vulnerable if you fed it untrusted data
No need to imagine: PyYAML has exactly that situation. There have been attempts to make safe deserialization the default, including an attempt to release a new major version (rolled back), and it settled on requiring an argument saying which mode / loader to use. See: https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=PyYAML
That sounds... familiar. Are you perchance the maintainer of SnakeYAML?
Yes, it is correct to file a CVE of the highest priority against your project, because "only intended for processing data from trusted sources" is a frankly ridiculous policy for a serialization/deserialization library.
If it's your toy project that you never expected anyone to use anyway, you don't care about CVEs. If you want to be taken seriously, you cannot play pass-the-blame and ignore the fact that your policy turns the entire project into a security footgun.
> "only intended for processing data from trusted sources" is a frankly ridiculous policy for a serialization/deserialization library.
Truly, it's a design decision so ridiculous nobody else has made it. Except Python's pickle, Java's serialization, Ruby's Marshal and PHP's unserialize of course. But other than that, nobody!
I know "code is data", but it's a couple orders of magnitude more reasonable to have unsafe bytecode than to have unsafe data deserialization.
If something is supposed to load arbitrary code, not just data, that needs to be super clear at a glance. If it comes across as a data library, but allows takeover, you have a problem. Especially if there isn't a similar data-only function/library.
Having been on the reporting side: "an exploitable vulnerability" and "a security weakness which could eventually result in an exploitable vulnerability" are two very different things. Bounties always get paid for the first category. Reports falling in the second category might even cause reputation/signal damage for lack of a proof of concept / demonstrated exploitability.
There are almost always various weaknesses which do not become exploitable until and unless certain conditions are met. This also becomes evident in contests like Pwn2Own, where multiple vulnerabilities, sometimes unpatched for years, are chained to eventually take the device over. Researchers often sit on such weaknesses for a long time to eventually maximise the impact.