I understand the frustration, and I'm pretty sure the root cause is straightforward ("number of CVEs generated" is a figure of merit in several places in the security field, especially resumes, even though it is a stupid metric).
But the problem, I think, contains its own solution. The purpose of CVEs is to ensure that we're talking about the same vulnerability when we discuss a vulnerability; to canonicalize well-known vulnerabilities. It's not to create a reliable feed of all vulnerabilities, and certainly not to serve as an awards system for soi-disant vulnerability researchers.
If we stopped asking so much from CVEs, stopped paying attention to resume and product claims of CVEs generated (or detected, or scanned for, or whatever), and stopped trying to build services that monitor CVEs, we might see a lot less bogus data. And, either way, the bogus data would probably matter less.
This sounds similar to problems with peer review in academia. It mostly works fine as a guardrail to enforce scholarly norms.
However, many institutions want to outsource responsibility for their own high-stakes decisions to the peer review system, whether that's citing peer-reviewed articles to justify policy or counting publications to make big hiring decisions.
This introduces very strong incentives to game the system: getting any paper published in a decent venue becomes very high-stakes, and peer review just isn't meant for that. It can't really be made robust enough.
I don't know what the solution is in situations like this, other than what you propose: get the outside entities to take responsibility for making their own judgments. But that's more expensive and risky for them, so why would they do it?
It feels kind of like a public goods problem, but I don't know what kind exactly. The problem isn't that people are overusing a public good, but that just by using it at all they introduce distorting incentives which ruin it.
My basic take is: if "CVE stuffing" bothers you, really the only available solution is to stop being bothered by it, because the incentives don't exist to prevent it. People submitting bogus or marginal CVEs are going to keep doing that, and CNAs aren't staffed and funded to serve as the world's vulnerability arbiters, and even if they were, people competent to serve in that role have better things to do.
The problem is the misconception ordinary users have about what CVEs are; the abuses are just a symptom.
I suspect for both peer review and CVEs, and probably some similar situations I'm not thinking of, it's not just a misconception, it's often more like wishful thinking.
People really want there to be a way of telling what's good and important that doesn't cost them any money or effort. Ironically, these systems can sort of work for that purpose, but only if people don't try to use them for that purpose.
(Don't get me started on CVSS.)