Hacker News

I'm a command-line development tools maintainer for an OS. I am not unfamiliar with high-level CVEs in my inbox with the likes of "gdb crashes on a handcrafted core file causing a DoS". I am unfamiliar with a real world in which a simple old-fashioned segfault in a crash analysis tool is truly a denial of service security vulnerability, but our security department assures us we need to drop all revenue work and rush out a fix because our customers may already be aware that our product is shipping with a known CVE.

There are occasions in which I recognize a CVE as a vulnerability to a legitimate possible threat to an asset. By and large, however, they seem to be marketing material for either organizations offering "protection" or academics seeking publication.

I think like anything else of value, inflation will eat away at the CVE system until something newer and once again effective will come along.



Ah yes, this also fits with the famous "no insecure algorithms" rule, in which an auditor will check a box if you use MD5, even for a feature totally unrelated to security.


In fairness, those sorts of features tend to be subject to scope creep where they start being used for security.

For instance, Linus Torvalds (a very smart person) resisted using something stronger than SHA-1 for Git because he said the purpose of hashes isn't security, it's content-addressable lookup of objects. Which may have been true at the time, but then Git added commit signing. Now if you sign a commit, no matter how strong of an algorithm you use, the commit object references a tree of files via SHA-1. Git is currently undergoing an extremely annoying migration to support new hash algorithms, which could have been avoided.
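For context, Git's content addressing is just a SHA-1 over a small typed header plus the object bytes. A minimal Python sketch of how a blob's object ID is derived (the real object store also zlib-compresses and writes the object, which is omitted here):

```python
import hashlib

def git_blob_oid(data: bytes) -> str:
    # Git hashes "blob <size>\0" followed by the raw contents.
    header = b"blob %d\x00" % len(data)
    return hashlib.sha1(header + data).hexdigest()

# Matches `git hash-object` for the same contents.
print(git_blob_oid(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a
```

Every commit, tree, and blob reference in the repository is one of these SHA-1 IDs, which is why a signed commit is only as strong as the hash underneath it.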

Also, BLAKE3 is faster than MD5 and also far more secure, so if you're saying "It's okay I'm using MD5 because I want a faster hash and SHA-256 is too slow," there are options other than SHA-256.

If the thing you're trying to hash really really isn't cryptographic at all, you can do a lot better than MD5 in terms of performance by using something like xxHash or MurmurHash.

So, even if it isn't a security vulnerability, using MD5 in a new design today (i.e., where there's no requirement for compatibility with an old system that specified MD5) is a design flaw.
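As a concrete illustration of the last point, using Python: the stdlib alone already gives you a cheaper non-cryptographic checksum than MD5 (xxHash and MurmurHash are third-party packages there), so MD5 loses on both the security axis and the speed axis.

```python
import hashlib
import zlib

data = b"payload " * 10_000

# MD5: broken as a cryptographic hash, and not even the fastest option.
md5_digest = hashlib.md5(data).hexdigest()

# For purely non-cryptographic uses (cache keys, dedup, guarding against
# accidental corruption), a CRC is much cheaper; third-party
# xxHash/MurmurHash implementations are faster still.
crc = zlib.crc32(data)

print(md5_digest[:12], f"{crc:08x}")
```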


> Also, BLAKE3 is faster than MD5 and also far more secure, so if you're saying "It's okay I'm using MD5 because I want a faster hash and SHA-256 is too slow," there are options other than SHA-256.

True, but BLAKE3 isn't shipped as part of the standard library of many (any?) languages, whereas MD5 is. There are third-party implementations for a lot of languages, but using one of these brings up a lot of problems:

1. Are you sure the implementation doesn't have any bugs? AFAIK, the BLAKE3 team has only created C and Rust implementations, so anything else likely hasn't received the same level of care.

2. How are you going to be notified of bugs or vulnerabilities in the implementation? For your language's standard library, it's usually easy to get notified of any bugs or vulnerabilities, but you're probably not going to get that from some random implementation on GitHub.

3. Pulling in the dependency can be an attack vector in itself. For example, if you use the JavaScript implementation on NPM, you're now going to have to worry about the NPM author having their account compromised and the package replaced with malicious code.
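One middle ground worth noting, using Python as the example: hashlib has shipped BLAKE2 since 3.6. It's not BLAKE3, but it's a close relative, still faster and far stronger than MD5, and being in the stdlib sidesteps all three problems above.

```python
import hashlib

# BLAKE2 ships in Python's stdlib, so points 1-3 above don't apply:
# it's maintained and vetted with the rest of hashlib, with no
# third-party dependency to audit.
digest = hashlib.blake2b(b"some data", digest_size=32).hexdigest()
print(digest)
```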


That's fair, I should have added that as an exception too. Another similar case: you're writing a shell script and you can assume the target machines all have md5sum installed but not necessarily b3sum.


Our security team at a previous employer added a systemwide checker to our GitHub Enterprise installation that would spam comments on any change to a file in which Math.random is used. The idea is that anyone using random numbers must be implementing a cryptographic protocol and therefore should not be using Math.random, as it's not a CSPRNG.

So all the AB tests, percentage rollouts etc. started getting spam PR comments until they were made to turn it back off again.

Frankly, if a teammate was writing their own crypto algorithm implementation in the bog-standard web app we were working on, that would be more concerning than which RNG they're using.
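The distinction the checker failed to make, sketched in Python (the original was JavaScript's Math.random, but the principle is identical): an ordinary PRNG is fine for rollout bucketing, and a CSPRNG only matters where an attacker predicting the output is part of the threat model.

```python
import random
import secrets

# Non-security use: percentage rollouts / A/B bucketing.
# Predictability doesn't matter here, so a plain PRNG is fine.
def in_rollout(percentage: float) -> bool:
    return random.random() * 100 < percentage

# Security-sensitive use: tokens an attacker must not be able to guess.
# This is the only case where "not a CSPRNG" is a real finding.
session_token = secrets.token_urlsafe(32)

print(in_rollout(25.0), len(session_token) > 0)
```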


I've seen exactly this many times in audits (gets them a high score!). If they flag it without checking the usage, I know they didn't bother putting anyone good on the audit, or only ran automated tools, and it's pretty much useless. The same can now be said for SHA-1: it gets them results quickly and looks good on the final report.


Related, Apple marks any use of MD5 with a warning if you use their SDKs. Good luck getting rid of it if you’re using Swift, because the community has not yet decided whether silencing warnings is something they would like in the language or not. I’m getting kind of sick of using dlsym to fish out the function pointer :(


There’s the “executive” level of this stupidity, where an app replaces its MD5 OpenSSL calls with its own internal copy-pasta of the function.

Look ma! We’re FIPS compliant now!


Unfortunately, that happens because most regulations try to enforce a black-and-white rulebook, which is easy on the auditors but extremely difficult for those being audited.

I now think most compliance regulations are by auditors, for auditors... :-D :-D


TBH, if you are doing security with it, it's obviously wrong, but if you are not, it's still wrong, because far better (faster) options exist for non-security usage...


[flagged]


It's sad, however, when a highly non-exploitable crash is treated as a five-alarm fire while a "silently corrupts user data" bug falls by the wayside, because people don't generally write security vulnerability reports for those.

I've heard from some people that they have considered filing security CVEs against non-security but high user impact bugs in software that they're working on, just to regain control of priorities from CVE spammers.


Agree, but having to make these judgement calls at all is a mistake. We need to get to 'just fix it'.


I don't quite get what you mean.

There's finite time and developer effort. You always have to make judgement calls about what to prioritize over what, you can't "just fix it" for literally everything unless you're in a very fortunate position of working in a codebase with minimal tech debt, a mature scope, and sufficient developer-hours.

If you're saying that the CVEs that amount to "update dependency X" or whatever should be "just automatically fixed" rather than have to be prioritized, I agree that should be true for a subset of them... But not every dependency update or CVE resolution is trivial, and even the supposedly trivial ones may still require a certain amount of testing or refactoring.


If the codebase is sufficiently complex, irrespective of how mature and tech-debt-free it is, certain dependency upgrades are simply non-trivial (this includes the testing effort as well as the actual upgrade effort). Like they say, "there are no small changes in a big system".

So resolving certain CVEs is simply a delicate balance between the actual damage potential and the amount of effort.


A non-exploitable crash can be a denial of service, given the right configuration. E.g., filling up disk space with core files, or crashing at the right time to force expensive operations to retry or roll back.


This is exactly the attitude we’re talking about. OK, if you do a bunch of things, maybe it could throw a disk-usage warning email your way. But a service that is actually crashing right now is obviously quite a bit more important.


No, this is exactly the incorrect definition of the problem that vendors keep talking about.

Crashing a single user's thread on a web service is the definition of useless; it's not annoying anyone but the attacker's session.


"Never fix it" is one extreme.

"Drop all revenue work and rush out a fix" is another.

The previous poster didn't say it should never get fixed, but rather that there's some nuance to be had in these things, and that fixing it in e.g. the next release is usually just fine too.


No disagreement here. What is dangerous for me is the idea that difficulty upgrading for security fixes does not predict the same difficulty for other fixes. It's not that security bugs are uniquely hard to patch, it's that dependency management on the whole is neglected and security gets the blame.

Those crusty old dependencies and the processes around them are an operational risk, we should be lowering the bar to just patching rather than picking and choosing.


You are assuming that this is about dependencies. OP's example is explicitly "gdb crashes when opening a malformed core dump and can be used for DoS". If you were working on GDB and got this bug report, would you consider it a fire to be put out immediately? Or would it be a low-impact bug to be looked at when someone gets some free time?

The OP is complaining that, if there is a CVE associated for whatever stupid reason, the bug suddenly jumps from a "might fix" to "hot potato".


That's fair


Who is talking about "crusty old dependencies"? Or processes which are an "operational risk"? The previous poster never mentioned any of those things.


They get old and crusty when you have to choose not to patch, or deprioritize those not-so-serious bugs, because the operational cost is too high.

Developers shouldn't have to make this call, the cost should be zero.


I think you're making all sorts of assumptions and extrapolations here that I'm not really seeing any hints of. What I see is that someone is responsible for dealing with CVEs, judges its severity as they come in, and concludes that a lot of them are just cruft and not really worthy of a CVE as such. Nothing more, nothing less.


I see your point


> really a reflection of awful development practices.

You don't know a thing about GP's development practices so perhaps you should be a bit slower to hurl accusations.


It will probably be less effort to patch (increment the version number) a nonexistent vulnerability than to explain it to every customer that comes in with a report from a third-party auditor.

CVEs for non-vulnerabilities are like corporate trolling.



