At a previous workplace we had a few places in the code which used the word backdoor. It was not an actual backdoor, but merely a debugging server that could be enabled to let you inspect internal state at runtime. At some point I removed the word backdoor, fearing it would reach a customer or that someone would misunderstand it during an audit. :|
Once I got a complaint from a security auditor that some code was using MD5. It wasn’t being used for any security purpose, just to check whether an autogenerated file had been manually edited. We decided it was easier to do what they wanted than to argue with them, so we replaced it with CRC32C. That was, if anything, faster than MD5, not that anybody cares about saving a few milliseconds off reading a configuration file at startup. It made the manual-edit check somewhat less reliable, but probably not by much in practice. But the security auditor was happy we’d stopped using MD5.
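For the curious, the check is conceptually just "recompute a digest of the file and compare it against the one recorded when the file was generated". A rough Python sketch of the idea (the function names and the stored-digest convention are mine, not the actual code; note the stdlib only has plain CRC-32, a real CRC32C needs an extra library):

    import hashlib
    import zlib

    def md5_digest(path):
        # What we had originally: a plain content digest, no security purpose.
        with open(path, "rb") as f:
            return hashlib.md5(f.read()).hexdigest()

    def crc32_digest(path):
        # Roughly what we moved to. zlib only provides CRC-32; a real CRC32C
        # would come from a library such as crcmod or google-crc32c.
        with open(path, "rb") as f:
            return format(zlib.crc32(f.read()) & 0xFFFFFFFF, "08x")

    def was_manually_edited(path, digest_recorded_at_generation, digest_fn=crc32_digest):
        # The generator records the digest when it writes the file; any later
        # hand edit (almost certainly) changes it.
        return digest_fn(path) != digest_recorded_at_generation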
You don’t actually need to listen to auditors. People like you (who can’t be bothered to argue because it’s apparently too hard) are the reason that smartass is still selling their services.
You either have way more grit at arguing than most people or you haven't worked at a large and cumbersome organization.
I know most people at those kinds of organizations just don't have the grit to fight every one of those battles all over again, and choose to do the things they can affect with reasonable effort instead.
I'm not saying that grit would be a bad thing to have. I appreciate the people who do it. But you really can't know what kinds of situations the parent commenter was in, and sometimes you can't really expect everyone to want to fight it.
Sometimes the point isn't technical, but social. So MD5 isn't used for security purposes right now. At some point someone will want some hashing function, and they'll probably look at what the code already uses. The last thing you want is someone a bit clueless going "it was good enough there, it's good enough here" and using MD5 where they shouldn't. Removing it from a codebase helps with that problem.
The problem here is that people assume they know every possible reason why the auditor might ask for something, when they don't. If the auditor is asking for it, and it costs almost nothing to do, maybe just do it instead of wasting everyone's time by acting like you know everything about the subject, and everyone will probably go home happier at the end of the day.
Isn't that what code review is for? To me that sounds like arguing against string formatting because someone could think it's ok for SQL queries.
An auditor's job doesn't end at saying what things should be changed, it should include why as well (granted, we don't know the full content of the auditor's report here, maybe they did say why).
The reason why CRC32C was chosen as a replacement instead of SHA-2 or whatever: what happens if in a few more years SHA-2 isn’t considered secure any more and some future security audit demands it be changed again? Whereas, a CRC algorithm isn’t usually used for security purposes, so a security audit is far less likely to pay any attention to it. The whole issue started because a security-related technology was used for a non-security purpose.
> what happens if in a few more years, SHA-2 isn’t considered secure any more and some future security audit demands it be changed again
Then change it again? If you use the most recent available NIST standard it should hopefully be a very long time before meaningful (let alone practical) attacks materialize (if ever). If you end up needing to worry about that in a security audit, consider it a badge of success that your software is still in active use after so many years.
Using an insecure hashing algorithm without a clear and direct need is a bad idea. It introduces the potential for future security problems if the function or resultant hash value is ever used in some unforeseen way by someone who doesn't know better or doesn't think to check. Unless the efficiency gains are truly warranted (e.g. a hash map implementation, high-throughput integrity checking, etc.) it's just not worth it.
> a security-related technology was used for a non-security purpose
I would suggest treating all integrity checks as security-related by default since they have a tendency to end up being used that way. (Plus crypto libraries are readily available, free, well tested, generally prioritize stability, and are often highly optimized for the intended domain. Why would you want to avoid such code?)
Ahh poop, looks like I was out of date. Apparently a practical attack with complexity ~2^60 was recently demonstrated against legacy GPG (the v1.4 defaults) for less than $50k USD. [1] That being said, it looks like it still required ~2 months and ~900 GPUs, versus MD5 at 2^18 (less than a second on a single commodity desktop processor).
So yeah, I agree, add SHA-1 to the list of algorithms to reflexively avoid for any and all purposes unless you have a _really_ good reason to use it.
The reason they ask is that they have to fill in a checkbox that says "no MD5", and of course they don't know that CRC32 is worse.
And to be very fair, a lot of security issues would be caught with basic checkbox ticking. Are you using a salted password hashing function instead of storing passwords in plaintext? Are you using a firewall? Do you follow the principles of least privilege?
Because most times people aren't "just right", they're just unwilling to widen their point of view, and/or they turn the issue into a way to assert their own importance and intellect over someone else at the expense of those they work with.
I don't need some coworker getting into some drawn-out battle about how MD5 is fine to use when we can just use SHA (or CRC32C as that person did, which is more obviously non-useful for security contexts) and be done in 30 minutes. The auditor is there to do their job, and if what they request is not extremely invasive or problematic for the project, implementing those suggestions is your job, and arguing over pointless things in your job is not something I want in a coworker or someone I manage.
> they turn the issue into a way to assert their own importance and intellect over someone else at the expense of those they work with.
This is exactly what the auditor is doing.
How can you not see the irony here?
> I don't need some coworker getting into some drawn out battle
This isn't a drawn-out battle. This is a really fast one: md5 is fine here, you didn't check the context of its use, that's fine, what's the next item on your list?
What's fucking hard about that?
Is this some kind of weird cultural thing with American schooling teaching kids they can't question authority?
The auditor was asked to do it and is being paid to do it. Presumably, the people arguing are paid to implement the will of those that pay them. At some point people need to stop arguing and do what they're paid to do or quit. Doing this over wanting to use MD5 seems a pretty poor choice of a hill to die on.
> This is a really fast one: md5 is fine here, you didn't check the context of its use, that's fine, what's the next item on your list?
There are items like this all throughout life. Sure, you can be trusted to drive above the speed limit on this road, and maybe the speed limit is set a little low. But we have laws for a reason, and at some point you letting the officials know that the limit is too low and they really don't need to make it that low goes from helpful to annoying everyone around you.
> Whats fucking hard about that?
Indeed, what is so hard about just accepting that while you're technically correct that MD5 isn't a problem, you're making yourself a problem when you fight stupid battles nobody but you cares about, but everyone has to deal with?
> Is this some kind of weird cultural thing with American schooling teaching kids they can't question authority?
Hardly. Pompous blowhards exist in every culture. Also, that's hilarious. You're talking about a culture that rebels against authority just because they think that's what they're supposed to do, even if it's for stupid reasons and makes no sense. See the tens of millions of us that refuse to wear masks because it "infringes on our freedom".
I'm paid to tell idiots where to go. My boss doesn't pay me 6 figures to toe the line and fill in boxes. She pays me to use my judgement to move the company forward. I'm not wasting my time and her money on this sort of garbage, and if they can't see the difference between casual use and secure use then we need to rethink our relationship with this company or they need to send us someone new.
> You're talking about a culture that rebels against authority
You just used the line "do what you're told or quit".
I've very specifically couched all my recommendations for cases where it's trivial to do. Arguing about this with someone, when doing it may have some benefits and really only costs a few minutes, is definitely wasting her time and money.
> You just used the line "do what you're told or quit".
I noted what I wished people would do in very specific cases where they're wasting way too much time and effort to win a stupid argument rather than make a small change of dubious, but possibly not zero, positive security impact.
I don't see anything weird about acknowledging some of the extreme traits of the culture I live in while also wishing they would change, at least in specific cases where I think they do more harm than good.
Honestly, I'm confused why you would even make some cognitive leap that since I live in an area with a specific culture I must act in the manner I described that culture, especially when I did it in a denigrating way. I guess you think all Americans must be the same? That doesn't seem a useful way to interact with people.
As a technical choice, that's true. So the argument shouldn't be hard to win, assuming you're dealing with reasonable people, who are also answering to reasonable people. Those people (e.g. the leadership) also need to care enough about that detail to just not dismiss your argument because making the change is not a problem for them. And they need to not be so security-oriented (in a naive way) as to consider a "safer" choice always a better one regardless of whether there's a reasonable argument for it or not.
That's more assumptions than it is sometimes reasonable to make.
"You don’t actually need to listen to auditors" is decidedly not true for a lot of people in a lot of situations, and arguing even for technically valid or reasonable things is an endurance sport in some organizations.
I mean, I even kind of want to agree with heavenlyblue's argument that you should fight that fight for the exact reason they're saying, and can see myself arguing the same thing years ago, but at least in case of some organizations, blaming people for taking skissane's stance would be disproportionate.
Oh sorry, I thought we were discussing working with rational people.
If you're working with irrational people you're going to have to do irrational things, but that's kind of a given isn't it? We don't really need to discuss that.
Not hard to win if everyone is being reasonable. Given an auditor that thinks all uses of MD5 are proscribed, what would you put the odds of them being reasonable at?
ETA: per 'kbenson it's not hard to conceive of a situation where proscribing MD5 is reasonable. Taking 'skissane's account at face value is probably reasonable, but my implicit assumption that the auditor would not explain if pressed isn't being charitable.
For 10 years now, I have refused to acknowledge the finding of the consulting company which flags the password scheme I use (passphrases), because the norm they use (a national one) talks about caps, symbols, etc.
I refuse to sign off, and note that our company is a scientific one and that, unlike the auditors, we understand the math taught to 16-year-old children.
This goes to the board, who get back to me; I still refuse on ethical grounds, and we finally pass.
It is sad that some auditors are stupid while others are fantastic, and that it depends on which one you get assigned.
Sometimes customers demand security audits as part of sales contracts. If it is a high enough value deal, the company may decide it is in their business best interest to say yes. In that scenario, not listening to the security auditor is not a viable option. You need to keep them onside to keep the customer onside.
Similarly, sometimes in order to sell products to government agencies you need to get security audits done. In that scenario, you have to listen to the security auditor and keep them onside, because if you don't keep them happy your ability to sell the product to the government is impeded.
I have a feeling that these auditor people just make up bullshit when they can't find something real. The last few we have had came up with total non-issues marked as severe because they are easy to "exploit".
Meanwhile I have been finding and fixing real security issues regularly. To be fair it would be extremely difficult for an external person to find issues in the limited time they have so the audit comes down to someone running through a list of premade checks to see if they find anything.
One thing I learned when I worked in internal IT security when dealing with auditors was that they will boil the ocean to find an issue, so never be perfect and leave a few relatively easy but not obvious to spot issues for them to write up that don't actually affect the security of your environment. If you don't leave them this bait, they will spend weeks to find a trivial issue (like using MD5 to check for config file changes vs password hashing) and turn it into a massive issue they won't budge on.
The other issue is that if you make it seem too easy to answer their questions or provide reports, they will only ask more questions or demand more reports so even if its just dumping a list of users into a CSV file for them to review, make it seem like way more effort than it actually is otherwise you might find you've been forced into a massive amount of busy work while they continue to boil the ocean.
Smart auditors ask for all items at the beginning of the audit.
Smart IT people give them all items at the end of the audit.
Auditors have only a limited time budget. The later they get answers, the less time for them is left for follow-up questions.
3D chess! I agree sometimes it feels as if the security review questions are just set-ups for follow-ups that they didn’t include in the initial form (for whatever reason)
I've had audits like that, many are just for CYA and I'm often the dev patching obscure (or not so obscure) security issues.
Honestly, I'm quite happy to have an auditor nitpick a few non-issues if the alternative is risking releasing an app that has a basic sql injection attack that wiggled past code review due to code complexity.
I've also had an external audit that found an unreported security issue in a new part of a widely used framework, so there are auditors out there that do a good job of finding legitimate things.
Some years ago I worked at $BIGBANK, and an auditor from $GOVERNMENT told us to change the street name property from a text field to a dropdown (for all countries) to help them with fraud detection, and to remove all diacritic characters from client names because their new software didn't like them.
I told my manager that they were idiots and I wouldn't listen to them; he was like 'OK, as I expected', never did anything about it, and the next auditors didn't mention it.
This makes me wonder about the reliability of address verification technology.
There are plenty of addresses where the official version in databases is slightly off from what people actually write on their mail. If I got a credit card transaction with the "official" version, that would be a significant fraud signal, that they were sourcing bogus data from somewhere.
So much this. My company just got done shelling out a ton of money for some asshat to tell me that we can't use http on a dev server. <head smashes through desk>
I actually think that's valid. Sure, http on a dev machine isn't a security risk. But there is a tail risk that it ends up somewhere on a system that sends data between machines. Also, using http on dev and https on prod can lead to unexpected bugs. Banning http is not unreasonable.
Same with the md5 complaint. That use of md5 wasn't a problem but there's a perfectly fine alternative and if you can ensure by automated tests that md5 is used nowhere, you also can guarantee that it's never used in a security relevant context.
> and if you can ensure by automated tests that md5 is used nowhere
You can automatically check for the string "md5" in identifiers, but you can't reliably automatically check for implementations of the MD5 algorithm. All it takes is for someone to copy-paste an implementation of MD5 and rename it to "MyChecksumAlgorithm" and suddenly very few (if any) security scanning tools are going to be smart enough to find it.
(Foolproof detection of what algorithms a program contains is equivalent to the halting problem and hence undecidable, although as with every other undecidable problem, there can exist fallible algorithms capable of solving some instances but not others.)
It's worse when the asshat convinces your manager that every internal site, whether dev or not, needs https. Certs everywhere. Our team spends a decent % of our time generating and managing certs...
Are you talking about a fully internal site, with not even indirect Internet access? For those kinds of airgapped applications, you should maintain your own CA infrastructure, and update all clients/browsers to trust its certificates.
For the more common scenario of internal sites/services which are not accessible from the public Internet, but not fully isolated from it either:
You don't need the internal site exposed to the Internet. If you use DNS-01 ACME challenge, you just need to be able to inject TXT records into your DNS. Some DNS providers have a REST API which can make this easier.
Another option – to use HTTP-01 ACME challenge, you do need the internal host name to be publicly accessible over HTTP, but that doesn't mean the real internal service has to be. You could simply have your load balancer/DNS set up so external traffic to STAR.internal.example.com:80 gets sent to certservice.example.com, which serves up the HTTP-01 challenge for that name. Whereas internal users going to STAR.internal.example.com talk to the real internal service. (There are various ways to implement this – split horizon DNS, some places have separate external and internal load balancers that can be configured differently, etc.)
Yet another option is to use ACME with wildcard certs (which needs DNS-01 challenge). Get a cert via ACME for STAR.internal.example.com and then all internal services use that. That is potentially less secure, in that lots of internal services may all end up using the same private key. One approach is that the public wildcard cert is on a load balancer, and then that load balancer talks to internal services – end-to-end TLS can be provided by an internal CA, and you have to put the internal CA cert in the trust store of your various components, but at least you don't have the added hassle of having to put it in your internal users' browser/OS trust stores.
(In above, for STAR read an asterisk – HN wants to interpret asterisks as formatting and I don't know how to escape them.)
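To make the DNS-01 option above a bit more concrete, here is a rough Python sketch of the "inject a TXT record via the provider's REST API" step. The endpoint, payload shape, and auth header are hypothetical placeholders (every provider's API differs), and in practice an ACME client plugin (certbot, acme.sh, lego, etc.) normally handles this for you:

    import requests

    def publish_acme_challenge(zone, fqdn, challenge_value, api_token):
        # DNS-01 works by publishing the value your ACME client computes at
        # _acme-challenge.<fqdn> as a TXT record. The URL below is a made-up
        # provider endpoint, purely for illustration.
        resp = requests.post(
            f"https://api.dns-provider.example/v1/zones/{zone}/records",
            headers={"Authorization": f"Bearer {api_token}"},
            json={
                "type": "TXT",
                "name": f"_acme-challenge.{fqdn}",
                "content": challenge_value,
                "ttl": 60,
            },
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()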
Which means if someone gets access to the internal network, they can read all traffic. And even dev systems can send confidential data. With letsencrypt and easy to generate certificates, https everywhere is very reasonable.
Even with VPN. I don't want any person on the vpn to be potentially able to read traffic between internal services. I think that would fail many audits.
It does though. There's no excuse for unencrypted traffic. Google doesn't have some VPN with squishy unencrypted traffic inside. Everything is just HTTPS. If they can do it, so can you. It's just not that hard to manage a PKI.
Does your organization disable the "Non-secure" prompt in the browser as well? If not, I'd say that it does seem like a security risk to train your users to ignore browser warnings like that.
It's not easily automated. Somehow, you have to safely get a certificate across the air gap to the internal network.
So I guess an internet-connected system grabs the certificates, then they get burned to DVD-R, then... a robot moves the DVD-R to the internal network? It's not easy. It's all much worse if the networks aren't physically adjacent. One could be behind a bunch of armed guards and interlocking doors.
An airgapped network can include its own internal CA, and all the airgapped clients can have that internal CA's certificate injected into their trust stores, and all the services on the airgapped network can automatically request certificates from the internal CA – which can even be done using the same protocol which Let's Encrypt uses, ACME, just running it over a private airgapped network instead of over the public Internet.
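As a very rough illustration of the internal-CA piece (not the ACME part), here is a minimal sketch of minting a root CA certificate with Python's cryptography package. Key size, validity, and file names are arbitrary choices of mine; a real deployment would more likely use off-the-shelf CA tooling (OpenSSL, step-ca, or similar) than a hand-rolled script:

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Generate the CA key pair and a self-signed CA certificate.
    key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Internal Root CA")])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())
    )

    # The certificate goes into every client's trust store; the key stays on
    # the (airgapped) CA host and signs leaf certs for internal services.
    with open("internal-ca.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))
    with open("internal-ca-key.pem", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),  # in reality, protect this key
        ))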
We have a ton of internal stuff, most of it doesn’t even have external DNS. We use long-lived certs signed with our own CA. We’d prefer using an automated solution with a “real” CA, but none seems to be available.
I was on the receiving end of a security audit issue. I closed the bug as won't-fix and my lead approved it, but when the team who paid the security auditor found out, they demanded I fix it. I had to argue with IT, infosec, and the auditor. Nobody really cared what I did, they just wanted to follow the rules. After a month of weekly hour-long meetings I relented and changed the code.
You're often not arguing with the auditor, you're arguing with the person who paid the security auditor in the first place, who is likely not even technical. That's a battle you will likely never win.
To add to this, often the primary goal of the person who paid the security auditor is not to actually increase security. It is to get to claim that they did their due diligence when something does happen. Any arguments with the auditor, no matter how well founded, will weaken that claim.
Depends I suppose. When your CFO tells you to fix it so you're in compliance, your opinion doesn't matter a whole lot. Never mind if it is a government auditor or their fun social counterpart the site visitor.
I once got cited for having too many off-site backups. They were all physically secure (fire proof safes or bank lock box), but the site visitor thought onsite was fine for a research program. The site visitor's home site lost all its data in a flood.
Sometimes an inexperienced auditor will show a minor finding that is a sign of a bigger issue. For example, if Windows is in FIPS mode, some MD5 functions will be disabled.
If you need to be operating in FIPS 140 mode, that may be a problem of some consequence.
And it's good! Code Reviews can't surface all issues. Independent audits should be welcomed by developers to find more bugs and potential security risks (even though I'm a bigger fan of penetration tests instead of audits).
When you're trying to keep a company of 100,000 employees secure, you can't have an approach that says "let's figure out where we need to remove MD5 and remove it." You have to set an easy to understand, consistent guideline -- "tear out the MD5" -- so that there won't be any doubt as to whether it's done, some teams won't complain that they shouldn't have to change it because some other team didn't have to change it, etc. And then every time they do a security audit the same thing will come up and cause more pointless discussion.
In isolation it looks like wasted work but in terms of organizational behavior it is actually the easiest way.
Happened to me as well. Was writing an authentication service. We thought we were paying for an actual security audit; turns out we paid for a simple word scan of our codebase. The review didn't find any of the canaries we left in the codebase, and we could never argue back with them. Big waste of money.
Huh. I'm thinking it'd be fun to write code with known issues (with varying degrees of obviousness) and hire a bunch of different "auditing companies" to see which ones pick up on them.
Publish the result for market comparison's sake.
Then again, that requires plenty of money and I can't see how to monetize that in any way.
Not only that, but MD5 still doesn't have an effective preimage attack, so it is still good enough for things like hashing passwords or checking that someone else didn't tamper with your files.
Still, when it comes to security:
- MD5 is actually too fast for hashing passwords, but there is still no better way than brute force if you want to crack salted, MD5-hashed passwords.
- Even if there is no effective preimage attack now, it is still not a good idea to use an algorithm with known weaknesses, especially if something better is available.
What MD5 is useless for is digital signature. Anyone can produce two different documents with the same MD5.
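To make the "too fast for passwords" point above concrete, here is a rough Python sketch contrasting a bare MD5 digest with a salted, deliberately slow KDF. PBKDF2 is just my example choice, not something the commenters named:

    import hashlib
    import os

    def md5_password_hash(password):
        # Fast and unsalted: cheap for an attacker to brute-force at scale.
        return hashlib.md5(password.encode()).hexdigest()

    def pbkdf2_password_hash(password, salt=None, iterations=600_000):
        # Salted and intentionally slow, which is what you want for passwords.
        salt = salt or os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return salt.hex(), iterations, digest.hex()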
Defense in depth: if you can grep the source code and not find any references to md5, then you have quickly verified that the code probably doesn't use md5.
This you can easily verify again later, you can even make a test for it :)
Even if in practice this had no impact, removing md5 usage will make it harder to accidentally introduce it in the future.
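A minimal sketch of the "make a test for it" idea, assuming a Python codebase laid out under src/; as noted elsewhere in the thread, this only catches the name, not a renamed reimplementation of the algorithm:

    import pathlib

    def find_md5_references(root="src"):
        # Collect every line in the source tree that mentions "md5".
        hits = []
        for path in pathlib.Path(root).rglob("*.py"):
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if "md5" in line.lower():
                    hits.append(f"{path}:{lineno}: {line.strip()}")
        return hits

    def test_no_md5_anywhere():
        hits = find_md5_references()
        assert not hits, "md5 references found:\n" + "\n".join(hits)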
The issue is not md5. The issue one wants to detect is weak hash functions used in cases where they're not appropriate. The fact that crc32 passed means that any obscure hash function would have passed too, even if it had been used in a context where it isn't appropriate.
All it means is that the audit is superficial and doesn't catch the error category, just famous examples within that category. That kind of superficial scanning may be worth something when unleashed on security-naive developers, or even as optional input for more experienced ones. But "hard compliance rules" and "superficial scans" combine to create a lot of busywork, which makes people less motivated to work with auditors instead of against them.
Both perspectives are somewhat correct, I feel; the requirement to remove any usage of md5 is beneficial, but the fact that crc32 passed means the audit shows the motivation was misplaced.
The resulting situation might of course not be a net benefit though :/
> But "hard compliance rules" and "superficial scans" combine to create a lot of busywork which makes people less motivated to work with auditors instead of against them.
Absolutely :)
The fact is that if you have experienced engineers a security audit is rarely able to find anything. You would basically have to do code reviews, and this is hard / expensive, and even then rarely fruitful.
So, superficial scans, hardening, checking for obvious mistakes is really all you can do.
Making hard rules is unproductive, but then again, migrating from md5 to crc32 hopefully isn't very expensive.
IMO, crc32 is a better choice for testing for changes, and has the benefit of removing any doubt that the hash has any security properties.
Next up: Replace MD5 with BASE64+ROT13. Significantly worse functionality AND performance, but sounds more secure (to a layman) and doesn't trigger the "MD5" alert...
Base64 encoding does protect somewhat against "looking over your shoulder" attacks
(Unless the person looking over your shoulder has a really good memory and can remember the Base64, or decode it in their head. Or they have a camera.)
But way more people would use md5 for password hashing than crc32. Of course someone could circumvent these tests, but the risk of someone copying an old tutorial where md5 is used for password hashing can be mitigated.
I've seen similar rigidity from security audits. Stuff like "version 10.5.2 (released last week) of this software introduced a security bug that was fixed in 11.0 (released today), we need you to update from 10.5.1 (released last week + 1 day) to 11.0 now because our audit tool says so".
It seems like a thin line between a debugging feature and a backdoor; "merely a debugging server that could be enabled and allowed you to inspect internal state during runtime" seems like a backdoor to me, doubly so if it's network-accessible. If Intel has, say, an undocumented way to trigger a debug mode that lets you read memory and bypass restrictions (ex. read kernel memory from user mode, or read SGX memory), is that not a backdoor? Or is the name based on intent?
I think the difference is whether it's something that's always enabled. You could presumably make it available or not at compile time, so the software shipped to a customer wouldn't have it, but maybe if they were having issues, you could ship them a version with the debug server with their permission.
I can agree with that, with the caveat that "enabled" has to be something that only the user can do. If it requires that the customer intentionally run a debug build, that's fine; if it can be toggled on without their knowledge, then it's a problem.
It was disabled by default, and could only be enabled using environment variables. Even when enabled, the whole thing ran in Docker and the socket was bound to loopback, so you could only connect to it from within the container.
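For illustration, the general shape was something like the sketch below (not the actual code; the variable name, port, and exposed state are made up):

    import json
    import os
    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer

    INTERNAL_STATE = {"connections": 0, "queue_depth": 0}  # stand-in for real state

    class DebugHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Dump a snapshot of internal state as JSON for whoever connects.
            body = json.dumps(INTERNAL_STATE).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    def maybe_start_debug_server():
        # Off by default; only turned on via an environment variable, and bound
        # to loopback so it is reachable only from inside the container.
        if os.environ.get("ENABLE_DEBUG_SERVER") != "1":
            return None
        server = HTTPServer(("127.0.0.1", 9999), DebugHandler)
        threading.Thread(target=server.serve_forever, daemon=True).start()
        return server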
When the intention is a debugging server, making it exposed to the world is a mistake and a security vulnerability. At that point it is effectively a backdoor, but the difference between a high level vulnerability such as this and a backdoor is developer intent.
Sure, it's simple. But you would have to be able to modify the container settings anyway. For all practical uses, and certainly in my case, you could just make it run a different image at that point. Or copy another executable into the container and run it. You're already privileged. Requiring you to be privileged to access the debug server means it's secure.
Until things around change and what was previously "a secure backdoor" becomes a "less secure backdoor". ;-)
One can read every second week about cases where some backdoor that was meant to be used "only for debugging" landed in the end product and became a security problem.
Actually I usually suspect malice when something like that is found once again, as "who the hell could be so stupid to deliver a product with a glaring backdoor". But maybe there is something to Hanlon's razor… :-D
I'm talking about the general sentiment. You can see this on every* site, HN included. The litmus test is that even pointing out something objectively true will get criticism (downvotes) rather than critical thinking. In the current atmosphere nobody asks the question when it comes to China/Russia/NK/Iran, but they will when it comes to the US, despite the known history of hacking/spying on everyone else.
*Recently a reputable tech site wrote an article introducing DJI (ostensibly a company needing no introduction) as "Chinese-made drone app in Google Play spooks security researchers". One day later the same author wrote an article "Hackers actively exploit high-severity networking vulnerabilities" when referring to Cisco and F5. The difference in approach is quite staggering especially considering that Cisco is known to have been involved, even unwittingly, in the NSA exploits leaked in the past.
This highlights the sentiment mentioned above: people ask the question only when they feel comfortable that the answer reinforces their opinion.
A manufacturer wanted to upgrade one of their equipment lines to be more modern. The developers of the original product, both hardware and software, were no longer with the company.
Since they just wanted to add some new features on top and present a better rack-based interface to the user, they decided to build a bigger box, put one of the old devices inside the box, then put a modern PC in there, and just link the two devices together with ethernet through an internal hub also connected to the backpanel port and call it a day.
The problem is, if you do an update, you need both the "front end" and the "back end" to coordinate their reboot. The vendor decided to fix this by adding a simple URL to the "backend" named: /backdoor/<product>Reboot?UUID=<fixed uuid>
Their sales team was not happy when I showed them an automated tool in a few lines of ruby that scans the network for backend devices and then just constantly reboots them.
They still sell this product today. We did not buy one.
They sold very expensive devices that were actually an off-the-shelf 1U PC with custom software (which provided the real value). The problem — and this dates it — was that the PCs had a game port¹, which gave away that this custom hardware was really just a regular consumer PC. So they had some fancy plastic panels made to clip on the front and hide the game port.
I remember early in my career I came across a Unisys “mainframe”, which was literally a Dell box with a custom bezel, clustered with a few other nodes with a Netgear switch.
Many non-IBM mainframe vendors switched to software emulation on more mainstream platforms: nowadays mainly Linux or Windows on x86, but in the past SPARC and Itanium were also common choices. What you saw may have been an instance of that. A software emulator can often run legacy mainframe applications much faster than the hardware they were originally written for did.
(With Unisys specifically, at one point they still made physical CPUs for high end models, but low end models were software emulation on x86; I’m not sure what they are doing right now.)
I don't know the details (~20 years ago), but pretty sure you hit the nail on the head. I think one of the boxes I saw was a hybrid -- Xeons with some sort of custom memory controller.
It was my first exposure to this sort of thing, and I was taken aback by the costs of this stuff, which made the Sun gear I worked with look extremely cheap :)
> I was taken aback by the costs of this stuff, which made the Sun gear I worked with look extremely cheap :)
Given the shrinking market share of mainframes, the only way for vendors to continue to make money is to increase prices on those customers who remain – which, of course, gives them greater encouragement to migrate away, but for some customers the migration costs are going to be so high that it is still cheaper to pay megabucks to the mainframe vendor than do that migration. With emulated systems like the ones you saw, the high costs are not really for the hardware, they are for the mainframe emulation software, mainframe operating system, etc, but it is all sold together as a package.
At least IBM mainframes have a big enough history of popularity, that there are a lot of tools out there (and entire consulting businesses) to assist with porting IBM mainframe applications to more mainstream platforms. For the remaining non-IBM mainframe platforms (Unisys, Bull, Fujitsu, etc), a lot less tools and skilled warm bodies are available, which I imagine could make these platforms more expensive to migrate away from than IBM's.
Even if this poster made it up, I'm certain it is also true at least once over, having remediated a near-identical problem in one of my employers' products at one point, and talked developers out of implementing it at least once at a different employer.
The older I get, the less I care if individual stories like this are true. The fact that they could be is concerning enough :) And they are educational nonetheless.
The time iDrac annoyed me the most is when I bricked a server trying to update it.
I made the terrible mistake of jumping too far between versions and the update broke iDrac and thus the server. There was no warning on Dell's website nor any when I applied the update. I only found out what happened after some googling where I found the upgrade path I should have taken.
This is just terrible quality control and software engineering.
At my previous employer our code was littered with references to a backdoor. It was a channel for tools running in guest operating systems to talk to the host hypervisor through a magic I/O port.
> I must warn you about those jokes. Firstly, they are translated from Russian and Hebrew by yours truly, which may cause them to lose some of their charm. Secondly, I'm not sure they came with that much charm to begin with, because my taste in jokes (or otherwise) can be politely characterized as "lowbrow". In particular, all 3 jokes are based on the sewer/plumber metaphor. I didn't consciously collect them based on this criterion, it just turns out that I can't think of a better metaphor for programming.
> Seems like an outdated term. Downvotes accepted.
Manhole is, indeed, an outdated term. Generally the preferred term is "Maintenance Hole". Still abbreviated MH, and people in the field use all three interchangeably (much like metric/imperial).
Source: I work with storm/sanitary/electrical maintenance holes.
The link is about the debate as it is, but I would also encourage the use of good faith in interpreting any speaker: that is, assuming a person referring to "mankind" likely means all humans without exclusion based on gender or sex, and requiring some other material evidence before presuming bias.
I also wonder what these discussions are like in languages where most nouns are gendered, e.g., in French.
No clue about French, but in German they started to use both versions at the same time, glued together in made-up "special" forms. It's like using "he/she" for every noun. This makes texts completely unreadable, and you even need browser extensions[1] to not go crazy with all that gendered BS language!
OK, I exaggerate, there are still people that don't try to be "politically correct" and still use proper language, and know that there is such a thing called "Generisches Maskulinum (English: generic masculine)"[2]. But in more "official" writings or in the media the brain dead double-forms are used up until the point you can't read such texts any more: Those double-forms (which are not correct German) cause constant knots in the head when trying to read a text that was fucked up this way.
(Sorry for the strong words but one just can't formulate it differently. As the existence of that browser extensions shows clearly I'm not alone when it comes to going mad about that rape of language. Also often whole comment sections don't discuss a topic at hand but instead most people complain about the usage of broken "gendered" pseudo-politically-correct BS language. That noun-gendering is like a disease!)
Believe it or not, we introduced a variant of bash brace expansion (except with implicit braces and dots instead of commas) in our grammar, named it “écriture inclusive”, and called it a day.
The way it kicks to the curb words that were previously neutral but happened to have the same spelling as the gendered form, and entrenches a two-gender paradigm, boggles the mind; it flies in the face of any form of inclusivity.
That and I still don’t know how to read “le.a fermi.er.ère” aloud. It’s just as ridiculous as “cédérom” because Astérix puts up a show at standing against the invader.
> In practice, grammatical gender exhibits a systematic structural bias that has made masculine forms the default for generic, non-gender-specific contexts.
many instances of this are simply an artifact of 'man' previously being an un-gendered term. but that fact is much harder to build group cohesion around than grievance.
I have learned that flat out telling people that a hill isn't worth dying on tends to cause a bunch of corpses to collect up - if you don't want a molehill covered in bodies you need to persuade them to go die somewhere else.
Yeah, agree. And I think we could agree that replying "Ew" and losing a little bit of HN karma does not constitute more than bruising.
EDIT: didn't see the "or even" there. Disagree. I think the analogy can be drawn out a bit, so I'll say that a bruise can heal pretty quick, and one would adapt better to climbing "hills" if they exercised regularly. Plus maybe smaller hills should be climbed too.
I'm really not at all interested in people explaining to me how finding mentions of back doors in technology used in millions of computers is probably OK because it may mean something else.
Given that the US security apparatus clearly values and desires these back doors and has the necessary power to coerce companies into making them, generalizing the use of "back door" as a term for debugging or whatever seems almost expected.
Even if they are for debugging "oops it's on in production!" is a great cover because none of these companies will EVER admit back doors were required by the government.
I worked at a place where IT had an admin user on every machine named "Backdoor". I opened a ticket when I noticed it, which was promptly closed explaining that it was normal.
The same place had a boot script on every computer that wrote to a network-mounted file. Everyone had read permissions to it (and probably write, but I didn't test) and the file contained user names, machine names, and date-times of every login after boot for everyone on the domain going back 5 years. I opened a ticket for that, which was never addressed.
indeed it literally was the author's suggestion to search for the word 'backdoor':
>This code, to us, appears to involve the handling of memory error detection and correction rather than a "backdoor" in the security sense. The IOH SR 17 probably refers to scratchpad register 17 in the I/O hub, part of Intel's chipsets, that is used by firmware code.
Judging from the current, in all likelihood it is the opcode that APEI (a part of ACPI) tables write to port 0xB2 in order to invoke firmware services that run in system management mode.
>merely a debugging server that could be enabled and allowed you to inspect internal state during runtime
When we talk about a CPU it's bad enough. Imagine that your program has input and output streams that most of the app data goes through, and I can attach a debugger and listen in on the data.
I would not be very happy about it and would still consider it a backdoor.