There seems to be a thin line between a debugging feature and a backdoor; "merely a debugging server that could be enabled and allowed you to inspect internal state during runtime" sounds like a backdoor to me, doubly so if it's network-accessible. If Intel has, say, an undocumented way to trigger a debug mode that lets you read memory and bypass restrictions (e.g., read kernel memory from user mode, or read SGX memory), is that not a backdoor? Or is the name based on intent?
I think the difference is whether it's something that's always enabled. You could presumably make it available or not at compile time, so the software shipped to a customer wouldn't have it, but maybe if they were having issues, you could ship them a version with the debug server with their permission.
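Something like this minimal sketch in Go, say, using a build tag (the tag name, port, and use of net/http/pprof are my own assumptions, not anything from the product; main() would live elsewhere in the package):

    //go:build debugserver

    // debug_server.go: compiled in only with `go build -tags debugserver`.
    // The normal customer build omits the tag, and with it this whole file.
    package main

    import (
        "log"
        "net/http"
        _ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
    )

    func init() {
        go func() {
            // Background debug endpoint for inspecting runtime state.
            log.Println(http.ListenAndServe("localhost:6060", nil))
        }()
    }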
I can agree with that, with the caveat that "enabled" has to be something that only the user can do. If it requires the customer to intentionally run a debug build, that's fine; if it can be toggled on without their knowledge, that's a problem.
It was disabled by default, and could only be enabled using environment variables. Even when enabled, the whole thing ran in Docker and the socket was bound to loopback, so you could only connect to it from within the container.
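Roughly this shape, as a sketch (Go again; the env var name, port, and /state endpoint are made up for illustration):

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "os"
    )

    // maybeStartDebugServer does nothing unless the operator explicitly
    // opts in via the (hypothetical) APP_DEBUG environment variable.
    func maybeStartDebugServer() {
        if os.Getenv("APP_DEBUG") != "1" {
            return
        }
        mux := http.NewServeMux()
        mux.HandleFunc("/state", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "internal state dump would go here")
        })
        go func() {
            // 127.0.0.1 is the container's own loopback, so the socket is
            // unreachable from outside the container unless someone with
            // control over the container config deliberately exposes it.
            log.Println(http.ListenAndServe("127.0.0.1:6060", mux))
        }()
    }

    func main() {
        maybeStartDebugServer()
        select {} // stand-in for the real application's work
    }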
When the intention is a debugging server, exposing it to the world is a mistake and a security vulnerability. At that point it is effectively a backdoor, but the difference between a high-severity vulnerability such as this and a backdoor is developer intent.
Sure, it's simple. But you would have to be able to modify the container settings anyway. For all practical purposes, and certainly in my case, you could just make it run a different image at that point, or copy another executable into the container and run it. You're already privileged. Requiring that privilege to reach the debug server means it's secure.
Until the environment around it changes and what was previously "a secure backdoor" becomes "a less secure backdoor". ;-)
Every other week one can read about a case where some backdoor that was meant to be used "only for debugging" landed in the shipped product and became a security problem.
Actually, I usually suspect malice when something like that is found yet again, as in "who the hell could be stupid enough to ship a product with a glaring backdoor?". But maybe there is something to Hanlon's razor… :-D
I'm talking about the general sentiment. You can see this on every* site, HN included. The litmus test is that even pointing out something objectively true will draw criticism (downvotes) rather than critical thinking. In the current atmosphere nobody asks the question when it comes to China/Russia/NK/Iran, but they will when it comes to the US, despite its known history of hacking/spying on everyone else.
*Recently a reputable tech site ran an article introducing DJI (ostensibly a company needing no introduction) as "Chinese-made drone app in Google Play spooks security researchers". One day later the same author wrote "Hackers actively exploit high-severity networking vulnerabilities", referring to Cisco and F5. The difference in approach is quite staggering, especially considering that Cisco is known to have been involved, even if unwittingly, in the NSA exploits leaked in the past.
This highlights the sentiment mentioned above: people ask the question only when they feel comfortable that the answer reinforces their opinion.