> Makes me wonder, if these guys could do it, how many Chinese industrial espionage units have?
And Russia, and Iran, and so on... It seems safe to assume someone else out there found at least one of these, got into Apple's internal network, and has been quietly doing their job, whatever it may be.
"Our proof of concept for this report was demonstrating we could read and access Apple’s internal maven repository which contained the source code for what appeared to be hundreds of different applications, iOS, and macOS."
This itself is massive. How many 0-days could emerge from something like that?!
I work at a giant, famous multi-billion-dollar company where all the internal stuff is gated behind permission requirements, training requirements, etc. It is absolutely HORRIBLE for productivity here. Every single piece of information you need to do your work is hidden behind someone's wall. I often lose entire weeks of productivity just trying to find out who owns a certain piece of information, or which permission I need to request in order to read a link. There was a time I literally had to wait a whole month because the person was on vacation and their manager didn't know how to authorize me into the system. All I needed was a binary file they provided.
Even worse: every team thinks the thing they do is absolutely the most important thing in the world, so they hide it even more. They build empires around the information they control and explicitly force you out. So instead of just reading their freaking source code or documentation, you have to get permission to open a ticket in their system; then you open it; then one person triages your ticket, another forwards it, another files an internal Jira about it, a PM prioritizes it, then a dev gathers the information and passes it to the Senior Information Proxy employee, who instructs the intern to finally reply in your ticket. And of course your original message was misunderstood, so the thing they gave you is useless. All you needed was access to the damn thing, but they built an empire around it, and now you have to fight a war of unproductivity.
To add insult to injury, your account of the state of things gives me no reason to think that their internal systems aren't rife with similar vulnerabilities, so rather like DRM only making life hard for paying customers, I suspect that these measures only make access difficult for honest employees.
The point is, at a certain scale you _are_ unable to secure your perimeter. Are you surprised that a handful of the likely thousands of external-facing applications can be hacked?
Especially if most of your colleagues never have to bother with security because they think they are safe behind the perimeter, how can you expect a secure perimeter? With so many applications, there is bound to be one with a hole.
The argument is more on the meta-level. Most of the vulnerabilities shown are implementation issues; hundreds of people have their hands in here.
But being able to gain more privileges because you have managed to compromise a single service, that is a design issue. And there, only a few people should have a say.
That's certainly one view of things. The other view is taken by the beyondcorp/zero-trust model. But the lesson I take from this article (and my own experience) is that if you allow commercial off-the-shelf and open-source software into your network the end result will always be an insecure mess. If you absolutely must adopt off-the-shelf software the only safe way to do it is to put a proxy in front of it that's completely integrated with your authn/authz systems such that the native protocol of the third-party system is completely hidden and inaccessible.
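To make the proxy idea concrete, here's a minimal sketch of the gating logic such a proxy would apply before anything reaches the third-party app. All names here (`VALID_SESSIONS`, `ALLOWED_USERS`, `gate`) are illustrative stand-ins for your real SSO/authz integration, not any actual product's API:

```python
# Sketch: an authenticating proxy's decision logic in front of an
# off-the-shelf app. The backend's native auth protocol is never
# exposed; only requests the proxy already authorized get forwarded.
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical stand-ins for a call to your central SSO / token
# introspection service and your per-app authorization policy.
VALID_SESSIONS = {"sess-123": "alice"}
ALLOWED_USERS = {"alice"}

@dataclass
class Request:
    path: str
    session_token: Optional[str]

def gate(request: Request) -> Tuple[int, str]:
    """Return (status, action): forward to the backend only if the
    caller presents a valid, authorized corporate session."""
    user = VALID_SESSIONS.get(request.session_token or "")
    if user is None:
        return (401, "reject: no valid session")
    if user not in ALLOWED_USERS:
        return (403, "reject: user not authorized for this app")
    # Here the real proxy would strip the corporate token and forward
    # the request to the backend over a private network segment.
    return (200, f"forward {request.path} as {user}")
```

The point of the pattern is that the third-party app can be as much of a mess as it likes: nothing unauthenticated ever touches it.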
The Google model is frequently derided on HN as "not invented here" but at least you can say that they aren't getting rooted via some kind of toxic waste like Jive forums.
If I understand correctly, Google's model is basically to roll their own authentication frontend for any service they run. Now, this is likely better than whatever some off-the-shelf open-source library might be using (which might actually have been fine if you had configured it correctly), and I have nothing against running further authentication before giving access to your things, but calling this the "only safe way" to do something is not really true at all. A number of companies that run without this model do fairly well, and Google endpoints are occasionally hit by researchers. So it's good on Google that they have a policy for this and it mostly seems to work for them, but it's not the only solution like you're suggesting.
I think the main lesson is just to not tolerate third-party protocols. Having a uniform RPC interface with integrated authentication, authorization, and delegation makes it much easier to get your security situation under control. If you're out there with your MongoDB password in a secrets vault, you're already in an unsustainable situation.
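A toy sketch of what "integrated authorization" in a uniform RPC layer could look like: the framework checks the caller against a policy before any handler runs, so no individual service hand-rolls its own password handling. The names (`Caller`, `rpc_method`, `POLICY`) are made up for illustration, not any real RPC framework's API:

```python
# Sketch: every RPC method is registered through one decorator that
# enforces a central group-based policy, instead of each service
# inventing its own auth scheme.
from functools import wraps

# Hypothetical central policy: method name -> groups allowed to call it.
POLICY = {"read_doc": {"eng", "sec"}, "rotate_key": {"sec"}}

class Caller:
    def __init__(self, name, groups):
        self.name = name
        self.groups = set(groups)

class Unauthorized(Exception):
    pass

def rpc_method(name):
    """Wrap a handler so the framework, not the service, does authz."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(caller, *args, **kwargs):
            if not (caller.groups & POLICY.get(name, set())):
                raise Unauthorized(f"{caller.name} may not call {name}")
            return fn(caller, *args, **kwargs)
        return wrapper
    return decorate

@rpc_method("read_doc")
def read_doc(caller, doc_id):
    # Handler body never sees credentials, only a verified caller.
    return f"doc {doc_id} for {caller.name}"
```

With something like this, compromising one service doesn't hand an attacker a reusable database password, because there isn't one to steal.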