
The other way to see it is that it took them 8 days to notice a full compromise of the hosting OS and open access to Google's internal Docker image repository URL.


I'm going to guess that this VM was considered the customer's VM as far as security goes, i.e. you couldn't access any other customer's data.

Likewise, GCP Dataflow quite trivially allows you to escape onto the worker machines and take the (huge) binaries that implement it. They have some really nice detailed status pages!


I was part of the GCP Cloud Dataflow team a few years ago. The status page is actually standard for all Google internal services (/statusz). I still miss them a lot.

In Dataflow's case, the container is not treated as the security boundary. And there are several important things to note:

- Dataflow's VMs are in customer projects, so there's no risk of cross-tenant access.

- When launching Dataflow jobs, the launcher identity is checked for the iam.serviceAccountUser IAM role, which means that identity could already launch a GCE VM with the same service account just fine. So Dataflow is not escalating permissions beyond what GCE VMs allow.

- Just as with any VM launched by a user, whether anyone else can log onto those VMs is controlled separately.

- The container is used in Dataflow only for convenient image delivery, not as a security barrier; the VM is the barrier.
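The iam.serviceAccountUser check described above can be sketched with gcloud; the project, service account, and user names here are hypothetical. The point is that anyone holding this role on the worker service account could launch a plain GCE VM as that account anyway, so Dataflow grants nothing extra:

```shell
# Grant a (hypothetical) launcher the serviceAccountUser role on the
# (hypothetical) Dataflow worker service account. This is the same role
# that would let them boot an ordinary GCE VM running as that account.
gcloud iam service-accounts add-iam-policy-binding \
  dataflow-worker@my-project.iam.gserviceaccount.com \
  --member="user:launcher@example.com" \
  --role="roles/iam.serviceAccountUser"
```

Without this binding, the Dataflow job launch is rejected up front rather than at worker start time.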


Containers should never be treated as a security boundary.
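One concrete way to see why: containers share the host's kernel, so any kernel bug is reachable from inside the container. A minimal illustration, assuming a Linux host with Docker and the alpine image available:

```shell
# The kernel version reported on the host...
uname -r
# ...is the same kernel version reported from inside a container,
# because a container is just namespaced processes on the host kernel.
docker run --rm alpine uname -r
```

A VM, by contrast, runs its own kernel behind a hypervisor, which is why VMs are the boundary cloud providers actually rely on.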


Yes. They don't want you to be able to poke around, but the real security boundary is the VM, not the database server.


Back when there was a critical Azure bug that let an Azure user gain access to top-level keys (i.e. the keys to the entire kingdom), a Google engineer commented on an HN thread that Google specifically didn't consider container boundaries secure, so everything is always tied to a VM specific to a customer. The issue with Azure was that a container escape allowed a user to take over the entire Azure subsystem.


It's not a mistake unique to Azure; Alibaba had a vulnerability make the news rounds recently where container escapes led to cross-tenant access.

There are two types of cloud providers: the ones who take security seriously and the ones who learn security the hard, public way.

I'm a bit surprised that Azure would get lumped in with the other cut-rate providers, but that's becoming more and more obvious with the vulnerabilities of the past few years.


Not sure if this is still true re: Azure. AFAIK they use Hyper-V (hypervisor) containers, which offer kernel isolation like other lightweight-VM container runtimes.
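For what it's worth, Docker on Windows hosts exposes this choice directly through the --isolation flag; a sketch (the image tag is illustrative, and this only works on a Windows host with Hyper-V available):

```shell
# "process" isolation shares the host kernel, like Linux containers.
# "hyperv" isolation runs the container inside a lightweight utility VM,
# so a container escape stops at the hypervisor rather than the host OS.
docker run --rm --isolation=hyperv \
  mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c ver
```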


Hyper-V has been around for a while; did Azure just not get the memo until recently?


I work for Google but mostly interface with GCP the same way everyone else does.

The VMs are somewhat hidden in the UI, IIRC, but otherwise you can enumerate them via the API and SSH to them to debug/profile (which I was doing to get cross-language profiling on Dataflow pipelines with py-spy and JVM perf output).

It's just a worker VM in your project.
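Since the workers are ordinary GCE VMs in your own project, standard gcloud commands are enough to find and reach them; a sketch with hypothetical job, instance, and zone names:

```shell
# Dataflow workers show up as regular Compute Engine instances,
# typically named after the job that launched them.
gcloud compute instances list --filter="name~^mydataflowjob"

# SSH to a worker like any other VM, then profile the processes on it
# (e.g. with py-spy for Python workers, or perf for the JVM harness).
gcloud compute ssh mydataflowjob-worker-1 --zone=us-central1-f
```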


The hosting OS is all but certain to be virtualized. It's no different from customers creating a GCE VM in the first place.


It took 8 days to proactively reach out. It may very well have been identified earlier and then taken some time to be passed off to Google's vulnerability reward program and get any approvals necessary.


Or 8 days to start getting info from the team; nothing indicates that, at that time, Google knew where the vulnerability was.



