Amazon's response on this has been absolutely disgusting. We pay for their enterprise support, and that team is completely unaware. Last night they advised that if I wanted this fixed within a day, I should consider terminating my own SSL. I set that up, and just as I was about to cut over, I noticed that our ELBs were fixed.
As of now, support is unaware that a fix is being rolled out.
I would have been better served not speaking to them at all, let alone paying for AWS support.
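For anyone attempting the same workaround: the one check worth doing before cutover is confirming the proxy host's OpenSSL is actually patched. A rough sketch (hypothetical proxy hostname; a patched build reports 1.0.1g, or shows the fix in a distro backport's build details):
# Confirm the proxy host's OpenSSL is patched before cutting over
openssl version -a
# Then point a checker at the new endpoint (heartbleeder is the Go
# tool used elsewhere in this thread)
./heartbleeder ssl-proxy.example.com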
We've had the exact opposite experience: after reaching out to Amazon, our ELBs were prioritized for the fix and patched in under 8 hours with no work needed on our part. Our rep has been in constant contact the whole time.
I've had really mixed experiences with their support, ranging from this to awesome. Mostly it's been OK - except for their more legacy products. Sucks that you wasted your time, but I'm glad it's fixed!
While support for the stock services (EC2, S3, etc.) is good, we've had terrible support with Elastic Transcoder: over a week on an issue that completely breaks HLS video encoding. We switched to encoding.com because I get to talk to a human being over the phone instead of being put through a support forum.
Our servers in us-west-1 are still showing as vulnerable sometimes. Even with that 99designs link, if I refresh, sometimes it passes and sometimes it fails. Possibly the patch is still being rolled out?
Beware that that tool often reports that your domain is not vulnerable when it really is. It appears to be a load issue. I'm not confident that any ELBs are patched yet.
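Given the flip-flopping results, a single check doesn't tell you much: behind a load balancer you may be hitting a mix of patched and unpatched backends. A rough sketch using the heartbleeder tool mentioned elsewhere in the thread (hypothetical domain):
# Hit the endpoint repeatedly; alternating SECURE/VULNERABLE results
# suggest a partially patched pool behind the load balancer
for i in $(seq 1 20); do ./heartbleeder yourdomain.example.com; done | sort | uniq -c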
According to the CloudFlare blog:
Today a new vulnerability was announced in OpenSSL 1.0.1 that allows an attacker to reveal up to 64kB of memory to a connected client or server (CVE-2014-0160). We fixed this vulnerability last week before it was made public. All sites that use CloudFlare for SSL have received this fix and are automatically protected.
In the same manner that CloudFlare had it before the disclosure, the OpenSSL team should have contacted major distro packagers (Debian, Fedora, Arch) privately and made the announcement as the new releases hit the repos (i.e. not leaving a 4-8 hour window, given that the bug is pretty much critical).
Nope; package maintainers said they didn't get notified, and OpenSSL explicitly has no notification mechanism for such things. CF found out because the private entities who found the bug warned them in advance, with a request not to disclose it to anyone else. See also: https://news.ycombinator.com/item?id=7549986
Just used Zapier to set up an RSS-to-email trigger to get notified about things like this in the future, although Amazon really should be sending these notices out to customers automatically.
AWS's ELBs (which we use) were vulnerable; we'll be replacing certificates ASAP. We (and most of the rest of the internet using ELB) seem to be in the clear now:
./heartbleeder zapier.com
SECURE - zapier.com:443 has the heartbeat extension enabled, but timed out after a malformed heartbeat (this likely means that it is not vulnerable)
When did you run your check? Do you have a recent binary of heartbleeder?
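For what it's worth, rebuilding from source is the quickest way to rule out a stale binary. A sketch, assuming the Go implementation at github.com/titanous/heartbleeder and a working Go toolchain:
go get -u github.com/titanous/heartbleeder   # fetch and build the latest source
$GOPATH/bin/heartbleeder zapier.com          # run the freshly built binary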
CloudFlare will not terminate SSL for free users - only for the paid plans. I would also be surprised if they really terminate more SSL connections than AWS ELB and CloudFront. I am not saying CloudFlare did anything wrong - they are doing great this time. But I am wondering: if other parties like AWS received the disclosure at the same time, what were they waiting for? CloudFlare fixed the issue last week, not yesterday.
Are you asking why CloudFlare published their (our) blog post when we did? We did so because the OpenSSL project had already issued their advisory on the vulnerability.
I personally observed the OpenSSL disclosure on this on the New page of Hacker News prior to our blog post being made live (and prior to submitting it to HN). You can also verify this yourself by following the New page backwards.
Given that the OpenSSL team has slacked massively on setting up an advance notification list, would you consider notifying distributions privately the next time you receive word of something of this magnitude?
True, although if nothing else, a discreet "You'll want full staffing ready for something bad next week" would have been polite.
The bigger question, though, is how much trust you can place in the kind of people who would zero-day most of the internet for a marketing exercise. Discreet notifications would have closed the vulnerability window for a lot of people (think, e.g., stealthy AWS, Rackspace, etc. upgrades), and it's not clear to me that Codenomicon is likely to produce future tips valuable enough to outweigh that.
This comment is better than 99% of the media coverage I've seen so far. Who announces a critical SSL vulnerability that affects Twitter, AWS, Steam, Yahoo, and Dropbox without notifying them first? They are making a name for themselves by exposing the private information of millions of internet users.
Also, I find it interesting that the vulnerability was discovered by a Google researcher, and yet some of Google's main competitors weren't notified about it.
There are new tools[1] being built around Docker that make it viable to easily move your apps out of Heroku and onto any system running Docker, including DigitalOcean, GCE, AWS, etc.
When you combine Docker, buildpacks, and CoreOS[2], you get a scalable and flexible platform that you can run anywhere. It has taken people a long time to combine the simplicity of Heroku with the flexibility of bare metal, but the open source guys have finally put all the building blocks together.
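As a rough sketch of what the deploy step looks like on any Docker host (hypothetical app name; buildpack-based builders wrap the image-build step, but a plain Dockerfile deploy is just):
# Build an image from the app's Dockerfile and run it on any Docker
# host: DigitalOcean, GCE, AWS, a CoreOS machine, etc.
docker build -t myapp .
docker run -d -p 8080:8080 myapp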
Another tool that does just this is Deis[1]. We specifically combine CoreOS (in heavy development[2]), Docker, and Heroku Buildpacks, and support Dockerfile deployments[3] too!
Oh, we automate that end of things; our cloud is all Chef-managed... but getting 200 certs re-issued by a dozen different issuers, with new private keys, is, how you say, tedious.
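The key and CSR regeneration is at least scriptable; it's the dozen issuer portals that aren't. A minimal sketch (hypothetical file names):
# Fresh private key + CSR for every domain in a list; submitting
# each CSR to its issuer is still the tedious, manual part
while read domain; do
  openssl genrsa -out "$domain.key" 2048
  openssl req -new -key "$domain.key" -out "$domain.csr" -subj "/CN=$domain"
done < domains.txt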