Hacker News

You cannot leave your site open to a pre-auth remote code execution vulnerability while you wait for fixes to the asset pipeline or to any other component of Rails.

I don't know that that's what you're saying you did, but we need to be glacier-blue-ice-clear about this. Nobody gets to wait on bugs like this. You patch or work around immediately or, most probably, you shut your app down.



It slowed down the update (to 3.2.11), but it didn't prevent us from removing the XML parser from DEFAULT_PARSERS immediately. I'm not defending anyone here. I'm simply pointing out that the scenario wasn't exactly status quo. This has all moved very quickly.

EDIT: I guess that qualifies as a mitigation strategy, but when I said that, I was talking more along the lines of the patches, or like another person I know, even more dramatic steps like forking Rails. There are regressions in the 3.2.x updates since 3.2.9 that affect some sites.

Bottom line is that there was a lot of bad timing here that sucked up a lot of time in securing a Rails site.
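For reference, the DEFAULT_PARSERS workaround described above amounted to a short initializer in Rails 3.x. This is a sketch following the shape of the official advisory's suggested mitigation (the YAML line is the companion mitigation from the same advisory, not something the commenter mentions):

```ruby
# config/initializers/disable_xml_params.rb
#
# Stop Rails from automatically parsing XML (and YAML) request bodies,
# closing the pre-auth code-execution vector while the upgrade to
# 3.2.11 is being prepared.
ActionDispatch::ParamsParser::DEFAULT_PARSERS.delete(Mime::XML)
ActionDispatch::ParamsParser::DEFAULT_PARSERS.delete(Mime::YAML)
```

Requests with an XML or YAML Content-Type then arrive with an unparsed body instead of being fed through the vulnerable parser; apps that genuinely accept XML params still have to upgrade rather than rely on this.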


Right on. That was the right call to make.

I wanted to be careful not to point a finger at you; it's just that this is exactly the kind of crazy mistake I can see a web startup making.

Obviously, the timing sucked, but nobody had any control over that.


Also, lest anyone think they have time: there appear to be scanners up using the Metasploit code. This request came from an IP address on Road Runner's Ohio netblock. The request hit a server that we don't publish, and the IP doesn't match any other requests in our logs. The greatest likelihood is that it's a computer locked up in a botnet.

https://gist.github.com/4512579

http://dev.metasploit.com/redmine/projects/framework/reposit...


You also cannot publish little-known pre-auth remote code execution vulnerabilities for your web framework without first publishing a mitigation patch that doesn't paint a BIG FAT RED ARROW onto the attack vector.

You also cannot leave 6 years of vulnerable Rails versions up on rubygems.org without even backporting your patch (yes, they're still up there now).

It makes no sense to blame the users of a web framework, most of whom are not security experts, for not responding instantly with the perfectly correct procedure to an event of this magnitude.

Put the blame where it belongs: on the idiots who "handled" the issue by releasing a "recipe to exploit" advisory. The same idiots who ramble about "not breaking old apps" when asked why the vulnerable gems are still up on rubygems.org and continue to be quietly installed by any Gemfile referencing them...
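Given that the vulnerable versions remain installable, the only guard at the dependency level is an explicit pin. A minimal sketch of a Gemfile line, 3.2.11 being the fixed release mentioned upthread:

```ruby
# Gemfile - pin Rails to the first fixed release explicitly.
# A loose constraint only helps if `bundle update rails` is actually run;
# an old Gemfile.lock keeps installing whatever version it recorded.
gem 'rails', '3.2.11'
```

Pinning doesn't pull the vulnerable gems off rubygems.org, of course; it only keeps this particular app from resolving to one of them.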


I sympathise with you but (a) none of this matters in context; you still have to fix immediately or pull the plug and (b) you are deluding yourself if you think a flaw that 5 teams found independently the day the last SQLI advisory was released wasn't going to be in Metasploit within 7 days. This bug was too damn blatant.

I don't know why people weren't taking a hard look at the params processing code path in Rails for 6 years, but they clearly weren't. The SQLI bug from last week pointed people at that code, it got its first real shake, and that's all there is to say about it.


I'm a fan of responsible disclosure, but the sad state of the universe is that telling people "upgrade to X now" on an open-source project makes it very easy for anyone to diff the releases and find out what the fix is, which makes it very easy to figure out how to exploit it.


No. A thousand times.

A bit more creativity should be allowed when handling a vulnerability of this magnitude.

For example, publish a patch that escapes all input in curious ways, presumably to prevent SQL injection. Pad it, obfuscate the actual fix with code-noise, make it annoying to read. Then release it as some handwavy, semi-plausible "follow-up" to the previous SQL-injection, urging everyone to upgrade in small caps.

Yes, a handful of blackhats will see through the bluff. However, they will likely be the exact same blackhats who had already discovered the issue after the initial SQL injection advisory.

This would have given the Rails-community a head-start, rather than unleashing every script-kiddy on the planet at once.


I was actually pretty upset because I thought that might have been what the Rails team was trying to do --- to come up with an occlusive patch, like you're advocating. Because that plan was. not. going. to. work. When multiple teams flag the same RCE on the same day, the only reasonable conclusion to come to is that the cat is out of the bag.

What needed to happen is what happened: a patch and a series of workarounds were produced as soon as possible.

Nobody got to choreograph this one. The control you think the Rails team had over this, they did not have.


This is terrible. The OpenBSD/SSH team tried this once, with the channel bug. They released the privsep version of ssh which didn't fix but mitigated the bug, and told everybody there was a critical bug and you really, really wanted to upgrade. What happened next? Everybody from Alan Cox on down started complaining about how they weren't going to dance to somebody else's tune and they were going to wait to see the real fix, thank you very much.


Now that's an interesting reference!

I actually remember this incident (albeit hazily - hell, was that really 10 years ago...).

I dug up a few snippets[1][2] and I think the main sources of the animosity back then were certain regressions caused by privsep (not a hassle-free change) and a bit of ego clashing (Alan Cox, Theo).

Your summary of the events may be about right, but is "not wanting to dance to someone else's tune" really a good argument against an attempt at responsible disclosure?

[1] http://www.baylisa.org/pipermail/baylisa/2002-June.txt (at the bottom)

[2] http://seclists.org/bugtraq/2002/Jun/300


So you're advocating releasing a "decoy" patch that's intentionally obtuse and doesn't actually fix the issue?

And then downplaying the severity by not encouraging developers to take the follow-up patch seriously?

That's a flat out terrible idea.


No. I was advocating to release a patch that fixes the issue in an obfuscated, non-obvious way. And labeling it as a boring-yet-important follow-up to the previous SQL-injection vulnerability, rather than yelling "LOOK, REMOTE CODE INJECTION HERE -->.<-- !!" on all available news-channels.


Would you invest time in a framework where the developers knowingly lie about security vulnerabilities and provide fixes that are intentionally complicated?

That seems like something that would ruin any future credibility of the project. No further security vulnerabilities could be trusted as being accurate (and any further patches would just be begging for additional scrutiny).


That "concern" is absurd. You're making no sense.


If a Linux distro released a kernel patch for a super-critical security vulnerability that lied about its nature, AND downplayed its importance, users would go apeshit (justifiably).


You're still making no sense. Various large projects, including the kernel, have seen silent security patches (yes, even by Linus himself).

Also, there is no "downplaying" in declaring any kind of problem a remote SQL injection vulnerability. That is still obviously urgent enough for everyone to patch immediately. Yet it doesn't attract blackhats the same way as blaring "remote code injection" does.


Mislabeling a remote code execution vulnerability as SQL-I is absolutely downplaying the severity. SQL-I is a bad finding; RCE is about the worst thing you could possibly find in an application.

There were people on this very site commenting about whether they should concern themselves with this patch (initially because people erroneously attributed the vuln to SQL-I, and later because they "weren't using XML anyway").

Those kinds of things happen when you don't clearly describe what a vulnerability is, and when you try to mask how big of a deal something is.

This was a huge vulnerability. It was critically important that everyone running a Rails app fix it immediately. Shouting that from the rooftops was absolutely the right approach. Cloak and Dagger bullshit to try and hide that is unequivocally a bad idea.


Well, the votes are strongly in your favor, but I remain unconvinced - accepting that I'm in the minority here.

For me, this very article (a bitcoin exchange being hacked) demonstrates that, despite the harsh wording, the advisory didn't reach everyone in time. Not even those who should really care (like bitcoin exchanges).

I still think the explicit disclosure did more harm than good, by drawing maximum attention from the blackhat-community without really improving the reach amongst the oblivious. I still think a "staged" disclosure might have worked better, even at the risk of the timeline being short-circuited by a malicious party spilling the beans early.

However, there's little point bemoaning this particular baby or bathwater much more now.

I'm more interested in the steps the Rails team will be taking to lessen the blow of future security incidents. As I've said in another thread, I'd be in favor of an optional (opt-in) kill-switch, to be triggered only in drastic cases like this one. Perhaps that is a point where we can agree again - otherwise we'll have to part in disagreement. :)


I don't understand how the Rails team could plausibly have slowed down the disclosure. The exploit wasn't proprietary information, and it was easy to find.


I don't want to drag this out further (all said and voted, I think), so I'll only briefly refer to my idea of an "obscured" patch. Not pretty by any metric, and definitely not a template for future incidents. But I still think it could have stretched the timespan a bit between the discovery by security researchers and Rails hackers - and that afternoon when our Rails intern (a beginner Ruby frontend coder) proudly showed us how he'd reproduced it in his Rails console...


In fairness, while I'm specifically not a fan of that Phusion post, it was referring to a bug that wasn't an RCE.


Right, the Phusion post was referring to the previous bug though, right?

I disagree with Moe that labeling the latest bug as an SQL-I instead of RCE is a good strategy to ward off blackhats.


How much time do you think that would have bought? At least a few minutes, but definitely not days. And at what cost?

I'm all in favor of giving people more time to upgrade, but this issue was very unusual. Multiple groups were all finding it at the same time and who knows when the first "full disclosure is always the right answer" group would have started singing? Like it or not, full-disclosure people exist in the world, and responsible-disclosure people need to be aware of their existence.

If just one person/group had found this, RoR could have done a better "get ready to upgrade next Tuesday at 6am" followed by a "here is the patch, details to follow tomorrow" on Tuesday at 6am.


I was actually pretty pissed that it wasn't explicitly labeled as a "remote code execution" vulnerability in the CVE post title. This is open source, you don't get to hide. The code is there. Trying to be clever about releasing a patch isn't going to help, because every patch gets looked at by a large community of security people. Some of those people are the good guys, but some are the bad guys.

Nothing about this situation was good, but hiding it would have only made it worse. When you're serving shit sandwiches for lunch, it's best to let everyone know that's what's on the menu. End of story.


Weren't there patches for Rails, and new Rails releases including the patches, available really quickly after disclosure?


Yes, along with workarounds. I'm reacting to the suggestion that someone might wait to apply the patches because they broke the Asset Pipeline somehow.



