That's pretty crazy, though with the hack publicly disclosed, it was bound to be exploited. What I don't understand is why this incident report comes so late. The vulnerability has been public for quite a while now. Could anyone explain?
The linked issue isn't "we need to patch Shellshock". It's "we have 800 hacked machines on the network, they are DDoSing like crazy, and it's saturating the internal network". Their problem isn't that servers have been hacked. It's that it's overloading the network.
OVH isn't responsible for what people do with the servers. They provide the initial installation, networking, hardware monitoring and some management tools, but that's it. If they detect that a server is sending too much traffic, they can guess that it has been hacked, so what they usually do is disable it and notify the owner to fix it. But they don't do more, that's not their business. If you want a host that does more for you, it's managed hosting you want, not a dedicated server provider.
Actually, if they detect that a server is compromised, they will put it into an FTP-only data-recovery mode, and usually the only way to get your server back is to do a fresh install of the OS. Luckily, OS reloads are fast and provisioning dedicated servers takes minutes.
Yeah, their anti-DDoS is actually to null-route everything to your server. It's been a long time since I've seen that, but that's how they did it in the past (which actually makes any DDoS against you really effective, though your neighbors won't be as affected).
If it's your server that's actually attacking another server, they will shut your server down and give you a warning. They'll let you boot into their recovery OS, which lets you access your file system, but if your server does it again, they terminate your account.
Like I said, maybe it's different now from the last time I saw an attack on an OVH server; when I saw it, though, it was literally impossible to reach the server even though it was still up. Using their IP failover system was the only way in.
Their anti-DDoS system is mostly designed to protect against external attacks. It works at the network level, probably at the connection between their network and the outside world, because that's the most efficient approach: detect attacks and block them where you have the most bandwidth available.
This is an internal attack, which requires different mitigation measures and is seen less often in the wild (compromising 500 servers from a specific provider is more difficult than 500 random servers on the internet, and you're pretty much guaranteed that the provider will deactivate most of them after the first attack), so I guess their protection systems aren't as developed against it.
It's at the network/hardware level. The issue with internal DDoS is that it's software-related. OVH cannot access your server (nor should it), so the only choice they have is to shut the server down and wait until you fix it.
But if your server receives a DDoS attack from outside, they can defend you using their network resources.
I manage 2 VPSs. I'm responsible for security patches and everything. If shit hits the fan, it's because of me not because of my VPS provider :-) That's what VPS is all about.
True. However, since most clients (especially those that don't maintain their servers well) use standard images provided by the hosting provider, I don't understand why those images don't come with automatic security updates configured by default.
I know many of us run systems whose availability requirements mean we want full manual control over all updates, but those are the people who either use custom images or know how to turn automatic updates off.
Security updates are finally being pushed hard on desktop users, for obvious reasons; with everyone and their mother running cheap VPSs these days, the same logic should apply to virtual servers.
Automatic updates for server software can be tricky.
Ubuntu, for example, can be configured to run apt-get update && apt-get upgrade automatically every day (using unattended-upgrades), but AFAIK the default configuration only checks for updates, notifies the admin, and waits for approval before actually changing anything.
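For reference, a minimal sketch of how that is typically switched on for Ubuntu/Debian (these are the stock package and file names, but verify against your release):

  # Install and enable unattended-upgrades:
  sudo apt-get install unattended-upgrades
  sudo dpkg-reconfigure -plow unattended-upgrades

  # That writes /etc/apt/apt.conf.d/20auto-upgrades, roughly:
  #   APT::Periodic::Update-Package-Lists "1";
  #   APT::Periodic::Unattended-Upgrade "1";
  # With Unattended-Upgrade set to "0", apt only refreshes the package
  # lists; nothing gets installed without the admin's say-so.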
Even though Ubuntu LTS, Debian Stable, CentOS, etc. are supposed to be stable distributions, it is not unheard of for routine updates to break something mission-critical. For example, the package maintainer might have made some changes to the default configuration. (C'mon, do you really need to make minor changes to nginx.conf all the time?) A daemon might go down and fail to come up again after an update. (apt is particularly annoying in this regard, since it forcibly restarts daemons during an upgrade, and not always in a sensible order. Result: HTTP 502 Bad Gateway.) So unless a human is present to spot any issues immediately after an update, you might be left with a broken system at an inconvenient hour. Which is why the documentation doesn't recommend fully automated updates.
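One middle ground, if I remember the syntax right (check the 50unattended-upgrades file your distro ships; it varies by release): restrict unattended upgrades to the security pocket only, so routine updates still wait for a human:

  // /etc/apt/apt.conf.d/50unattended-upgrades (sketch)
  Unattended-Upgrade::Allowed-Origins {
      "${distro_id}:${distro_codename}-security";
      // "${distro_id}:${distro_codename}-updates";  // deliberately left disabled
  };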
Unfortunately, the set of VPS owners who don't apply updates regularly probably has a large intersection with the set of VPS owners who won't be able to troubleshoot a broken update anyway, so maybe this concern need not apply...
We're working on the auto-update model for servers at CoreOS. If interested, you can read more about our update philosophy here: https://coreos.com/using-coreos/updates/. We think that this is one of the best ways to secure the backend internet, similar to how auto-updating browsers moved the front-end of the internet forward.
This can be difficult, as some updates can break applications. PHP is problematic that way for our customers on servers that we manage for them. It's not as bad as it used to be (e.g. register_globals), but updates can break customer apps. So if you automatically apply them and their web server breaks, who does the cleanup? The web developer who got paid to create the site but not to maintain it? The VPS provider who gets paid just to host the image? From the customer's POV (and these are not technical people), they will often say, "But I didn't do anything to break it. You broke it! You fix it!" Also, as there is (typically) no regression testing, breakage can be silent and undetected for some time.
As I understand it, automatic server updates run counter to the idea of servers as static, externally configured fixtures à la CloudFormation or its competitors. To merge the concepts you'd need to have AWS or whoever manage your CloudFormation templates and automatically redeploy whenever they change them for security reasons... I think maybe our tooling isn't quite there yet, but I'm sure it's coming. Disclaimer: I don't have very deep knowledge of CloudFormation, so I might be talking out of my ass.
I have some VPSs with DigitalOcean that I hadn't logged in to in a long time, even after Shellshock surfaced publicly. About a month later I checked whether I was vulnerable using the shellshocker.net test, and it was negative. I'm guessing my provider did something to fix it.
I know they don't; I didn't say they do. But my non-up-to-date Ubuntu server wasn't vulnerable when I checked about a month after the Shellshock vulnerability went public.
Have you considered that you may be using a software stack that is not (obviously) vulnerable? Not every piece of software is vulnerable by default on all URLs, just those that wind up calling out to bash in a specific way.
If I'm not mistaken, the vulnerability test on shellshocker.net tests bash directly. Then again, I could very well be wrong and might have missed something.
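For what it's worth, the canonical direct test for the original CVE-2014-6271 looks like this (shellshocker.net runs several variants, so its exact strings may differ):

  env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
  # A patched bash prints only "this is a test";
  # a vulnerable one also prints "vulnerable".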
Yes, that's the one I was talking about. I ran the test before and after updating, and it was the same (negative). I ran the same test on a relatively up-to-date home computer (Ubuntu) and it was vulnerable on some of the tests. This IS weird.
So somebody used shellshock to break in, then plugged the hole to prevent others from accessing the server, and is now using your server for... whatever?
Ubuntu, for instance, has unattended-upgrades (which defaults to on), the premise being that it will forcibly install critical security patches. Ideally you'd have done this yourself ASAP, but if not, I have to think unattended-upgrades came into play at some point.
"Default" means the default when you install Ubuntu, not the unconfigured state of that process. For the kernel and other in-use things it would still require a restart; maybe at some point DO forced a restart, even if for unrelated reasons.
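If you want to confirm it after the fact, the standard Ubuntu logs should show whether bash was upgraded unattended (assuming the default log locations):

  grep bash /var/log/unattended-upgrades/unattended-upgrades.log
  grep "upgrade bash" /var/log/dpkg.log*
  # dpkg.log records every package change, whoever triggered it.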
Probably because they are a virtual and colo hoster so aren't generally responsible for the security of the machines they host. Also, a big chunk of people who buy these machines are less responsible than they should be.
At best this is monitoring and playing whack-a-mole on their part, which they've done a pretty good job of, by the looks of it.
The involved machines are dedicated servers, according to the posts.
Also, although I have many things to blame OVH for, they at least take a somewhat proactive stance regarding security, even for the very cheap dedicated servers. They usually contact you when they notice a traffic spike on unusual ports for example.
But yeah... judging by the people I know in real life who own cheap OVH dedicated servers, I'm not surprised at all by this news.
That's why I find it crazy that so many people want a dedicated server; I don't want to spend my life on call 24/7.
I'm pretty sure 90% of the customers are not professionally trained in server maintenance. I am, and I don't want to do it anyways unless I have a good reason to do it.
I use a VPS for email (configured opensmtpd + IMAP over SSL), VPN server, HTTP Proxy and host 3-4 personal applications (nginx + mysql + ruby apps).
The hard part was setting up all these services; keeping them up to date is considerably easier and almost risk-free, since no other party depends on them. Other than that, it's not time-consuming at all. Only twice in three years have I had to reconfigure a service. Most of the time I just run an upgrade script in a tmux session and everything runs smoothly (I rarely use pre-compiled packages).
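Purely to illustrate the workflow (the actual script isn't shown anywhere here, so upgrade-all.sh is a hypothetical stand-in):

  # Run the long upgrade inside tmux so a dropped SSH session doesn't kill it:
  tmux new-session -s upgrade
  # ...then, inside the session:
  ./upgrade-all.sh 2>&1 | tee ~/upgrade-$(date +%F).log
  # Detach with Ctrl-b d; reattach later with: tmux attach -t upgrade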
Now, if I had a production-level web application, I would probably set up a new VPS as carefully as I could and roll my application out there, up to the point where I needed to scale.
If it were a one-man show, I'd probably go with Heroku (since I write Sinatra/Ruby/Rails applications) or a similar (possibly cheaper) service to avoid spending time on sysadmin work. But IMHO these services are still expensive for projects with no income, while a VPS can do all that and more at once, at a considerably lower price.
P.S. As a side note: before I joined HN, I thought all programmers were capable of sysadmin work (UNIX servers). Then I realized that tasks that seem trivial to me are considered somewhat difficult by others, and vice versa of course.
Agree entirely. That's why I go half-way and use Windows Azure's PaaS platform. Much less painful.
The company I currently work for has two full racks in each of two data centres, most of which sit near idle, and it's a full-time job for 2 people. They could do away with all the machines and all the associated administrative staff, use just a devops team, and shave 40% off capex and opex as well by moving to something else.
But then again, they might not be getting nice lunches with HP and the DC vendor.
Depending upon how much they're spending, I kind of doubt HP would really pay a whole lot to woo that customer. The usual method is to just cut pricing if someone gets antsy, in TSG and over on the ProCurve / networking side. This is a company that nickel-and-dimed people on trips to force them to exclude alcohol from reimbursement, and had probably the lowest per diem I've ever had while traveling.
To easily maintain ratio (of uploaded to downloaded amounts of data) on a private torrent tracker. Due to restricted access (members only), it's pretty safe.
If you're asking about the hosting company, they largely don't care what you do unless someone complains.
Do you think most torrents are private, such that you need to be introduced to be part of them? And if you can be introduced, a police investigator could be introduced too, right?
Because you're trained, you realize the actual costs; most people don't factor in maintenance as a cost and see VPSs as no-brainer cheap things to get stuff running on.