It's not as clear cut as the article describes though. For low usage stuff serverless is cheaper. For "spiky" traffic - particularly when those spikes are unpredictable - serverless is often cheaper and sometimes faster too (eg if your spikes ramp up quicker than the spin up times of virtual machines).
Also AOT compiled languages like C# will run slower on lambda than pre-compiled on an EC2. You shouldn't need benchmarks to work that much out - it's software development 101. However if you then went on to compare JIT compiled languages on lambda vs EC2, you'd notice the performance is much closer to each other.
You say tech should be about results rather than religion, and while I do agree there are a lot of cult-like behaviours in IT (and there always have been - emacs/vi, GNOME/KDE/etc, Windows/Linux/MacOS, etc), organisations that really depend on their services genuinely do load test their stuff to hell and back to ensure they are getting the best price, performance, scalability (where required) and resilience too.
That last point is also important when looking at infrastructure. Serverless often (though not always - you need to look at it on a case by case basis) offers you better resilience than rolling your own with EC2 instances. I mean sure, you can span multiple AZs yadda yadda yadda - it's all doable. But again you have to factor in spin-up times and spare capacity (eg if you lose an AZ and suddenly have 100% of your traffic over two thirds of your existing hosts, are they going to have enough spare capacity to survive the spin-up time of additional hosts, or are you going to lose everything like toppling dominoes?)
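To put rough numbers on that (a back-of-envelope sketch with made-up fleet sizes, assuming traffic is spread evenly across hosts and AZs):

```csharp
using System;

// Back-of-envelope check: if one of three AZs drops out, the surviving
// hosts absorb all the traffic until replacements spin up.
int totalHosts = 9;                      // hypothetical fleet, 3 per AZ
int survivingHosts = totalHosts * 2 / 3; // lose one AZ -> 6 hosts left

double normalLoadPerHost = 1.0 / totalHosts;       // ~11% of traffic each
double failoverLoadPerHost = 1.0 / survivingHosts; // ~17% of traffic each

// Each surviving host now carries 1.5x its normal load, so unless every
// box was running at roughly two-thirds utilisation or less beforehand,
// it tips over before the autoscaler brings new instances online.
Console.WriteLine(failoverLoadPerHost / normalLoadPerHost); // 1.5
```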
Ultimately though, if you have predictable traffic and don't have a requirement for redundancy then you could just host it yourself for a fraction of the cost of running stuff in the cloud. The real technical challenge isn't hosting though, it's all the edge cases around what needs to happen when your typical hosting plan isn't enough.
C# is compiled to an intermediate language (MSIL/CIL) which is then JITed.
The JIT cost actually adds a fair bit to the start-up time for C# code. This is why on the webserver there's an option to keep the C# code resident so it's always ready to go. In the past - before this option - it was common to have scheduled jobs which pinged your webapp regularly to keep it hot :)
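Something like this, roughly (a minimal sketch of that keep-warm trick; the endpoint URL and interval are placeholders, not from any real project):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Rough sketch of the old "keep it hot" hack: a scheduled job that pings
// the web app so the app pool never idles out and the JITed code stays
// resident.
class KeepWarm
{
    static async Task Main()
    {
        using var client = new HttpClient();
        while (true)
        {
            // Any cheap endpoint will do - a health check is ideal.
            var response = await client.GetAsync("https://example.com/health");
            Console.WriteLine($"{DateTime.UtcNow:o} ping -> {response.StatusCode}");
            await Task.Delay(TimeSpan.FromMinutes(5));
        }
    }
}
```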
This to me smells like "things I shouldn't have to care about when using a serverless architecture". I would expect the serverless provider to keep my code fully AOT compiled in the fastest cold-start format instead of JITing things at the last second (wasn't ".net native" a thing?).
Unfortunately .Net Native doesn't support a lot of project types at the moment.
Serverless is all built around fully automated and managed containers, right? So it abstracts the whole runtime environment for webserver code. You use it if you don't have the skills/capability to run your own deployment/scaling pipeline.
The technology is a great 80/20 style leveler; I would highly recommend it to new developers and to people coming from a less technical background (such as design, marketing etc).
> Also AOT compiled languages like C# will run slower on lambda than pre-compiled on an EC2. You shouldn't need benchmarks to work that much out - it's software development 101.
This isn't true. The overhead from the article comes from API Gateway, and has nothing to do with running faster/slower on a Lambda or an EC2 instance. The real performance hit in Lambda comes from cold starts. But if you're willing to accept minor cold starts in exchange for not running infrastructure 24/7, then it can be a huge benefit.
...and the language choice has a measurable impact on the cold start time. I've measured it - as have others[1][2].
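For anyone who wants to eyeball it themselves, a crude way to spot cold starts from inside a .NET handler looks something like this (a rough sketch using the standard Amazon.Lambda.Core package; it's not the methodology behind those links):

```csharp
using System.Diagnostics;
using Amazon.Lambda.Core;

[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace ColdStartProbe
{
    public class Function
    {
        // Static state survives across invocations of a warm container,
        // so these only reflect the cold path on the very first call.
        private static readonly Stopwatch InitTimer = Stopwatch.StartNew();
        private static bool _isColdStart = true;

        public string Handler(string input, ILambdaContext context)
        {
            var cold = _isColdStart;
            _isColdStart = false;

            context.Logger.LogLine(
                $"coldStart={cold} msSinceInit={InitTimer.ElapsedMilliseconds}");

            return cold ? "cold" : "warm";
        }
    }
}
```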
You're right that it's not just about the language runtime though. I must admit I'd forgotten about the container spin-up and the rest as well (thankfully the recent ENI fix for VPC lambdas has been released, because that was threatening to bite us hard on one project).
That first link makes no sense. The C# result is not reflective of my real-world usage in the slightest. The 2nd link is more accurate to my experience, and obviously contradicts the first link.
I haven't tested the new VPC stuff yet to know what the impact is.
> That first link makes no sense. The C# result is not reflective of my real-world usage in the slightest. The 2nd link is more accurate to my experience, and obviously contradicts the first link.
They don't contradict each other. They're different graphs demonstrating different workloads. You'd be surprised just how much people's "real-world usage" can vary from one customer to another.
> I haven't tested the new VPC stuff yet to know what the impact is
Pretty significant for high-demand lambdas. Unfortunately I'm not sure how much of what I know is covered by NDA, so apologies for not being more specific.
Ok, so I commented on the author's blog post and he linked me to the repo. I took his code and compiled it myself and was unable to reproduce the result he got. He also said:
> Yes, I'm also suspicious about my C# results at this point.
I'm not sure how he got the results he got. So it may be worth him running the test again.
> AOT compiled languages like C# will run slower on lambda than pre-compiled on an EC2. You shouldn't need benchmarks to work that much out - it's software development 101. However if you then went on to compare JIT compiled languages on lambda vs EC2, you'd notice the performance is much closer to each other.
But that is what the author did. C# is not 'fully' AOT; it is compiled to MSIL and then JITed by the .NET CLR, at least under normal circumstances. Does Elastic Beanstalk use .NET Native-style binaries somehow?
I'm getting heavily downvoted for the C# point when benchmarks do demonstrate that C# does have a measurably slower cold start time than many other language runtimes; in fact slower than every JIT language. So I'm not actually wrong on that point.
> But that is what the author did. C# is not 'fully' AOT; it is compiled to MSIL and then JITed by the .NET CLR, at least under normal circumstances. Does Elastic Beanstalk use .NET Native-style binaries somehow?
He's running .NET on EC2, so no cold start times. Elastic Beanstalk is "just" another orchestration layer (and not a good one in my personal opinion but I can see why it might appeal to some people).
This is no secret. C# and Java are slower with lambda than scripting languages like Python and JavaScript. Whether it’s slower overall, I haven’t seen benchmarks, and outside of lambda the startup time cost is usually negligible.
Slower overall very much depends on your workload and the way the code is written. However lambda is supposed to be for short-lived processes, and AWS manages the concurrency, so the actual performance difference of each language is mitigated somewhat for the average customer (edge cases will always exist).
It's also worth noting that if you're running the kind of processes where this does become a concern, then lambda is probably the wrong choice anyway. However, as always, the smart thing to do is build and then benchmark (as the author did).