.NET Core Web API on AWS Lambda Performance (cloudncode.blog)
121 points by maingi4 on Feb 14, 2017 | hide | past | favorite | 20 comments


Just worth pointing out that Azure (Microsoft's cloud platform) also has a serverless option called "Azure Functions". So check out the competitor here: https://azure.microsoft.com/en-us/services/functions


Huh, nice that one can actually try it online. BTW, Google also has a Lambda clone, but it's in Alpha: https://cloud.google.com/functions/docs/


Thanks! I have been using WebJobs for a while and found the experience a bit scattered and subpar (especially the scheduling). Hopefully Azure Functions are better in this regard.


We've been using Functions for about a year now with great success. You can access NuGet, code in a bunch of languages, and trigger in a bunch of ways. Also, you can write them from Visual Studio and then deploy to Azure if you're doing something fancy and don't enjoy debugging in Azure.


I wouldn't call 'load testing' AWS Lambda with 139 requests impressive.

Also, a simple Google search would have shown the author that the first 1 million requests are free in AWS Lambda, contrary to what he says.


Just a heads up, I think he meant the free tier at load impact.

While I love the service, you do end up paying pretty quick to scale up. https://loadimpact.com/pricing

As a note, we started using GOAD https://goad.io and Dino http://veldstra.org/2016/02/18/project-dino-load-testing-on-... - for running tests and has been great.


Goad is great! Glad to see another person who thinks it's as good as I do.


I don't believe it's a flat 1M... it depends on the memory configured for the Lambda and the total execution time per invocation. If every Lambda takes 150ms to run you may only get 500k requests.


It's 1M requests plus a separate compute-time limit:

> AWS Lambda is now generally available. The AWS Free Tier includes 1 million free requests and up to 3.2 million seconds of compute time per month with AWS Lambda [0]

[0]: https://aws.amazon.com/lambda/pricing/


Minor nit, it's 1M requests and 400,000 GB-seconds of runtime. Running a 128MB instance gets you the 3.2M seconds number, but if you need more memory then that number drops fast. Assuming roughly 1 second per request, you max out at 384MB of memory to stay in the free tier.
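A quick back-of-the-envelope check of those numbers (a sketch based only on the 1M-request / 400,000 GB-second figures quoted above; the function name is mine):

```python
# AWS Lambda free tier (figures quoted above): 1M requests and
# 400,000 GB-seconds of compute per month.
FREE_GB_SECONDS = 400_000

def free_tier_seconds(memory_mb: int) -> float:
    """Seconds of execution the free tier covers at a given memory size."""
    return FREE_GB_SECONDS / (memory_mb / 1024)

# At 128 MB you get the 3.2M-second number mentioned above.
print(free_tier_seconds(128))  # -> 3200000.0

# At ~1 second per request, the memory size at which all 1M free
# requests exactly use up the compute allowance:
print(FREE_GB_SECONDS / 1_000_000 * 1024)  # -> 409.6 (MB)
```

Since Lambda memory is configured in 64 MB increments, 384 MB is the largest size that stays under that 409.6 MB ceiling, which is where the figure above comes from.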


Yeah, didn't want to get too bogged down in details. Though 384MB is all you'll ever need, right? :)


Ahh, my bad.


Nice post!

Would be great if you could post some total latency numbers from client to server and back, as in my experience API Gateway is extremely slow (AWS employees admit as much in their forums). Another problem is that even once a Lambda is 'warmed up', if you need to scale beyond the one container it's allocated, you hit the 'cold start' latency problem again. Interested to see whether this had an effect on your setup too. (As a side note, I'm extremely impressed with the potential of Lambda and serverless, but it still doesn't seem ready for prime time IMO.)


I'm using gateway to proxy requests to IIS, Lambdas, and S3/CloudFront. It's not slow at all for me. It's also nice having a single domain and no longer dealing with CORS.


Interesting - what's your P95/P99 like? I'm seeing ~800ms with a Hello World Lambda called from API Gateway.


It's a relatively new app being worked on since November, currently seeing ~400-500ms. But it's very dependent on what it's hitting.

Hitting Gateway -> Cloudfront -> S3 (for gzip / caching) it's ~100-200ms.

Hitting Gateway -> ELB -> EC2 -> IIS (.NET Web API) it's ~400-500ms.

Hitting Gateway -> Lambda -> NodeJS it's ~700-800ms.

Still need more time in production to get more statistics.


Lambda via API Gateway requires a lot of traffic to be fast. From what I've read, it's because of the SSL handshake between the CloudFront POP (which all API Gateways use automatically) and Lambda. Only when you have a lot of traffic will enough SSL sessions be cached to get a good hit rate.

(I haven't tested this myself, just what I've gleaned from the forums.)


Yeah, I'm skeptical about that though, as I still see lowest requests in the 400ms range when hitting the API Gateway with serious load (say 100k requests running at a 400 req. concurrency level). Also, even if you cache SSL sessions you still have the cold start problem when your Lambda scales beyond the container it's initially running in if you have a large number of concurrent requests.

That said, I'm excited for the potential of serverless offerings - whether using containers or any other implementation mechanism. I'm building a backend for a native app right now and the initial beta version was using Lambda. The slow responses really made it tough though so a move to GKE and Kubernetes have made response times a lot lower, and it actually scales faster too.


Spin-up time of Web API is shown here, but returning hard-coded values isn't much more than a proof of concept. Once you connect to data, you have to look at the spin-up time of something like EF and its context/model generation over and over. What happens when you get 20 controllers with many functions and routing actually has to do some work?

Lambda also prevents any long-running optimization such as output caching.


I think showing latency is useless without the corresponding throughput (req/s).
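The two are linked by Little's law (concurrency = throughput x latency), so quoting one without the other hides half the picture. A sketch, using figures in the same ballpark as the load numbers mentioned upthread:

```python
def throughput_req_per_s(concurrency: int, latency_s: float) -> float:
    """Little's law: L = lambda * W, so lambda = L / W."""
    return concurrency / latency_s

# e.g. 400 concurrent requests at 0.4 s mean latency
print(throughput_req_per_s(400, 0.4))  # -> 1000.0 req/s
```

The same 400ms latency can mean 10 req/s or 1000 req/s depending on concurrency, which is why a latency figure alone says little.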



