Worth pointing out that Azure (Microsoft's cloud platform) also has a serverless option called "Azure Functions". Check out the competitor here: https://azure.microsoft.com/en-us/services/functions
Thanks! I have been using WebJobs for a while and found the experience a bit scattered and subpar (especially the scheduling). Hopefully Azure Functions are better in this regard.
We've been using Functions for about a year now with great success. You can access NuGet packages, code in a bunch of languages, and trigger them in a bunch of ways. Also, you can write them in Visual Studio and then deploy to Azure if you're doing something fancy and don't enjoy debugging in Azure.
I don't believe it's a flat 1M... the free tier depends on the memory configured for the Lambda and the total execution time per invocation. If every Lambda takes 150ms to run, you may only get 500k requests.
> AWS Lambda is now generally available. The AWS Free Tier includes 1 million free requests and up to 3.2 million seconds of compute time per month with AWS Lambda [0]
Minor nit: it's 1M requests and 400,000 GB-seconds of runtime. Running a 128MB instance gets you the 3.2M-seconds number, but if you need more memory, that number drops fast. Assuming roughly 1 second per request, you max out at 384MB of memory to stay in the free tier.
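The arithmetic above can be sketched like this (a rough illustration only; it ignores Lambda's 100ms billing increments, and `max_seconds` is just a helper name, not an AWS API):

```python
# Rough AWS Lambda free-tier arithmetic, using the numbers from the comment above.
FREE_GB_SECONDS = 400_000    # monthly compute allowance (GB-seconds)
FREE_REQUESTS = 1_000_000    # monthly request allowance

def max_seconds(memory_mb: int) -> float:
    """Total execution seconds the compute allowance buys at a given memory size."""
    return FREE_GB_SECONDS / (memory_mb / 1024)

# At 128MB, the compute allowance stretches to the quoted ~3.2M seconds.
print(max_seconds(128))  # 3,200,000 seconds

# At ~1 second per request, the memory size at which the compute allowance
# runs out exactly when the 1M request allowance does:
print(FREE_GB_SECONDS / FREE_REQUESTS * 1024)  # 409.6 MB
```

Since Lambda memory is configured in 64MB increments, 384MB is the largest setting that stays under that ~410MB break-even point, which matches the figure in the comment.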
Would be great if you could post some total latency numbers from client to server and back, as in my experience API Gateway is extremely slow (AWS employees admit as much in their forums). Another problem is that once the Lambda is 'warmed up', if you have scaling needs beyond the 1 container it's allocated then you again have a 'cold start latency' problem. Interested to see whether this had an effect on your setup also (and as a side note, I'm extremely impressed with the potential for Lambda and serverless, but it still doesn't seem ready for prime time IMO).
I'm using API Gateway to proxy requests to IIS, Lambdas, and S3/CloudFront. It's not slow at all for me. It's also nice having a single domain and no longer dealing with CORS.
Lambda via API Gateway requires a lot of traffic to be fast. From what I've read, it's because of the SSL handshake between the CloudFront POP (which all API Gateway endpoints use automatically) and Lambda. Only when you have a lot of traffic will enough SSL sessions be cached to get a good hit rate.
(I haven't tested this myself, just what I've gleaned from the forums.)
Yeah, I'm skeptical about that though, as I still see the fastest requests in the 400ms range when hitting API Gateway with serious load (say 100k requests at a concurrency level of 400). Also, even if you cache SSL sessions, you still have the cold-start problem when your Lambda scales beyond the container it's initially running in under a large number of concurrent requests.
That said, I'm excited about the potential of serverless offerings, whether using containers or any other implementation mechanism. I'm building a backend for a native app right now, and the initial beta version used Lambda. The slow responses really made it tough though, so a move to GKE and Kubernetes has made response times a lot lower, and it actually scales faster too.
Spin-up time of WebAPI is shown here, but returning hard-coded values isn't much more than a proof of concept. Once you connect to data, you have to look at the spin-up time of something like EF and context/model generation over and over. What happens when you get 20 controllers with many functions and routing actually has to do some work?
Lambda also prevents any long-running optimization such as output caching.