> No because Lambdas are proprietary which means you can't run it in a CI or locally. Also, it becomes stateful if it pulls data from a database, S3 or anywhere else on AWS which it almost always does.
Lambda is a function call. So it makes no difference if it’s proprietary or not.
Are you saying that it’s difficult to test passing an object to a function and asserting that it’s functioning as intended?
Lambdas are not simple functions, because your local environment differs from production.
If I run a Node.js function in AWS Lambda, my Node.js version might be different, my dependencies might be different, the OS is different, the filesystem is different, so I or one of my node_modules might be able to write to /tmp but not elsewhere, etc.
It's the reason people started using Docker really.
If you don't have the same environment, you can't call it reproducible or testable for that matter.
Nothing you mentioned has anything to do with the ability to test a Lambda. You’re trying to use limitations and restrictions as friction to back up your inability to test.
There’s a lot of annoying things about lambda. And a lot of stuff I wish was easier to find in documentation. But that doesn’t change the fact that Lambda is more or less passing an event object to your function and executing it.
Writing a function in node 12 and then running it on node 4 and throwing your hands in the air cos it didn’t work isn’t the fault of Lambda.
It's great to see that factual evidence is answered with ad-hominem by the Lambda hype crowd.
In any case, if you have a Node.js module or code with a native C/C++ build, that runs shell commands, that writes to disk (not allowed besides /tmp in Lambda) or makes assumptions about the OS, your "simple" function will absolutely return different results.
e.g.: my Lambda is called when somebody uploads an image and returns a resized and compressed version of it.
This is done using Node.js and the mozjpeg module which is dependent on cjpeg which is built natively on install.
If I test my function on my machine and in Lambda it's very possible that I get different results.
Also, certain OSs like Alpine, which is heavily used for Docker, don't even use glibc as their C library, so again, another difference.
"In any case, if you have a Node.js module or code with a native C/C++ build, that runs shell commands, that writes to disk (not allowed besides /tmp in Lambda) or makes assumptions about the OS, your "simple" function will absolutely return different results."
This is true, but it's not Lambda qua Lambda. That's just normal production vs. testing environment issues, with the same basic solutions.
Lambda may offer some minor additional hindrances vs. something like Docker, but I wouldn't consider that catastrophic.
You are absolutely right that you could recreate a similar environment to Lambda in Docker. But you would first need to reverse engineer Lambda's environment to discover how it is actually configured and the limits that are set.
Even if you did find a way, you would still need to keep it up to date in case AWS decides to update that environment.
Logged in to say that this has actually been done (not by me) and my team has been finding it very helpful for local “serverless” SDLC: https://github.com/lambci/docker-lambda. It‘s billed as “A sandboxed local environment that replicates the live AWS Lambda environment almost identically – including installed software and libraries, file structure and permissions, environment variables, context objects and behaviors – even the user and running process are the same.” We test our functions both mocked and against local deployments of that lambci container.

There are also Lambda “layers” (archives of extra code and content, such as custom runtimes, shared across functions), but we have not used that feature at this point. Interesting space with lots of room for improvement in this tool chain though, for sure.
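For reference, the workflow is roughly this (a sketch; the image tag, handler name, and event payload are assumptions for illustration): mount your function into the docker-lambda image and invoke it the way Lambda would.

```shell
# Hedged sketch: invoke index.handler inside the lambci/docker-lambda image
# so the test executes against an environment that mirrors live Lambda
# (file layout, libraries, user, and runtime version).
docker run --rm \
  -v "$PWD":/var/task \
  lambci/lambda:nodejs12.x \
  index.handler '{"name": "world"}'
```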
I’m not 100% sure as I didn’t create the image (though I’m evangelizing as someone who has found it truly helpful for daily dev). I believe the creators tarball’d the entire distro/execution environment from a running Lambda, so the file system layout and libs likely match Amazon Linux, if that’s the default Lambda execution distro. If not, I assume it matches whatever the default is.
At least the Docker image used by AWS SAM CLI is created by AWS.
Also, you compile before packaging, so your dev/CI system already has to be able to compile for Lambda, independently of testing/debugging with Docker.
> > Writing a function in node 12 and then running it on node 4 and throwing your hands in the air cos it didn’t work isn’t the fault of Lambda.
> It's great to see that factual evidence is answered with ad-hominem by the Lambda hype crowd.
I don't think that was a personal attack.
We've answered technical questions with technical answers.
- You have a definition of stateless which includes having no persistence layer, which is at best at odds with the industry.
- You think serverless was created with AWS Lambda which we've been kind about, but most people would say you're simply wrong.
- You're advocating for containers, which are well known for having their own hype as people write their own cloud providers on top of the cloud provider their employer pays for with dubious benefit.
Saying that local dev and Lambda are different is a strawman. How is that harder than developing on a Mac or Windows (or even Linux) and then testing on a different OS and config via CI/CD?
You shouldn't be testing "on your machine" - that's the oldest excuse in the book!
You should build your function in a container based on AWS Linux, just the same as you should for a Lambda deploy. That guarantees you the same versions of software, packages, libraries, etc. It makes it possible for me to develop Lambda functions on a Mac and test binary-for-binary to the deployed version.
"Nothing you mentioned has anything to do with the ability to test a Lambda" is not ad-hominem, it's a statement of fact.
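The build-in-a-container step can be sketched like this (the image tag and file names are assumptions; lambci also publishes dedicated build images, which is what this uses): install and compile dependencies inside an Amazon Linux based image so native modules match the deployed runtime.

```shell
# Hedged sketch: run the install/build inside an Amazon Linux based build
# image so native modules are compiled against the same OS and libc the
# deployed Lambda function will see, then package the result.
docker run --rm -v "$PWD":/var/task lambci/lambda:build-nodejs12.x npm ci
zip -r function.zip index.js node_modules
```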
Why not then have lambda run the same container you can run and test locally?
I don't use lambda but we have our jenkins spin up the same ec2 to run tests that we would spin up to do development so that we never run into this problem.
I'm not sure I understood your question correctly.
If you mean running a Docker container in Lambda, that is, to my knowledge, not possible.
You could schedule Docker tasks in AWS ECS (their managed container service), but it's not meant for anything realtime; it's more for cron-job-type tasks.
If you mean emulating the Lambda environment in Docker, then I wrote an answer with the difficulties of doing that below to another user.
No, your lockfile doesn't care about build steps so any post-install script might run differently for the many other reasons listed.
> Agree, but how much does one Linux+systemd differ from another Linux+systemd? How much does the FS?
Plenty. For example filesystem change events are known to have filesystem and OS dependent behaviours and quirks / bugs.
When a Node module runs a shell command, it's possible that you have a BSD vs a GNU flavour of a tool, or maybe a different version altogether.
The Linux user with which you are running the function might also have different rights which could become an issue when accessing the filesystem in any way.
> VMs, docker and having to care about and manage isolation platforms is the reason people started using serverless.
Maybe, but serverless doesn't answer those questions at all. It just hand waves testing and vendor independent infrastructure.
Then you're not talking about dependency versioning, are you? You're talking about install order. In practice it hasn't been an issue. I should find out how deterministic install order is, but I'd only be doing this to win a silly argument rather than anything that has come up in nearly a decade of making serverless apps.
> For example filesystem change events are known to have filesystem and OS dependent behaviours
> When a Node module runs a shell command, it's possible that you have a BSD vs a GNU flavour of a tool
Are you generally proposing it would be common to use an entirely different OS? Or a non-boring extX filesystem?
All your issues seem to come from edge cases. Like if you decide to run FreeBSD or ReiserFS locally and run a sandbox in it, fine, but know that's going to differ from a Linux / systemd / GNU / extX environment.
> > VMs, docker and having to care about and manage isolation platforms is the reason people started using serverless.
> Maybe, but serverless doesn't answer those questions at all.
Serverless exists precisely to answer the question. I can throw all my MicroVMs in the ocean with no knowledge of dockerfiles, no VM snapshots, no knowledge of cloudinit, no environment knowledge other than 'node 10 on Linux' and get my entire environment back immediately.
> Then you're not talking about versioning are you? you're talking about install order.
I didn't mean build order but install scripts and native module builds.
The first type can create issues when external resources are downloaded (Puppeteer, Ngrok, etc.), which themselves have different versions or which fail to download, in which case the Node.js module falls back to another solution that behaves slightly differently.
The second type can occur when you have, say, Alpine Linux, which uses musl, while Amazon Linux uses glibc, or when the native module tries to link against a shared library that is supposed to exist but doesn't.
> Are you generally proposing it would be common to use an entirely different OS? Or a non-boring extX filesystem?
I haven't checked but Amazon Linux by default uses XFS on EBS disks so I wouldn't be surprised if Lambda's used the same. So not a boring extX filesystem. ZFS is also relatively common.
> Serverless exists precisely to answer the question.
No, it clearly doesn't, because your function will fail locally and succeed in Lambda, or the reverse, exactly due to the issues I mentioned in my various comments here, and you will be left debugging.
Debugging which starts by finding exactly the differences between the two environments which would have been solved by a VM or Docker.
> I didn't mean build order but install scripts and native module builds.
OK. Then you're still not talking about your dependencies being different. The dependencies are the same, they're just particular modules with very specific behaviour...
> external resources are downloaded (Puppeteer, Ngrok, etc.), which themselves have different versions or that fail to download
That's more a 'heads up when using Puppeteer' than an indictment of serverless and a call to add an environment management layer like we did in 2005-2015.
> Linux by default uses XFS on EBS disks so I wouldn't be surprised if Lambda's used the same.
That's worth checking out.
> Debugging which starts by finding exactly the differences between the two environments which would have been solved by a VM or Docker.
I see what you're saying, but planning your whole env around something like a given puppeteer module dynamically downloading Chrome (which is very uncommon behaviour) isn't worth the added complexity.
> No, your lockfile doesn't care about build steps so any post-install script might run differently for the many other reasons listed.
You shouldn’t be uploading your node_modules folder to your deployed Lambda, so this is an issue of your development environment, not Lambda.
> Maybe, but serverless doesn't answer those questions at all.
“Serverless”, or Lambda/Azure Functions, etc., is not a silver bullet that solves every single scenario. Just like Docker doesn’t solve every single scenario, nor do cloud servers or bare metal. It’s just another tool for us to do our job.