Ask HN: Complex dynamic websites on S3?
13 points by rajamaka on Jan 4, 2023 | 14 comments
Preface: I'm just a mere mortal DevOps engineer who hasn't gone down the path of creating a web app from scratch until now...

I am wondering if many people have gone down the route of creating complex dynamic websites on S3. I'm scoping out options for a quick way to bang out a POC, and I'm not seeing any particular reason to go down the traditional webserver route when I feel I can properly leverage my own skills by building the backend purely on AWS API Gateway/Lambda, interacting with a JS/React frontend served from S3.

Firstly, I am wondering if deploying anything beyond a basic static site on S3 is a fool's endeavour, from the perspective of any webdev jockeys around here.

Secondly, if not: is there any front-end tooling that is specifically designed around this area?

Apologies if this is a dumb question.



We do exactly this with a React app served from S3. Just think of it as a (mostly) dumb webserver. It's not gonna do any compute or server-side rendering/includes.

If you need server-side compute, Lambda and API GW are the way to go.
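As a concrete illustration of that split, here is a minimal sketch of a Python Lambda handler sitting behind an API Gateway Lambda proxy integration; the query parameter and message are made up for the example, but the event/response shape is the proxy integration format, and the function can be invoked locally with a fake event:

```python
import json

def handler(event, context):
    """Minimal API Gateway (Lambda proxy integration) handler sketch.

    `event` carries the HTTP request from the S3-hosted frontend; the
    returned dict becomes the HTTP response API Gateway sends back.
    """
    # queryStringParameters is None when the request has no query string
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

You can exercise it without deploying anything, e.g. `handler({"queryStringParameters": {"name": "rajamaka"}}, None)`.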


If by "dynamic" you still mean client-side rendered, then yes, it's the same as deploying any other static website, with or without JavaScript.

You can follow a tutorial like this one: https://www.newline.co/fullstack-react/articles/deploying-a-...


You are rediscovering the Jamstack (https://jamstack.org/), which is a developer-efficient way of getting stuff out there and done. The only disadvantages are “broken without JS” and possibly load times (hence you hear about “prerendering” to address that).


Edit: SSR might be the term, not prerendering.


Some tips if you go this way:

- Use CloudFront in front of your S3 bucket; that way you get custom domains and HTTPS, and you decrease requests (and their associated costs) to the S3 origin. Don't bother with S3 website hosting unless you're using another CDN to front the website. Even then, I'd rather use CDN -> CloudFront -> S3, just to avoid setting the bucket as public and to leverage some behaviors (see below)

- On CloudFront, remember to set a default root object (this is the index.html file)

- If you are using a router in your SPA, set CloudFront to handle 403 and 404 errors with custom responses: return 200 with the response page path set to /index.html

- Getting 403 on CloudFront? Check your bucket policy first!

- Page not updating? Remember to invalidate the cache after updating the files in the bucket.

- Want to avoid CORS shenanigans? You can add your API Gateway endpoint as a custom origin with no caching on the same distribution, but on another path.
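The default-root-object and SPA-router tips above map onto a CloudFront distribution config roughly like this CloudFormation sketch (logical name is illustrative, and required pieces like `Origins` and `DefaultCacheBehavior` are omitted for brevity):

```yaml
# Sketch: CloudFront distribution for an S3-hosted SPA
SpaDistribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      Enabled: true
      DefaultRootObject: index.html     # serve index.html at "/"
      CustomErrorResponses:             # SPA router fallback
        - ErrorCode: 403
          ResponseCode: 200
          ResponsePagePath: /index.html
        - ErrorCode: 404
          ResponseCode: 200
          ResponsePagePath: /index.html
      # Origins and DefaultCacheBehavior omitted; point an origin at
      # the (non-public) S3 bucket via an origin access identity/control.
```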


I've done it, but I don't recommend it for anything other than a learning exercise. Everything is set up so that it looks like it should be easy, but you actually have to finagle quite a few things to get it to work right, and making even small changes is a huge chore.

Admittedly, it has been a few years since I last tried this, perhaps things have improved or there is better tooling out there.


Could you give an example or two of unexpected things that need finagling? I'm planning something similar, so a heads-up would be good.


Yeah, basically anything that requires security, like say a login system, is a pain, and there are a lot of server quality-of-life features that you don't get with S3. Basically, with an S3 bucket, clients get the file system, the whole file system, and nothing but the file system.

For the security issue, the way I organized it was to have a second S3 bucket, for storing anything sensitive, that the Lambda could access but an external user could not. AWS has some dedicated services like Cognito and IAM for dealing with user login and verification, which in theory should make a lot of this easier, but I struggled with getting them to interface with everything else.
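One way to express that split is a bucket policy on the private bucket that grants access only to the Lambda's execution role; this is a sketch, and the account ID, role name, and bucket name are all placeholders (with S3 Block Public Access on, the bucket stays private by default and this grant is the only way in):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOnlyBackendLambdaRole",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:role/my-backend-lambda-role" },
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-private-data-bucket/*"
    }
  ]
}
```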

For the data storage, there were just a bunch of little hacks to deal with things like write order and file size, as the Lambda functions are stateless and the timing/ordering of their execution is not reliable. Luckily what I was building was never intended to scale, but if it ever did, those hacks would not hold up. I was unaware of it at the time, but AWS actually has a Simple Queue Service (SQS), which I haven't personally used but which seems like it would fix a lot of the issues.
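The write-ordering problem can be sketched locally: two stateless invocations may finish in any order, so one common workaround is to version each record and drop stale writes. This is a toy in-memory stand-in for the bucket (names and the versioning scheme are illustrative, not what the parent actually built):

```python
# Toy illustration: stateless handlers may complete out of order, so guard
# each write with a monotonically increasing version and ignore anything
# older than what is already stored (last-writer-wins by version, not by
# arrival time).
store = {}  # stand-in for an S3 bucket: key -> (version, value)

def conditional_write(key, version, value):
    """Apply the write only if it is newer than the stored version."""
    current = store.get(key)
    if current is not None and current[0] >= version:
        return False  # a stale write arrived late; drop it
    store[key] = (version, value)
    return True

# Two invocations land in the "wrong" order; the newer state survives:
conditional_write("user/42", version=2, value="newer state")
conditional_write("user/42", version=1, value="older state")  # rejected
```

SQS FIFO queues solve the same problem upstream by delivering messages in order within a message group, instead of patching it up at write time.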

But yeah, really the issue is that AWS splits up its functionality such that it is hard to do things unless you are using the one arcanely named service that is actually designed to do it. The services aren't very compatible with each other (they use different naming conventions for the same things, and are generally structured as if the teams working on them never talked to each other), and I personally find the documentation for a lot of these services to be very unclear. I'm sure some AWS power user would have a much easier time doing it, but I don't know why anyone would sink in enough time to get that familiar with all those different services when there are plenty of straightforward all-in-one options out there. Plus, I don't feel spinning up 13 different services is really in the spirit of the S3/Lambda site.

That all being said, I do really like S3 for static sites, and honestly adding just a little bit of dynamic functionality, like an email form, is not too bad.


Many thanks for the write-up! Sequence of writes is not something I had considered. Hoping to get around most of the drama by leaning on Workers KV from Cloudflare.


Does your application need SEO (Search Engine Optimisation) as well as dynamic content (something that depends on UGC)? If yes, then deploying the site via a traditional server is the best way to do it.

Edit: if you don’t want to manage a server, then you can use Next.js and Vercel and get the same benefits as server-rendered pages. So that’s an option too.


You'd probably spend less time on setup and infra by going with application platforms such as Netlify, Fly.io, Firebase, Platform.sh, etc., but if you want to hone your AWS skills, or have an AWS app skeleton and infra-as-code from work to recycle, why not.


Not a dumb question. The state of the art is not something nailed to the wall that's there forever; it's in constant flux and development.

Using Next.js on Cloudflare with Cloudflare Workers (edge functions) or Deno is basically as free and serverless as it gets. Cloudflare Pages are free.

Here's an article on it: https://engineering.udacity.com/hosting-next-js-at-the-edge-...

Then there is: https://remix.run/


And if you want Postgres, I highly recommend using Supabase to load dynamic content. They have a great free tier! I run my side projects on this setup: Cloudflare to host static HTML, supabase-js to load dynamic content. I'm not a JS guy past jQuery, so I generally hand-write my own JavaScript and build the static pages with some simple Hugo templating.

Works great.


This.

> backend purely running on AWS/ApiGateway/Lambda and interacting with a Js/React/S3 frontend

Use Next.js to get a React frontend set up with a directory (/pages/api/) containing your backend serverless JS/TS code.



