> Why is it catching on? In the best light, because it actually is a tremendous productivity accelerator. In the worst light, because we live in a world that incentivizes "fake it til you make it".
It's great at a few things and pretty good at a lot of things. In my view, the thing it's the absolute best at is churning out low-value, rarely-read communication. There is a massive amount of that type of communication - spam, student essays, procedural documents for compliance. There are loads of jobs that need to produce that sort of thing regularly, and it's a godsend for them.
Honestly, I've had a great experience because I've read a lot of experiences like yours and stuck with pages router & Next 12, which works nicely for everything I've needed it to do.
There might be a point where app router is stable & smooth, but it's pretty clearly not right now, so I haven't really seen the need to upgrade. I think there was a pretty decent comms issue about its stability from both the Next and React teams, but I have a hard time faulting an otherwise fairly stable and useful framework for adding features when they're not breaking the existing stable path.
The transition to Hooks was a bit bumpy as well, but I do think I prefer the code written with them to the code before. I think it's OK to wait a year or two to let the rough edges get filed down when these kinds of frameworks release big new feature sets.
Edit: I'll note that we don't use next/image or API routes either, both of which I've seen some churn / pain with. Possible I just hit on the framework when it was in a pretty happy place, and most of the new features or suggested defaults have had pain points that I haven't experienced.
My counterpoint to this is just that DDB (D&D Beyond) is not a super usable piece of software (it's slow, buggy, and expensive). It's got the massive advantage of having the rights to sell D&D content, but there's definitely room for disruption in that market.
Edit2: For an example of why gamedev toolkits don't necessarily produce performant, highly usable software, check out Dungeondraft (https://dungeondraft.net/). It's built with Godot and gets the job done, but as an application it's a total mess. I'm working on an alternative but (surprise!) it's a challenge.
For me, the reason I prefer Google to GPT is that it's much easier to assess the credibility of a Google answer than a GPT one. There are so many signals in any primary source. Some obvious ones: the number of upvotes, the site's reputability, the presence of (working) examples, and when the answer was written. Less tangible signals, like how closely the solution matches my problem statement and whether the author writes in a trustworthy manner, are also easy to pick up at a glance.
With GPT, I don't have any of that (or maybe I just need to re-learn it?)
Also, I get a useful answer from most Google queries. GPT performs at a significantly lower bar (at least right now) - it works well for some stuff but not others, and the time it takes to figure out whether it's going to do a good job (and maybe do a couple of rounds of prompt refinement) is much more than just Googling.
I wonder if there's something you can do as you're nearing the end of the context window to summarize the "state" of the world so far and feed that summary back in, so the model always has the most important details in context (rough sketch below). I'd imagine you'd lose some of the minor stuff by compressing context this way, but, hey, humans forget minor details too.
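A minimal sketch of what I mean, in TypeScript. Everything here is illustrative: the `Message` shape, the `complete()` callback, the window sizes, and the 4-chars-per-token heuristic are all stand-ins for whatever model stack you're actually using.

```typescript
// Sketch of rolling context compression. All names and values are placeholders.
type Message = { role: "system" | "user" | "assistant"; content: string };

// ~4 characters per token is a rough rule of thumb, not a real tokenizer.
const approxTokens = (msgs: Message[]) =>
  msgs.reduce((n, m) => n + Math.ceil(m.content.length / 4), 0);

const WINDOW = 8192;  // model's context limit (example value)
const RESERVE = 1024; // room left for the model's reply

// `complete` is whatever function sends messages to your model and
// returns its reply as a string.
async function compactIfNeeded(
  history: Message[],
  complete: (msgs: Message[]) => Promise<string>,
): Promise<Message[]> {
  if (approxTokens(history) < WINDOW - RESERVE) return history;

  // Keep the last few turns verbatim; summarize everything older.
  const recent = history.slice(-4);
  const older = history.slice(0, -4);
  const summary = await complete([
    {
      role: "system",
      content:
        "Summarize the key facts, decisions, and open questions from this conversation. Minor details may be dropped.",
    },
    ...older,
  ]);
  return [
    { role: "system", content: `Summary of earlier conversation: ${summary}` },
    ...recent,
  ];
}
```

Anything that only survives in the summarized portion gets lossy, which is exactly the "forgetting minor details" trade-off.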
They cover different needs - Tailwind is a low-level CSS library that provides a different (and some claim better) way of styling HTML. It doesn't provide markup, interactivity, etc.
AntD is a high-level component library that provides components with pre-built markup, JS interactivity, accessibility, etc. Any of the component libraries in this thread are a good point of comparison.
You might use Tailwind if you're building out your own components, or styling a page that doesn't need much interactivity. It is fast, lightweight, and easy to integrate. You can compare it to any CSS-in-JS tool, SASS, and other styling solutions.
You'd typically turn towards a component library (like MaterialUI, Ant Design, or Tailwind UI) if you're looking to quickly build a webapp that needs a lot of interactivity out of the gate. These solutions are larger and heavier, but provide a lot more functionality (interactivity, accessibility).
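To make the contrast concrete, here's a minimal sketch (assuming a React project with Tailwind configured and antd installed): the same button assembled by hand from Tailwind utility classes versus pulled off the shelf from AntD.

```tsx
import { Button } from "antd"; // high-level: markup, styles, and behavior included

// Tailwind: you build the component yourself out of low-level utility classes.
export const TailwindButton = () => (
  <button className="rounded bg-blue-600 px-4 py-2 text-white hover:bg-blue-700">
    Save
  </button>
);

// AntD: the component ships its own markup, styling, and interactivity.
export const AntdButton = () => <Button type="primary">Save</Button>;
```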
I wonder if AWS will make more or less money from these outages?
Will large players flee because of excessive instability? Or will smaller players go from single-AZ to more expensive multi-AZ?
My guess is that no one will leave, and lots of single-AZ tenants who should be multi-AZ will use this as the impetus to do it.
Honestly, having events like this is probably good for the overall resilience of distributed systems. It's like an immune system: you don't usually fail the same way twice.
We (Netflix) begged them for years to create a Chaos Monkey that we could pay for. There were things we just couldn't do ourselves, like simulate a power pull or just drop all network packets on the bare metal. I guess not enough people asked.
CMaaS sounds amazing for resiliency engineering. There's so much I want to be doing to perturb our stack, but I don't know all the ways stuff can go wrong. Sure, I can DDoS it, kick services and servers offline, etc., but that's what, a few dozen failure modes? (A toy example of this kind of fault injection is sketched below.) Expertise in chaos would be valuable by itself. Not to mention being able to shake parts of the system I normally can't touch.
Side note: terraform is pretty good for causing various kinds of chaos, deliberately or otherwise.
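For one small illustration of the sort of perturbation I mean, here's a toy fault-injection proxy in Node/TypeScript. The fault rate, ports, and target are made-up example values; it just sits in front of a service and randomly turns requests into slow 503s.

```typescript
import http from "node:http";

// Toy fault-injection proxy. All values are illustrative.
const FAULT_RATE = 0.1;                           // inject a fault into ~10% of requests
const TARGET = { host: "localhost", port: 8080 }; // the service under test

http
  .createServer((req, res) => {
    if (Math.random() < FAULT_RATE) {
      // Simulate a flaky dependency: hang for 5 seconds, then fail.
      setTimeout(() => {
        res.statusCode = 503;
        res.end("injected failure");
      }, 5000);
      return;
    }
    // Otherwise proxy the request through untouched.
    const upstream = http.request(
      { ...TARGET, path: req.url, method: req.method, headers: req.headers },
      (up) => {
        res.writeHead(up.statusCode ?? 502, up.headers);
        up.pipe(res);
      },
    );
    req.pipe(upstream);
  })
  .listen(9090); // point clients here instead of at the real service
```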
If my company is any indication, they're going to make more money since everyone will simply check the multi-AZ or multi-region checkboxes they didn't before and throw more money at the problem instead of doing proper resiliency engineering themselves.
It doesn't matter how much resiliency engineering you do: having everything in a single AZ is a risk. If that risk is acceptable, fine; if not, you need to think about multi-AZ from day one.
Auth0 ran in six AZs in two regions[1] and went down today[2], because they picked the wrong two regions. How many regions and AZs should someone pay for before they get reliability?
At a minimum they should have chosen regions not in the same time zone or general geographic area. US-West 1 and US-West 2 might well be safeguarding against a server failure but is not a disaster plan. If your customers are global, choosing multiple continents is probably prudent.
No one just "moves off" AWS. Once your apps are spaghetti coded with lambdas, buckets and all sorts of stuff, it's basically impossible to get off. More than likely, as you noticed, it will increase spending since multi-AZ/multi-region will become the norm.
No -- if they needed multi-region, they already would have migrated. If they don't need it, they won't have. The reason is simple -- it's expensive, as you say. I'm not an AWS fanboi or evangelist either -- I do have a pet theory that they gave their products shit names in order to make more money by making AWS skills less transferable to Google Cloud etc. S3 should be Amazon FTP, RDS should be Amazon SQL, etc.
Not at all the case. It was a regional outage that got Netflix to more than double our AWS spend going multi-region, so that outage netted them millions of extra dollars per year just from Netflix.
You’re underestimating the ability of eng leadership to not take these issues seriously. It only becomes a priority when there's sufficient pressure from the very top, or even from customers.
> There is no possibility that outages are good for AWS.
Do you know how many non-technical CEOs/boards/bosses have told their tech people that they need to go multi-region/cloud because that's what a one-paragraph blog post and/or tweet told them to do in response to last week's event?
Morally, I'm very pro high-density housing. Just like I'm pro public transit. But in reality, my revealed preference is a single-family house with a yard and the convenience a car brings. I'm not sure how to square these things.
It's also the case that adding infrastructure for cars makes everything else (walking, forms of public transit other than bus) less convenient, by taking up space and making everything far apart.
That's fine if you prefer it. The question is whether you can use the force of law to prevent other people from realizing their preference on their own property.
But the answer is obvious: if the force of law can be applied over home design preferences, there is no longer such a thing as private property. Yes, there is a clear need for building regulations related to safety, but that’s an entirely different argument.
The problem is that some of people's preferences include externalities. I prefer that my neighbor upstream not dam or pollute the creek that runs to my yard. I prefer that my neighbor behind me not put up a 100 foot concrete wall that blocks my view of nature. But it's their property, as you say. So people band together and decide that they want to live in a community that abides by certain rules and pass zoning laws, getting us where we are today. Externalities are a thing, and surely at least some zoning makes sense.