
We always called these "four-phase migrations". An old Stripe article used similar naming[0].

[0]: https://stripe.com/blog/online-migrations


I’ve heard that one too. I think the key insight is that you need to “stop the bleeding” (stop creating more old data that needs to be migrated) before you do any backfilling. That’s why I always called it dual writing: the dual write is the stop-the-bleeding step.
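
Roughly, the four phases look something like this - a toy Python sketch with in-memory dicts standing in for the old and new stores (save_order/get_order/transform are made-up names, not from any particular framework):

    # Hypothetical in-memory stand-ins for the old and new stores.
    old_db: dict[int, dict] = {}
    new_db: dict[int, dict] = {}

    def transform(order: dict) -> dict:
        # Whatever reshaping the migration needs; identity here.
        return dict(order)

    # Phase 1: dual write ("stop the bleeding"). Every new record lands
    # in both stores; the old store remains the read path for now.
    def save_order(order_id: int, order: dict) -> None:
        old_db[order_id] = order
        new_db[order_id] = transform(order)

    # Phase 2: backfill rows created before dual writing started.
    # The write is idempotent, so the backfill can be re-run safely.
    def backfill() -> None:
        for order_id, order in old_db.items():
            new_db.setdefault(order_id, transform(order))

    # Phase 3: switch reads over to the new store (usually behind a flag).
    def get_order(order_id: int) -> dict:
        return new_db[order_id]

    # Phase 4: drop the old-store write in save_order and retire old_db.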


This video is a great overview of the history and the recent hearings - I came here to link it.

Not sure I agree with his conclusion though - once all manufacturers are required to include the technology, surely they will still compete on price and find ways to get cheaper models to market? Freed from the risk of patent infringement, they can innovate on cheaper approaches to the same problem.

He also argues for riving knives and blade guards as an alternative, which are great, but not all cuts can be made with them in place.

As a hobby woodworker that sometimes makes mistakes, I've wanted a SawStop for a long time but have been stymied by the cost, so maybe I'm just being optimistic.


At the risk of making a classic "I have a few qualms with this app" blunder, I'm not super clear on what this has over other durable workflow solutions.

It seems to take the durable workflow idea and lock you into a specific language, operating system, and database, when other projects in the same space give you choice over those components.


Co-founder here. Thanks for the questions! A couple advantages DBOS has:

1. It's a serverless platform. It manages application deployments, providing a simpler experience than trying to bolt a durable execution framework onto an app deployed on Kubernetes.

2. Transactional guarantees. DBOS does durable execution in the same transactions as your business logic, so it guarantees exactly-once execution for most operations, while other workflow solutions are at-least-once (there's a rough sketch of the idea after this list). More details here: https://docs.dbos.dev/explanations/how-workflows-work

3. Database time travel, which greatly enhances observability/debugging/recovery. More details here: https://docs.dbos.dev/cloud-tutorials/timetravel-debugging
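
To make point 2 concrete - this is not DBOS's actual API, just a generic sketch of the idea using sqlite3 - recording a "this step already ran" marker in the same transaction as the business write is what turns at-least-once retries into exactly-once effects:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    with conn:
        conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
        conn.execute("CREATE TABLE step_results (workflow_id TEXT, step INTEGER, "
                     "PRIMARY KEY (workflow_id, step))")
        conn.execute("INSERT INTO accounts VALUES ('alice', 100)")

    def debit_step(workflow_id: str, step: int, amount: int) -> None:
        # The business write and the step checkpoint commit (or roll back)
        # together, so a crash-and-retry can never apply the debit twice.
        with conn:
            done = conn.execute(
                "SELECT 1 FROM step_results WHERE workflow_id = ? AND step = ?",
                (workflow_id, step)).fetchone()
            if done:
                return  # replay after a crash: skip, don't re-apply
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = 'alice'",
                         (amount,))
            conn.execute("INSERT INTO step_results VALUES (?, ?)", (workflow_id, step))

    debit_step("wf-1", 1, 30)
    debit_step("wf-1", 1, 30)  # a retried delivery is a no-op
    print(conn.execute("SELECT balance FROM accounts").fetchone())  # (70,)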


> The community that formed around building open source “standard” Macaroons decided to use untyped opaque blobs to represent candidates.

I assume "candidates" was supposed to be "caveats" - and as an author of a "standard" macaroon implementation, I completely agree that this is the biggest downfall of Macaroons. With no common caveat language (and no independent "dischargers") it really limits their use to within a single org. And at that point you're basically asking everyone to invent their own token format anyway.

Though I don't personally use them much anymore - I think the use-cases for Macaroons are much more limited if you have a Zanzibar! - I appreciate seeing Macaroon discussions pop up, and this post and the related discussions it linked out to were a great read.


Zanzibar and macaroons are actually pretty complementary.


To be fair, this is a mistake that started with the Google paper, and everyone else just copies the mistake.

The paper calls them Macaroons as a play on (browser) Cookies with layers (of caveats) - so clearly they meant macarons as well, since a macaroon doesn't have layers. Or at least, that's always been my interpretation of the name. It's possible it was just an arbitrary play on HMAC cookies and not the layers?


This is interesting; I have the opposite opinion. I dislike Helm for public distribution because everyone wants _their_ thing templated, so you end up templating every field of your chart and it becomes a mess to maintain.

Internal applications don't have this problem, so you can easily keep your chart interface simple and scoped to the different ways you need to deploy your own stack.

With Kustomize, you just publish the base manifests and users can override whatever they want. Not that Kustomize doesn't have its own set of problems.


Kustomize also supports Helm charts as a "resource", which makes it handy to do last-mile modifications of values and "non-value-exposed" items without touching or forking the upstream chart.


There are good common libraries that expose every property by default, so you don't need to make everything template-able yourself:

https://github.com/bjw-s/helm-charts/tree/main/charts/librar...


How would you feel if you could use Starlark (if you're familiar with it) to parameterize a la Helm, and then add more Starlark commands later to update previously-defined infra a la Kustomize?

Full disclosure: our startup is trying to build a tool where you don't have to pick, so we're trying to test that hypothesis.


> my wife is still receiving bills from the birth of our most recent child, 18 months ago

I've been dealing with this as well, and the uncertainty has been the most frustrating thing.

Medical bills from the same institution should be required to be high watermarks - i.e. if you give me a bill in March, you can't send me a bill in April that has charges from February that _weren't on the bill from March_. It feels like fraud (and maybe it is, but who has time to figure that out?)


Also a happy Lutron fan, but I went with RadioRA2. It's a bit "smarter" but it's very reliable, not connected to the internet, and some basics can even be programmed without the management software.

One thing that stands out with Lutron products is their use of a unique spectrum[0], unlike almost all other smarthome products that share the same noisy bands.

[0]: https://assets.lutron.com/a/documents/clear_connect_technolo...


Quality aside, it's the paywall that grinds my gears.

Wirecutter articles are essentially long, well-researched ads for the products they (affiliate) link to.

I always found these ads useful, at least as a starting point. But putting a paywall in front of the ads rubs me the wrong way, like an unwritten contract was broken.


I've seen the sentiment in this article pop up in a few places, which I'd summarize as: policy languages like OPA and Cedar are fast to evaluate and simple to write, so you should use them for all of your authorization needs.

But policy engines are only really fast and simple if they already have all of the data they need at evaluation time.

If you look at the examples in the Cedar playground[0], they require you to provide a list of "entities" to Cedar at eval-time. These entities are some (potentially large) chunk of your application's data. And while the policy evaluation over that data may be fast, the round trip to your database is probably not. And then you start to think about caching, data consistency, and so on, and suddenly you're thinking about a lot of the problems that Zanzibar was designed to address (but you're on your own to build it out).
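
A toy sketch of what I mean - plain Python dicts standing in for your database and for the engine's entity model, nothing here is Cedar's actual API. The evaluate() call is the fast part; the entity gathering around it is the part that hits your database and raises the caching and consistency questions:

    # Toy in-memory "application database" standing in for real tables.
    DOCS = {"d1": {"id": "d1", "owner": "u2", "editor_groups": ["eng"]}}
    GROUPS = [{"id": "eng", "members": ["u1"]}]

    # The cheap part: the policy, a pure function over in-memory entities.
    def evaluate(user_id: str, doc: dict, groups: list[dict]) -> bool:
        member_of = {g["id"] for g in groups if user_id in g["members"]}
        return doc["owner"] == user_id or bool(member_of & set(doc["editor_groups"]))

    # The expensive part: gathering the entities the policy needs. In a
    # real app each lookup is a database round trip, or a cache you now
    # have to keep consistent with the source of truth.
    def can_edit(user_id: str, doc_id: str) -> bool:
        doc = DOCS[doc_id]
        groups = [g for g in GROUPS if user_id in g["members"]]
        return evaluate(user_id, doc, groups)

    print(can_edit("u1", "d1"))  # True, via membership in the "eng" group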

IMO policy engines are best suited for ambient request data - things you already know about a request because of a session, a route, or a network path - and for policies that make sense to manage on the same lifecycle as your application.

Disclaimer: I work on SpiceDB[1], a Zanzibar implementation, but I do also like policy engines.

[0]: https://www.cedarpolicy.com/en/playground

[1]: https://github.com/authzed/spicedb


> And then you start to think about caching, data consistency, and so on

If you are looking at OPA or Cedar as a standalone engine, this is the correct assumption. To avoid this hassle, there is an open-source tool called OPAL[1] that lets you run the policy engines with all the sync work handled, without any investment in custom solutions. OPAL has a ready mechanism for data fetching and synchronization, so you can plug it into your application's data and not worry about keeping the engine in sync.

Disclaimer: I'm one of the OPA maintainers.

[1] https://github.com/permitio/opal


The article was comparing OPA/Cedar to Zanzibar, which is why my head went there. I did go looking for info on how OPAL deals with caching and consistency and found these:

- Authz data is kept in memory, so what you can authorize over is limited by the memory of the box you run OPAL/OPA on. The docs also mention sharding, but I'm not clear on how you actually do that with OPA. [0] Maybe there's another doc that I missed.

- You can get a token representing the last time data was synced to the cache in an OPAL health check, but I'm not clear on how you'd use it to ensure consistency in your application since hydrating the cache is asynchronous. [1]

Anyway, those are the types of things Zanzibar is concerned with, so that comparison (instead of Cedar) would've made more sense to me. Without spending more time on it, I'm not sure if I've represented OPAL correctly above; that's just what I found when I went looking.

[0]: https://docs.opal.ac/faq/#handling-a-lot-of-data-in-opa

[1]: https://docs.opal.ac/faq/#how-does-opal-guarantee-that-the-p...


> I'm not clear on how you actually do that with OPA

The sharding is managed from the OPAL control plane: when you configure the data sources, you also configure how the sharding works.

> ensure consistency in your application since hydrating the cache is asynchronous.

OPAL uses eventual consistency for cache reliability: you can know that data has changed even before you know what changed.


> If you look at the examples in the Cedar playground[0], they require you to provide a list of "entities" to Cedar at eval-time. These entities are some (potentially large) chunk of your application's data.

This is a primary reason we stopped looking at AWS Cedar. If you don't know all of the policies that might apply to your request (b/c policy authors might be different than dev teams), how do you know what entities need to be sent in the request context? And in an authz system with many different entity types (and stores), gathering them all, even if you know which ones to get, would be non-trivial. Repeat for every system using Cedar, or build some SPOFish thing in the middle.

That, and pricing seemed pretty terrible for us.

