I think a more accurate version of this is: unit tests were not only per-method but also per-functionality. This was often called BDD (Behavior-Driven Development), e.g. Ruby's Cucumber. Your intuition here is correct though.
I disagree with the "not only". The idea in XP is to write the test first. http://www.extremeprogramming.org/rules/testfirst.html You don't know how many methods/functions (if any) you're going to add to make it pass, so tests are explicitly per-functionality.
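To make the per-functionality point concrete, here's a minimal test-first sketch in Python (the Cart example is mine, not from the linked page): the test pins down a behavior before any code exists, and you then add however many methods it takes to make it pass.

    # Written first, against code that doesn't exist yet: it specifies
    # the functionality ("apply a percentage discount"), not a method.
    def test_discount_is_applied_to_cart_total():
        cart = Cart()
        cart.add(price=100)
        cart.apply_discount(percent=10)
        assert cart.total() == 90

    # Only afterwards do you write whatever makes it pass -- one
    # method or several; the test never had to care.
    class Cart:
        def __init__(self):
            self._prices, self._discount = [], 0
        def add(self, price):
            self._prices.append(price)
        def apply_discount(self, percent):
            self._discount = percent
        def total(self):
            return sum(self._prices) * (100 - self._discount) / 100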
Since the 90s, New Zealand's laws have been written in clear, modern, accessible English. The end result is that the broader population understands them better and can also reason about them while they're up for debate before being passed.
I think the ambiguity in the first two amendments has more to do with the specific text than with plain English itself being deficient.
I think the ambiguity of the first two amendments -- heck, the Fourth has been gutted almost to non-existence by all the exceptions the Supreme Court has carved out over the years -- comes from the desire of certain people, particularly ones in favor of government control, to control their fellow citizens in ways those citizens may very well be unhappy about.
This isn't just a Constitution problem, either: it happens with all law, to one degree or another, and at all levels of government, from HOAs all the way up to federal and even international ones.
The issue isn't the wording, though -- it's humans being human, for better and for worse. While we can try to mitigate the problems arising from humans being human, there's only so much we can do!
Sometimes it’s done to fit into an existing tool or database that has a preexisting length limit, or because the hash is used only as a locator rather than for integrity.
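For example (a sketch, assuming a hypothetical column capped at 16 characters): truncating a SHA-256 digest works fine as a locator, because a rare collision there just means a second lookup, not a forged artifact.

    import hashlib

    def locator_key(data: bytes, width: int = 16) -> str:
        # Truncated digest used purely as a lookup/shard key; nobody
        # relies on it to prove the content wasn't tampered with.
        return hashlib.sha256(data).hexdigest()[:width]

    locator_key(b"backup-2021.tar")  # fits e.g. a CHAR(16) column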
I'm guessing the decision comes down to ease of use for people who want to participate in mirroring. My understanding is IPFS tends to require more infrastructure, and still requires someone to pin the data.
Many BitTorrent clients let you click a button to keep seeding the data over time.
On the first point, OPA is much older than OpenFGA. To really illustrate the point, OPA graduated as a CNCF project about a year before OpenFGA had its first code drop in the public GitHub repo. The OpenFGA people are aware of OPA, and I'm sure they learned from the tradeoffs OPA made.
To the main point, what you described reflects the current trend in authorization: define a data model, define data that adheres to that model, write declarative rules that consume that model, and make a decision based on those rules.
Where things really start to differ is the kind of data they bind against and how you write rules. E.g. OPA is often used for ABAC (attribute-based) or RBAC (role-based) access control, while OpenFGA is aimed at ReBAC (relationship-based). Each has its own complexity tradeoffs, depending on the system being implemented. How easy or difficult a system makes these kinds of checks has a significant impact on how you write policies.
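A toy sketch of that difference in Python (the names are mine, not either project's API): the ABAC check binds against attributes of the user and resource, while the ReBAC check looks up relationship tuples of the (object, relation, subject) shape that Zanzibar-style systems like OpenFGA store.

    # ABAC-style: the decision is a predicate over attributes.
    def abac_allow(user: dict, doc: dict) -> bool:
        return (user["department"] == doc["department"]
                and user["clearance"] >= doc["level"])

    # ReBAC-style: the decision is a tuple lookup (and, in real
    # systems, a graph walk for indirect relations like group
    # membership).
    tuples = {("doc:readme", "viewer", "user:anne"),
              ("doc:readme", "editor", "user:bob")}

    def rebac_allow(obj: str, relation: str, subject: str) -> bool:
        return (obj, relation, subject) in tuples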
Yeah, that's what I've noticed too. Conceptually, they're more or less the same, offering a choice of RBAC, ABAC, or ReBAC, and each offers its own DSL (e.g. Oso, Ory Keto, etc.) and deployment strategy. It's been a bit hard to pick one, honestly, but I guess I'll just have to use them to find which one fits me.
Not sure why that matters, but OpenFGA is an implementation of Zanzibar, which isn't exactly new. There are many similar implementations to choose from should one want to model authorization via a graph database.
Interestingly, the fastest CPU-based network switches tend to do full kernel bypass. The kernel is generally slow compared to OVS or VPP, especially when those run on top of something like DPDK.
Kernel bypass in DPDK grants the application direct access to DMA buffers so that the kernel is no longer involved. This is not because the kernel is slow, but because many small syscalls are expensive and putting your entire app in the kernel is a bad idea.
There is no kernel bypass in wireguard-go, just a fast user-space implementation with smart use of syscalls to minimize the overhead of being split between user space and kernel space.
With io_uring, DPDK-style kernel bypass might stop making sense altogether.
It depends on what you are trying to do though. I don’t think the kernel has an easy path to operating on a set of packet headers as a vector at this point. Not saying it can’t happen, but it’s an area where user space is already ahead.
For reference, there was a previous test between two pods on separate nodes in k8s where IPsec encap/decap achieved 40gbps, which was line rate for the Intel NICs used.
I do agree that io_uring will negate the need for DPDK for many use cases though; it will likely be a much simpler and more secure path than DPDK.
It's not that the "kernel is slow"; the kernel left to its own devices is plenty fast. The reason is that when you want to make decisions about a packet in userspace (vs telling the kernel what to do with it via various interfaces), all that kernel logic would just be overhead.
It's similar for applications: if you can, say, decode a whole DNS packet in one go, you don't really want the kernel to spend time decoding the UDP packet and then have to decode the rest of the packet yourself; doing it in one step is much faster.
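A rough illustration of "one step" parsing in Python (assumes a raw IPv4 frame with no IP options and an actual DNS payload; real code would validate each layer):

    import struct

    def parse_udp_dns(pkt: bytes):
        ihl = (pkt[0] & 0x0F) * 4        # IPv4 header length
        if pkt[9] != 17:                 # protocol field: 17 = UDP
            return None
        # UDP header (4 shorts) and DNS header (6 shorts), unpacked
        # straight out of the same buffer -- no layer-by-layer copies.
        sport, dport, _ulen, _csum = struct.unpack_from("!4H", pkt, ihl)
        dns_id, flags, qd, an, ns, ar = struct.unpack_from("!6H", pkt, ihl + 8)
        return sport, dport, dns_id, flags, qd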
There are some applications where the ability to vectorize the headers and operate on them with SIMD helps. These types of apps tend to pin a full core to do nothing but packet processing, though. Also, syscalls are expensive; a lot of work is going into making the APIs async while avoiding syscalls.
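As a loose userspace analogy in Python (hypothetical buffer of fixed-size UDP headers packed back to back), the win comes from classifying a whole batch in one vectorized pass instead of a per-packet loop:

    import numpy as np

    udp_hdr = np.dtype([("sport", ">u2"), ("dport", ">u2"),
                        ("length", ">u2"), ("csum", ">u2")])

    def count_dns(batch: bytes) -> int:
        hdrs = np.frombuffer(batch, dtype=udp_hdr)
        # One comparison over every header at once; NumPy runs this
        # as a tight (SIMD-friendly) C loop rather than Python code.
        return int(np.count_nonzero(hdrs["dport"] == 53))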
Since they're L2/L3 devices by nature, I wouldn't expect switches to ever support WireGuard. I also haven't heard of any hardware WireGuard implementation yet. The fastest implementation so far might be TNSR, which just squeaks in under $2,000.
Really depends on what you consider a “switch” to be. Most of Mikrotik’s CRS series supports full-fat RouterOS, which includes WireGuard support. The CPU in the CRS line is much cheaper than in the proper routers (the CCR series), though, so if you’re trying to do much more than a basic firewall and NAT on a residential connection (most of them can probably handle 1Gbps fine), performance will not be great (even my CCR2004 can only handle ~3Gbps of IPsec traffic).
If you take out a loan for or lease a car, isn't insurance mandatory to complete the transaction? If so, how are most people still driving Kias off the lot if they can't get insurance?
Or are people getting insurance and then finding their policies unrenewable?