When exists? was removed, it was roughly an order of magnitude more popular in Ruby code on Github than exist?. I don't think you can argue from the position of improved readability when, given the choice, it was what the majority of people expected, wrote, and had to change. This change in Ruby is pretty difficult to defend. It didn't really do much but break people without giving them anything in return. It didn't improve maintainability of Ruby itself, it didn't make maintaining Ruby code easier, it didn't advance any secondary goals to improve Ruby.
10 years is not an especially long time period for a software project to be maintained. There's a reason the Linux project is so emphatic that it never breaks user space.
I agree. It would be simpler to shuffle the list of people, then split the list in half.
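A sketch of that shuffle-then-split approach in Python (names are my own):

```python
import random

def split_evenly(items):
    """Shuffle a copy of the list, then cut it in half.

    Because every ordering is equally likely after the shuffle,
    every way of dividing the items into two equal buckets is
    equally likely too.
    """
    shuffled = list(items)
    random.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

first, second = split_evenly(range(6))
```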
Here's a proof by counterexample (N=6) that this algorithm doesn't work:
Consider a list of 6 elements split into two buckets of 3 each. Elements 5 and 6 must be in the same bucket 50% of the time and in different buckets 50% of the time. For this to be true, after we place the first 4 elements into their buckets according to this algorithm, there must be space left in both buckets 50% of the time and in only one bucket 50% of the time.
Sequences of the first 4 coin flips where neither bucket is filled, followed by possible ending sequences, and the odds of the prefix:
AABB(AB, BA) = 1/16th
ABAB(AB, BA) = 1/16th
ABBA(AB, BA) = 1/16th
BBAA(AB, BA) = 1/16th
BABA(AB, BA) = 1/16th
BAAB(AB, BA) = 1/16th
Total: 3/8ths
Sequences of the first 3-4 coin flips where one bucket is filled, followed by possible ending sequences, and the odds of the prefix:
AAA(BBB) = 1/8th
BBB(AAA) = 1/8th
AABA(BB) = 1/16th
ABAA(BB) = 1/16th
ABBB(AA) = 1/16th
BBAB(AA) = 1/16th
BABB(AA) = 1/16th
BAAA(BB) = 1/16th
Total: 5/8ths
Since one bucket is filled 5/8ths of the time after 4 elements are processed according to this algorithm, the final two elements will be in the same bucket 5/8ths of the time, not the expected 4/8ths of the time.
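To double-check the case analysis, the flip-and-spill algorithm (as I understand it from the counterexample: each element's bucket is chosen by a fair coin, and if that bucket is full the element goes to the other one) can be enumerated exhaustively over all 64 flip sequences:

```python
from fractions import Fraction
from itertools import product

N = 6
CAP = N // 2  # each bucket holds half the elements

same_bucket = Fraction(0)
for flips in product("AB", repeat=N):  # each sequence has probability 1/2**N
    counts = {"A": 0, "B": 0}
    placed = []
    for choice in flips:
        # if the chosen bucket is full, the element spills into the other one
        if counts[choice] == CAP:
            choice = "B" if choice == "A" else "A"
        counts[choice] += 1
        placed.append(choice)
    if placed[4] == placed[5]:  # elements 5 and 6
        same_bucket += Fraction(1, 2 ** N)

print(same_bucket)  # → 5/8
```

The exhaustive count agrees with the prefix analysis: elements 5 and 6 land in the same bucket 5/8ths of the time rather than half the time.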
The ability to depend on library versions that do not exist is a misfeature. It should not be possible for someone to build a new version of their software and cause your software to cease building or running.
This doesn't just result in non-reproducible builds; it results in them at unpredictable times and, if you have servicing branches of your code, backward through time. That is not a good property when you need to know that what you are building today is the same as what you built yesterday, modulo intentional changes, or even that it will still build or run.
AWS provides the ability to guarantee you don't share physical hardware with other customers with "EC2 Dedicated Instances" (https://aws.amazon.com/ec2/purchasing-options/dedicated-inst...). I'm not familiar enough with other cloud provider offerings to say if they do or do not have similar features.
Generously assuming a cashier can check one item a second for an hour, he can check only 3600 items. "Tens of thousands" seems off by at least an order of magnitude.
I was thinking of some Costco cashiers I've seen, and they do far more than one item per second. That said, when you consider the downtime of processing the transaction, your estimate is probably not too far off. It makes the low-end cost of transponders comparable, though only just.
In New York, the "Supreme Court" is the trial-level court. Above that is the "Supreme Court, Appellate Division" and above that is the "Court of Appeals".