Economic models are complex and far from perfect, and we're still waiting for Hari Seldon's psychohistory models to be created to tie together macroeconomics and macropsychology.
I have bad things to say about him. But they're firmly on pause. What Trump wants for the Federal Reserve is far worse.
And anyone who is a hard-currency, quantity-theory-of-money conservative should also be appalled by it.
What Trump wants is far worse than anything the Federal Reserve's harshest critics accuse it of. Nobody on the right or left should support it. Only the billionaires will profit from the monetary disorder.
By design, kiss the ring. It’s a natural progression of the kind of grifting that has been occurring through 2025: shitcoin rugpulls, tariff announcements, etc.
> then attempted to murder a police officer with her car.
This is just false information. He was off to the left of her hood, and her wheels were hard to the right. He wasn't in front of her vehicle, she wasn't driving towards him, and she wasn't trying to murder anyone.
Maybe pg should come back to this board, and make HN his primary venue. Does he really like getting backscatter from all the bots and botlike humans on xitter? He could still syndicate there.
Meanwhile, HN certainly could use an opinionated benevolent dictator (or at least tone-setter), not mere "both sides" moderation (as heroic as it has been). With such an anchor we might be able to constructively discuss these problems without getting derailed by the handful of reactionary flamebaiters.
The moral of the story is: if you’re against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.
It is unfortunately very true. For about 20 years I moderated a very large forum. We tried so hard to be even-handed that it was somewhat comical, and then one day I decided to just clean house. Things improved remarkably after that, but there were always new people willing to see how far they could bend the rules. It's interesting how you get these new accounts on HN that immediately start lawyering with the rule book in hand. There is no way that that is organic.
Dan & Tom are so incredibly restrained, I'd be much more of a shoot-first-and-ask-questions-later type because the longer such behavior goes on the more people will believe it is acceptable.
> Also /bin vs /sbin, my belief is that the latter is meant for statically linked binaries such that if your system is corrupted these at least will keep working.
I think that became the rationale for /[s]bin vs. /usr/[s]bin (although based on the linked article, that may have been retconned a bit).
You were supposed to keep your root/boot filesystem very small and mostly read-only outside major updates. That meant you could boot to a small set of utilities (e.g. fsck) that would let you repair /usr or any other volume if it became corrupted.

I think the other poster is correct that stuff like fsck is supposed to go into /sbin because it is a "system" binary (but also statically linked, since /usr/lib isn't mounted yet), and it doesn't make sense to have it in user $PATHs since nobody other than root should really be running it.

Regardless, this is all deeply meaningless these days, particularly if you are running "ephemeral" infrastructure where if anything goes that badly wrong you just repave it all and start over.
Bloom Energy is a growing company which is just shifting towards profitability, positive free cash flow, and earnings. Those stocks are expected to have silly P/E ratios. They haven't had 2 years of declining sales.
The biggest thing that could be done to make rubygems faster is to have a registry/database of files for each gem, so that rubygems doesn't have to scan the filesystem on every `require` looking for which gem has which file in it.
That would mean that if you edited your gems directly, things would break: add a file, and it wouldn't get found until the metadata got rehashed. The gem install, uninstall, etc. commands would need to be modified to maintain that metadata. But really, you shouldn't be hacking up your gem library like that with shell commands anyway (and if you are doing manual surgery, having to regen the metadata isn't really that burdensome).
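To make that concrete, here's a rough sketch of what such an index could look like. The class name, the cache path, and the rebuild-on-install policy are all made up for illustration; only the Gem::Specification calls are the existing rubygems API, as far as I know.

```ruby
require "json"
require "fileutils"
require "rubygems"

# Hypothetical file-to-gem index: built once, persisted to disk, and consulted
# instead of scanning every gem's lib directory on each `require`.
class GemFileIndex
  INDEX_PATH = File.expand_path("~/.gem/file_index.json")

  # Walk every installed gem's require paths once and record which file
  # satisfies which feature name, e.g. "foo/bar" => ".../foo-1.2.3/lib/foo/bar.rb".
  def self.build
    index = {}
    Gem::Specification.each do |spec|
      spec.full_require_paths.each do |dir|
        Dir.glob("#{dir}/**/*.{rb,so,bundle}").each do |path|
          feature = path.delete_prefix("#{dir}/").sub(/\.(rb|so|bundle)\z/, "")
          index[feature] ||= path
        end
      end
    end
    FileUtils.mkdir_p(File.dirname(INDEX_PATH))
    File.write(INDEX_PATH, JSON.generate(index))
    index
  end

  def self.load
    @index ||= File.exist?(INDEX_PATH) ? JSON.parse(File.read(INDEX_PATH)) : build
  end

  # O(1) hash lookup, zero stat() calls; returns nil for unknown features.
  def self.resolve(feature)
    load[feature]
  end
end

# `gem install` / `gem uninstall` would have to call GemFileIndex.build again
# to keep the index in sync with what's actually on disk.
p GemFileIndex.resolve("rake/task")  # => a path if rake is installed, else nil
```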
I wrote some code to do almost this many years ago (if I recall correctly, it doesn't cache anything to disk, but builds the hash fresh each time, which can still result in a massive speed-up).
Probably obsolete and broken by now, but one of my favorite mini projects.
(And I just realized the graph is all but impossible to read in dark mode)
100%. Optimizing "bundle install" etc. is optimizing the wrong thing. You don't even need this to work for gems in general. It'd have solved a lot of problems just to have it work for "bundle install" in standalone mode, where all the files are installed to a directory anyway.
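For reference, standalone mode already vendors everything into one directory and writes a small generated setup file, so the app can boot without loading Bundler at all. If I remember the default paths right, usage looks roughly like this:

```ruby
# After `bundle install --standalone`, the gems live under ./bundle and
# Bundler generates ./bundle/bundler/setup.rb, which simply appends each
# vendored gem's lib directory to $LOAD_PATH.
require_relative "bundle/bundler/setup"  # no Bundler loaded at runtime

require "rake"  # resolved from the vendored directory like any other require
```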
But in general, one of the biggest problems with Ruby for me is how $LOAD_PATH causes a combinatorial explosion as you add gems, because every gem gets added to it, due to the lack of any scoping of requires to packages.

The existence of multiple projects to cache this is an illustration that this is a real issue. I've had projects in the past where starting the app took minutes purely due to requires, and where we shaved minutes off by crude manipulation of the load path, as most of that time was pointless stat calls.
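As a rough back-of-the-envelope illustration (this isn't literally MRI's lookup code, and the suffix list is simplified), the number of paths a single require can end up probing grows with the size of $LOAD_PATH:

```ruby
# Simplified model of the worst case: a feature that isn't found anywhere gets
# tried against every $LOAD_PATH entry with every extension before require
# finally raises LoadError.
SUFFIXES = %w[.rb .so]  # the real list is longer and platform-dependent

def candidate_paths(feature, load_path, suffixes)
  load_path.flat_map do |dir|
    suffixes.map { |ext| File.join(dir, feature + ext) }
  end
end

paths = candidate_paths("some/deeply/nested/file", $LOAD_PATH, SUFFIXES)
puts "#{$LOAD_PATH.size} load path entries => up to #{paths.size} filesystem probes for one require that misses"
```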
Yeah, chef-client got a lot faster, particularly on Windows, just by liberally using `require_relative` whenever possible and sprinkling `require "foo" unless defined?(Foo)` guards all over the codebase.
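Roughly what those two tricks look like (the relative path below is illustrative, not the actual chef-client layout):

```ruby
# require_relative resolves against this file's own directory, so it never
# walks $LOAD_PATH at all:
#   require_relative "win32/registry_helpers"   # instead of require "chef/win32/registry_helpers"

# The defined? guard skips the require call entirely (including the
# $LOADED_FEATURES scan) when the library's constant already exists:
require "json" unless defined?(JSON)
```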
At runtime rather than install time, yes. I did some prototyping on this back in the day; one of the issues is that the language lacks an efficient data structure to store that information, and you can't (easily) build one efficiently because instances are too heavy.
The more I write code in languages where I think hard about ownership ("does this method ultimately grab the object and throw a ref onto some long-lived data structure somewhere? Then it owns it, so I'd better clone it"), the more robust my code in other languages generally gets. Same with mutation. It's generally better to make a copy of something, mess with it, and throw it away than to try to mutate-then-unmutate or something like that, even though mutating in place might in principle be nanoseconds faster. It eliminates loads of spooky-action-at-a-distance bugs where things are getting mutated in one spot and used in another spot, when there should have been a copy in there somewhere.
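A toy Ruby version of that habit (names made up); the defensive copy is what keeps later caller-side mutation from reaching into the long-lived structure:

```ruby
class Registry
  def initialize
    @entries = []
  end

  # Defensive copy: the registry now owns its own snapshot of the items, so
  # the caller mutating their array afterwards can't change what we stored.
  def add_batch(items)
    @entries.concat(items.dup)
  end

  def size
    @entries.size
  end
end

items = ["a", "b"]
registry = Registry.new
registry.add_batch(items)
items << "c"          # caller keeps mutating their own array...
puts registry.size    # => 2, the registry is unaffected
```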
> It eliminates loads of spooky-action-at-a-distance bugs
This line of thinking so sickens me. Many things are not easy when done right. That is no excuse to avoid understanding how to do them right. Sure, making endless copies is easier. But this is why machines now need 16GB of RAM and four cores to run the calculator.
> But this is why machines now need 16GB of RAM and four cores to run the calculator.

This has more to do with the fact that the web stack has become the de facto app development platform, and thus we inherit the bloat and optimization oversights of that platform.
You're not going to make a 16GB calculator just because you personally prefer copies over shared ownership in a language that gives you the tools to avoid bloat in a myriad of ways.