Harris was the vice president, and was therefore the closest thing to a small-d democratic choice amongst the available options. Otherwise... Why Harris and not Newsom?
The better choice would have been Biden stepping out earlier and having a real primary, of course.
That’s why Biden should have picked a vice presidential candidate who could run a good campaign in 2024. It’s not like Biden’s age was an unknown factor. Biden himself floated the idea of only running for one term back in 2020.
Agreed. If the incumbent doesn't run (term limited or retires), the next nominee is almost always the VP if they want the job. Just look at the history:
Roosevelt -> Truman
(not Truman -> Barkley: the exception)
Eisenhower -> Nixon
Johnson -> Humphrey
Nixon -> Ford
Reagan -> Bush
Clinton -> Gore
(not Bush -> Cheney, who retired)
(not Obama -> Biden, who temporarily retired)
Biden -> Harris
I can't fathom how a party can pick a VP who isn't an excellent future candidate. JD Vance?
Any amount of friction reduces the amount of slop. What proportion of clankers are going to realize that they need to warm up the accounts two weeks in advance? Answer: a proportion that you're never going to see with that barrier in place.
With a few layers of defense, you'll weed out almost all of the bad actors. Without strong monetary incentives for spamming, you also avoid most persistent actors.
With enough layers you will also weed out almost all of the good actors. Normal people are busy and don't have the time or patience to jump through too many hoops to promote their cool new research, or to respond in a thread where someone linked it.
Which in itself is annoying, IMO. It creates a whole separate set of problems. You need karma, so people post in karma-farming subs to get a few crumbs. Then you get preemptively auto-banned from a dozen top subreddits for farming.
Reddit hasn't been as overrun by bots yet, for the most part, although I don't know how long it can hold out.
We live with GenAI, and the human-to-bot ratio is now leaning in a different direction. The old norms are dead, because the old structures that held them up are gone.
This idea in this thread that “more hoops means losing participation” keeps assuming that the community is unaffected by the macro trends.
It’s weirdly positing that HN posts and users are somehow immune to, or unaffected by, those trends.
Eh, even for a senior engineer, dropping into a new codebase is greatly helped by an orientation from someone who works on the code. What's where, common gotchas, which tests really matter, and so on. The agents file serves a similar role.
Except that most READMEs are seemingly written more for end-users than for developers; and even CONTRIBUTING files often mostly just document the social contribution process + guidelines rather than providing any guidance targeted toward those who would contribute. There’s a lot of “top-level architectural assumptions” detail in particular that is left on the floor, documented nowhere. Which “works” when you expect human devs to “stare really hard and ask questions” until they figure out what’s being done differently in this codebase; but doesn’t work at all when an LLM with zero permanent learning capability gets involved.
I strongly support this! For the last few years, I've been signing up as a Hugo voter, and read a bunch of great stuff that I otherwise would have missed. Sometimes the best books are a bit divisive, but still make the shortlist. (Saint of Bright Doors, for example...)
They don't actually solve the problem in 2 seconds - at that point, they are running on a sample of only 3,000 vectors! Then they get it down further, but still find it will take a loooooong time to get through all 3B:
"With these small improvements, we’ve already sped up inference to ~13 seconds for 3 million vectors, which means for 3 billion, it would take 1000x longer, or ~3216 minutes." ...which is about two days.
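Taking the article's quoted ~3216 minutes at face value, the "about two days" conversion checks out:

```python
# Convert the article's quoted figure into days.
minutes = 3216
hours = minutes / 60    # ~53.6 hours
days = hours / 24       # ~2.23 days
print(round(days, 2))   # prints 2.23 -- "about two days"
```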
Depending on how 'one-off' the query is, sequential read is the right answer. The alternative is indexing the data for ANN, which will generally require doing the equivalent of many queries across the dataset.
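The sequential-scan option is just one pass over the data. A minimal sketch with numpy (shapes and sizes made up for illustration), computing exact cosine-similarity top-k against every vector:

```python
import numpy as np

def brute_force_top_k(query, vectors, k=5):
    """Exact nearest neighbors by scanning every row once.
    O(n * d) per query, but there is no index to build or maintain."""
    # Normalize so that a dot product equals cosine similarity.
    q = query / np.linalg.norm(query)
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    scores = v @ q                       # one sequential pass over all rows
    top = np.argsort(scores)[::-1][:k]   # indices of the best matches
    return top, scores[top]

rng = np.random.default_rng(0)
data = rng.standard_normal((10_000, 128)).astype(np.float32)
idx, sims = brute_force_top_k(data[42], data, k=3)
print(idx[0])  # the query vector is its own nearest neighbor -> 42
```

For a one-off query this does strictly less work than building an ANN index, which itself has to make many such passes (or their equivalent) over the dataset during construction.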
On the bright side, smart folks have already thought pretty hard about this. In my work, I ended up picking usearch for large-scale vector storage and ANN search. It's plenty fast and is happy working with vectors on disk - solutions which are /purely/ concerned with latency often don't include support for vectors on disk, which forces you into using a hell of a lot of RAM.
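I can't speak for usearch's internals, but the "vectors on disk" idea can be sketched with a plain numpy memmap: the OS pages rows in as the scan touches them, so resident RAM stays small even for files far larger than memory (file name, sizes, and the dot-product scoring here are all illustrative, not usearch's actual mechanism):

```python
import os
import tempfile
import numpy as np

dim, n = 64, 1_000
path = os.path.join(tempfile.mkdtemp(), "vectors.f32")  # hypothetical file

# Write the vectors to disk once...
rng = np.random.default_rng(1)
rng.standard_normal((n, dim)).astype(np.float32).tofile(path)

# ...then memory-map them: nothing is loaded until rows are touched.
vectors = np.memmap(path, dtype=np.float32, mode="r", shape=(n, dim))

query = np.asarray(vectors[7])
scores = vectors @ query        # dot-product scan; the OS handles paging
best = int(np.argmax(scores))
print(best)                     # the query row scores highest against itself
```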
On the other hand, we could probably convince Cory Doctorow to write a piece about how fentanyl is really about the enshittification of opiates.