
On the one hand, a search engine is not heroin... It's a pretty broken analogy.

On the other hand, we could probably convince Cory Doctorow to write a piece about how fentanyl is really about the enshittification of opiates.


However, D_A is moving, while D_B can be stationary.

How is a stationary defense drone going to defend from an incoming attacking drone?

Harris was the vice president, and was therefore the closest thing to a small-d democratic choice amongst the available options. Otherwise... Why Harris and not Newsom?

The better choice would have been Biden stepping out earlier and having a real primary, of course.


That’s why Biden should have picked a vice presidential candidate who could run a good campaign in 2024. It’s not like Biden’s age was an unknown factor. Biden himself floated the idea of only running for one term back in 2020.

Agreed. If the incumbent doesn't run (term limited or retires), the next nominee is almost always the VP if they want the job. Just look at the history:

  Roosevelt -> Truman
  (not Truman -> Barkley: the exception)
  Eisenhower -> Nixon
  Johnson -> Humphrey
  Nixon -> Ford
  Reagan -> Bush
  Clinton -> Gore
  (not Bush -> Cheney, who retired)
  (not Obama -> Biden, who temporarily retired)
  Biden -> Harris
I can't fathom how a party can pick a VP who isn't an excellent future candidate. JD Vance?

> JD Vance?

JD wasn’t picked for having the best chances of winning in 2028. He was picked to cement MAGA within the GOP after Trump dies.


AI doesn't just hide your voice -- it improves it!

To hazard a guess: the better your coding model is, the less you have to exercise your fundamentals, and so you suffer arrested development.

I was thinking of our natural reluctance to adopt newer and better tools because of our comfort and expertise with old ones.

I know I should have experimented with LLMs sooner, but leaned into my instinctive "VIM has gotten me this far" attitude.


Any amount of friction reduces the amount of slop. What proportion of clankers are going to realize that they need to warm up the accounts two weeks in advance? Answer: a proportion that you're never going to see with that barrier in place.

With a few layers of defense, you'll weed out almost all of the bad actors. Without strong monetary incentives for spamming, you also avoid most persistent actors.


With enough layers you will also weed out almost all of the good actors. Normal people are busy and don't have the time or patience to jump through too many hoops to promote their cool new research, or to respond in a thread where someone linked it.

Reddit has more friction, both to sign up and to post while your account is new or low-karma.

The main subreddits will basically shadowban you until your account is aged and has more than X karma.


This is why I don’t create a Reddit account or post there: there are so many rules that dissuade new accounts. I don’t even bother to try.

Reddit is fantastic, to me. It's worth the struggle to get past the initial bullshit.

There are a lot of flaws, though. Their appeal system is very broken, for instance.


Which in itself is annoying, IMO. It creates a whole separate set of problems. You need karma, so people post in karma-farming subs to get a few crumbs. Then you get auto-banned from a dozen of the top subreddits preemptively for farming.

Reddit hasn't been as overrun by bots yet, for the most part, although how long they can hold out I don't know.


maybe not overrun by spam, but the number of bots I see on popular subs is definitely not zero

You don’t have a choice.

We live with GenAI, and the human to bot ratio is now leaning in a different direction. The old norms are dead, because the old structures that held them up are gone.

This “more hoops means losing participation” idea in this thread keeps assuming that the community is unaffected by the macro trends.

It’s weirdly positing that HN posts and users are somehow immune to, or unaffected by, those trends.


Eh, even for a senior engineer, dropping into a new codebase is greatly helped by an orientation from someone who works on the code. What's where, common gotchas, which tests really matter, and so on. The agents file serves a similar role.

Yup, READMEs exist for a reason, even for meat bags

Except that most READMEs are seemingly written more for end-users than for developers; and even CONTRIBUTING files often mostly just document the social contribution process + guidelines rather than providing any guidance targeted toward those who would contribute. There’s a lot of “top-level architectural assumptions” detail in particular that is left on the floor, documented nowhere. Which “works” when you expect human devs to “stare really hard and ask questions” until they figure out what’s being done differently in this codebase; but doesn’t work at all when an LLM with zero permanent learning capability gets involved.

I strongly support this! For the last few years, I've been signing up as a Hugo voter, and read a bunch of great stuff that I otherwise would have missed. Sometimes the best books are a bit divisive, but still make the shortlist. (Saint of Bright Doors, for example...)

They don't actually solve the problem in 2 seconds - at that point, they are running on a sample of only 3,000 vectors! Then they get it down further, but still find it will take a loooooong time to get through all 3B:

"With these small improvements, we’ve already sped up inference to ~13 seconds for 3 million vectors, which means for 3 billion, it would take 1000x longer, or ~3216 minutes." ...which is about two days.


Depending on how 'one-off' the query is, sequential read is the right answer. The alternative is indexing the data for ANN, which will generally require doing the equivalent of many queries across the dataset.
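To illustrate why the one-off case favors the sequential read: a single exact scan is one O(n·d) pass over the data, with no index to build or maintain. Here's a minimal pure-Python sketch (the function names and random data are illustrative, not from the article):

```python
import heapq
import math
import random

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def brute_force_topk(query, vectors, k=5):
    """One sequential pass over the dataset: O(n * d), no index required.

    Returns a list of (score, index) pairs, best match first.
    """
    scored = ((cosine(query, v), i) for i, v in enumerate(vectors))
    return heapq.nlargest(k, scored)

random.seed(0)
dim = 8
data = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(1000)]
query = data[42]  # the query vector should be its own nearest neighbor

top = brute_force_topk(query, data, k=3)
print(top[0])  # best match: (score ~1.0, index 42)
```

Building an ANN index (HNSW, IVF, etc.) only pays off once you amortize its construction cost over many queries; for a single pass over the data, the exact scan above is both simpler and cheaper.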

On the bright side, smart folks have already thought pretty hard about this. In my work, I ended up picking usearch for large-scale vector storage and ANN search. It's plenty fast and is happy working with vectors on disk - solutions which are /purely/ concerned with latency often don't include support for vectors on disk, which forces you into using a hell of a lot of RAM.

https://github.com/unum-cloud/USearch

