Hacker News | quacker's comments

They don’t because of at-will employment. It’s just sort of the more moral, empathetic, right thing to do instead of leaving them with no income, no insurance, etc.

Good bread is everywhere in major cities in the US. There are bakery sections at grocery stores and there are many local bakeries.

> There are bakery sections at grocery stores

There are, and most of them don't have good bread. (Baguettes are about the only good bread that you can reliably expect to find in them. Sometimes they have San Francisco-style sourdough, which in my experience, tastes like someone dumped a shot of lemon vinegar into it. Just because a bread uses sourdough starter doesn't mean it needs to taste sour. I feel much the same way about hops and beer.)

Regularly visiting the bakery is, for reasons I've mentioned, a lot of friction for one purchase.

My closest one carries... weird specialty hipster breads (because it is more focused on tarts and pastries and sweets - bread is just an afterthought for it).

The one I'd go to, if my closest grocery weren't stocking them, is way out of my way. I would not be making that trip twice a week.


> Regularly visiting the bakery is, for reasons I've mentioned, a lot of friction for one purchase.

That is still not "really hard to come by" as per your original claim. It's very common (not just in large cities!) to have a local bakery where you can get good bread. Whether you choose to go or not, it is available to you.


I mean, let’s at least discuss this in good faith.

“Good” bread according to the majority and bread that is specifically up to your standards are probably two very different things.

My grocery store’s bakery sells many types of fresh bread: sourdough, white, rye, croissants, ciabatta, buns, rolls, bagels, and so on. Many grocery stores in my city have a bakery section with a selection of fresh bread like this. (Even Walmart I think, but I don’t shop there).

It’s not the best bread I’ve ever eaten, but it’s fresh, good, tasty bread. It’s not “mushy garbage” and it’s not “cake” like you described in your original comment. It’s not “weird specialty hipster” bread. It’s just simple, real, fresh bread.


My family pricing went up by 20%, from $59.88 USD to $71.88 per year.

I like 1Password a lot. I've used it for 10 years. It's never lost a single thing, and I don't recall any downtime that impacted me. It's easy to set up and 99% hassle free. Works on my various device types (Windows, Mac, iOS). It supports passkeys and 2FA codes. I like having shared and private vaults. I love the ability to share an auto-expiring, one-time-view link to a password. And the billing is a simple subscription fee.

I could do without some bloat. Watchtower feels like an enterprise need that is otherwise low-value and (by default) noisy for individuals/families. I obviously don't need "AI" forced into my password manager. I didn't love the version 7 to 8 transition that required a new app/extension to be installed. But all of that is really not so bad.

So yeah, I don't feel like I'm getting any additional value that justifies the price increase, but it's still more than worth it for me.


You mean they didn't increase prices in 10 years? A 2016 dollar is not the same thing as a 2026 dollar.

Oh true. Considering inflation, $60 in 2016 is about $80 in 2026 so really the price has gone down in real terms.

(Not actually sure about the price history of the family plan or when family was introduced. I was originally on the individual plan and it was $35 then, and switched to the family plan in 2022. I don’t think prices have changed though)


1Password 8 looks like it was released around 2022. 1Password 7, which seemed to get support until sometime in 2023, supported local vaults and syncing yourself (via Dropbox or whatever).

So it’s really only been about 3 years since people were forced to get accounts with subscriptions, and now it’s going up 33%.

I still have the zip archive of 1Password 7 in my applications folder that the v8 upgrade created. It hasn’t been very long.

From my vault, I can see I got 1Password 7 in 2018. Using 2016 as the price anchor seems generous when subscriptions weren’t required in 2016.


> A 2016 dollar is not the same thing as a 2026 dollar

Indeed, in part because companies keep raising prices.


It's a good idea. There are many studied benefits to (intermittent) fasting, for example: https://pmc.ncbi.nlm.nih.gov/articles/PMC11262566/


I don’t agree.

She has posted publicly about her condition.

He is 25 years old and trying to cope with a hard life event. Let’s not act like it doesn’t affect him. It affects everyone around her and the strong reaction from him is really a positive reflection on her, isn’t it?

His post is written and edited to garner sympathy and support. I don’t mind that for a naive but noble cause. And there is always a slim chance of success.


Supposedly there is no data shared with Google when using Gemini-powered Siri:

> Google’s model will reportedly run on Apple’s own servers, which in practice means that no user data will be shared with Google. Instead, they won’t leave Apple’s Private Cloud Compute structure.[1]

1: https://9to5mac.com/2025/11/05/google-gemini-1-billion-deal-...


Supposedly and reportedly that is true. For now.

We still have Google models running on hardware people paid thousands of dollars for under the impression that it wasn't a Google device.

Imagine the gigantic temptation of the wads of cash Google would pay Apple to allow Gemini to index and produce analytics about the data on your machine.

Now Google has a foot in the door.


This needs more detailed data that normalizes for the amount of food (price per calorie or price per weight or something like that).

Yes, a bowl at Chipotle in the US might be 2x the price (more, probably) of a Japanese bowl, but it matters whether I am also getting 2x the calories.

And there are foods in the US that are technically as cost effective, although maybe not as nutritious, like the pizza they mention, which can be around $1-$3 per slice. (Not my first choice for a lunch, but I could pick up a large 3-topping Domino's pizza for $10 and make 3-4 lunches out of it, for example.)


> In Japan, workers rely on healthy lunch bowls for under $4

The title doesn't capture that, but the issue is not that the US can't produce $4 lunches. It's that it can't enable cheap(er) healthy lunches.


I'm not sure what your point is. Is it about the lunches being specifically healthy?

A rice bowl at Chipotle, for example, is not unhealthy (rice, beans, meat, vegetables). Plenty of restaurant food in the US is perfectly healthy (or, you can look at nutrition facts to know if it is). And if I can take a single US portion size and split it into two lunches that are Japanese-sized portions, then maybe we're getting the same amount food per dollar.

And on the "healthy" point: The article doesn't discuss nutrition facts at all or refer to any specific meals or dishes.

They link to an article concerning the price of Japanese bowls, that mentions "a regular-sized bowl of rice with beef from Japanese fast food chain Yoshinoya, which costs around 468 yen (S$4.25)." I don't know Japanese so it's hard for me to find nutrition information about that particular dish, but I suspect that a beef bowl is high in saturated fat, cholesterol, and sodium (because most stir-fried beef is higher in these things). Is that healthy? Japan as a country has higher sodium intake than the US. Is that healthy? And so on. I suspect a big factor of the "health" of these lunches is that portion sizes are just smaller than in the US (but I have no data).


I think statistics about how many people are overweight or obese in both countries already paint a picture that Japanese food is probably healthier. And optimizing for how many calories you can get for $1 is probably also not the best metric to aim for.


Sort of for sake of argument: National obesity statistics don’t necessarily imply anything about the healthiness of the food, nor specifically about the healthiness of $4 lunches that the article discusses. If the Japanese eat smaller portions and are less sedentary, they could still be less obese regardless of differences in the nutritional content of these $4 lunches. (And I think they ARE less sedentary and DO eat smaller portions.)

I’m not advocating for anything (certainly not optimizing for calories per dollar).

My point is just that the article has no data. It says a Japanese lunch is cheap and a US lunch is expensive and doesn’t consider what you actually get for the money. It assumes the US lunch is a worse deal, but I suspect it’s really not if you adjust the price for the amount of food.


Using git history as documentation is hacky. A majority of feature branch commit messages aren't useful ("fix test case X", "fix typo", etc.), especially when you are accepting external contributions. If I wanted to use git history as a form of documentation (I don't; I want real documentation pages), I'd want the history curated into meaningful commits with descriptive commit messages, and squash merging is a great way to achieve that. Git bisect is not the only thing I do with git history, after all.

And if I'm using GitHub/GitLab, I have pull requests that I can look back on, which basically retain everything I want from a feature branch and more (like peer review discussion, links to passing CI tests, etc.). With the GitHub squash merge approach, every commit on the main branch refers back to a pull request, which makes this super nice.
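To make the squash-merge flow concrete, here is a minimal sketch in a throwaway local repo (the file names, branch name, and commit messages are all invented for the demo):

```shell
# Squash-merge a messy feature branch into one curated commit on main.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q -b main
git config user.email dev@example.com
git config user.name Dev

echo base > file.txt
git add file.txt
git commit -qm "Initial commit"

# A feature branch with typical low-value messages
git checkout -qb feature
echo one >> file.txt; git commit -qam "wip"
echo two >> file.txt; git commit -qam "fix typo"

git checkout -q main
git merge --squash feature >/dev/null   # stages the combined diff; no commit yet
git commit -qm "Add feature X: one curated, descriptive message"

git log --oneline   # main now has just two commits: the squash and the initial one
```

The "wip" and "fix typo" commits never land on main; only the curated squash commit does.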


As far as I'm concerned, the git history of a project is the root source of truth of why a change was made, at that point in time. External documentation is mostly broad strokes, API references, or out of date. Code comments need to be git blamed anyway to figure out when they were added, and probably don't exist for every little change.

Pull requests associated with a given commit give the broad description of "what feature was being implemented or bug was being fixed" for a given change, but a commit message tells me what, specifically, during that work, triggered this particular change.

I want to know, for example, that the reason this url gets a random value appended to it is that while implementing a new page to the site it was found that the caching service would serve out-of-date versions of some iframe. It never made it out of dev testing, so it never became a full-blown bug, and it wasn't the purpose of the feature branch, so it wasn't discussed in the PR. But the commit message of "Add some cache-busting to iframe" (even something that brief) can go wonders to explaining why some oddity exists.
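That kind of line-level archaeology is what `git blame` and the pickaxe (`git log -S`) are for. A hedged sketch in a throwaway repo (the file, contents, and messages below are invented to mirror the iframe example):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q -b main
git config user.email dev@example.com
git config user.name Dev

printf 'src = "/embed"\n' > iframe.txt
git add iframe.txt
git commit -qm "Add iframe"

printf 'src = "/embed?v=" + rand()\n' > iframe.txt
git commit -qam "Add some cache-busting to iframe"

# Which commit introduced the random value, and what did its message say?
git log --oneline -S 'rand()' -- iframe.txt   # pickaxe: commits whose diff adds/removes that string
git blame -L 1,1 iframe.txt                   # the last commit to touch line 1
```

The pickaxe search surfaces exactly the "Add some cache-busting to iframe" commit, even years later.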


Agree to disagree I guess, but IME, git history is good for low-level detail, not for high-level information. Git history is a poor source for understanding architecture, code organization, and other aspects of the codebase. More often, git commit messages tell me what changed - not why the change was made, or who it impacted, etc.

Reading through git history should be my last resort to figure something out about the codebase. Important knowledge should be written somewhere current (comments, dev docs, etc). If there is a random value being appended to a url, at least a code comment explaining why so I don’t even have to git blame it. Yes, these sources of knowledge take some effort to maintain and sure, if I have a close-knit team on a smaller codebase, then git history could suffice. But larger, long-lived codebases with 100s of contributors over time? There’s just no possible way git history is good enough. I can’t ask new team members to read through thousands of commits to onboard and become proficient in the codebase (and certainly not 5x-10x that number of commits, if we are not squashing/rebasing feature branches into main. Although, maybe now an LLM can explain everything). So I really need good internal/dev documentation anyway, and I want useful git history but don’t care so much about preserving every tiny typo or formatting or other commit from every past feature branch.

Also iirc, with GitHub, when I squash merge via the UI, I get a single squashed commit on main and I can rewrite the commit message with all the detail I like. The PR forever retains the commit history of the feature branch from before the squash, so I still have that feature branch history when I need it later (I rarely do), and I see no reason to clutter up history on main with the yucky feature branch history. And if I tend toward smaller PRs, which is so much nicer for dev velocity anyway, even squashed commits can be granular enough for things like bisect, blame, and so on.


Right, but they are referring to configuration on a GitHub repository that can make squash merge automatic for all pull request merges.

e.g. When clicking the big green "Merge pull request" button, it will automatically squash and merge the PR branch in.

So then I don't need to remind or wait for contributors to do a squash merge before merging in their changes. (Or worse, forget to squash merge and then I need to fix up main).


> Rebasing replays your commits on top of the current main branch, as if you’d just created your branch today. The result is a clean, linear history that’s easier to review and bisect when tracking down bugs.

The article discusses why contributors should rebase their feature branches (pull request).

The reason they give is for clean git history on main.

The more important reason is to ensure the PR branch actually works when merged into current main. If I add my change onto main, does it then build, pass all tests, etc.? What if my PR branch is old, and new commits have been added to main that I don't have in my branch? Then I can merge and break main. That's why you need to update your PR branch to include the newer commits from main (and the "update" could be a rebase, a merge from main, or possibly something else).
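A minimal throwaway-repo sketch of that update step, using the rebase option (branch and file names invented for the demo):

```shell
# main moves ahead while a PR branch is open; a rebase replays the
# branch's commit on top of the new main tip, giving linear history.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q -b main
git config user.email dev@example.com
git config user.name Dev

echo a > a.txt; git add a.txt; git commit -qm "Initial commit"

git checkout -qb pr-branch
echo f > f.txt; git add f.txt; git commit -qm "Add feature file"

# Meanwhile, main gains a commit the PR branch doesn't have
git checkout -q main
echo b > b.txt; git add b.txt; git commit -qm "Newer commit on main"

git checkout -q pr-branch
git rebase -q main   # or: git merge main (updates without rewriting history)

git log --oneline    # linear: the PR commit now sits on top of main's tip
```

After the rebase, testing the PR branch is equivalent to testing what main would look like post-merge, which is the point.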

The downsides of requiring contributors to rebase their PR branch are that (1) people are confused by rebase and (2) if your repository has many contributors and frequent merges into main, contributors will need to rebase their PR branch frequently, and after each rebase the PR checks need to re-run, which can be time consuming.

My preference with GitHub is to squash merge into main[1] to keep clean git history on main, and to use a merge queue[2], which effectively creates a temporary branch of main+PR, runs your CI checks, and merges the PR into main only if the checks pass on that temporary branch. This approach keeps super clean history on main, where every commit references a specific PR number, and more importantly minimizes friction for contributors by reducing frequent PR rebases on large/busy repos. And it ensures main is never broken (as far as your CI checks can catch issues). There's also basically no downside for very small repos, either.

1. https://docs.github.com/en/repositories/configuring-branches...

2. https://docs.github.com/en/repositories/configuring-branches...

