Sometimes you want an artistic vase that captures some essential element of beauty, culture, or emotion.
Sometimes you want a utilitarian teapot to reliably pour a cup of tea.
The materials and rough process for each can be very similar. One takes a master craftsman and a lot of time to make and costs a lot of money. The other can be made on a production line and the cost is tiny.
Both are desirable, for different people, for different purposes.
With software, it's similar. A true master knows when to get it done quick and dirty and when to take the time to ponder and think.
> Sometimes you want a utilitarian teapot to reliably pour a cup of tea.
If you'll pardon the analogy, watch how the Japanese make a utilitarian teapot that reliably pours a cup of tea.
It's more complicated and skill-intensive than it looks.
In both realms, making an artistic vase can be simpler than making a simple utilitarian tool.
AI is good at making (arguably poor quality) artistic vases via its stochastic output, not highly refined, reliable tools. Tolerances on the latter are tighter.
There is a whole range of variants in between those two "artistic vs utilitarian" points. Additionally, there is a ton of variance around "artistic" vs "utilitarian".
Artisans in Japan might go to incredible lengths to create utilitarian teapots. Artisans who graduated last week from a 4-week pottery workshop will produce a different kind of quality, albeit artisanal. $5.00 teapots from an East Asian mass-production factory will be very different from high-quality, mass-produced, upmarket teapots at a higher price. I have things in my house that fall into each of those categories (not all teapots, but different kinds of wares).
Sometimes commercial manufacturing produces worse tolerances than hand-crafting. Sometimes, commercial manufacturing is the only way to get humanly unachievable tolerances.
You can't simplify it into "always" and "never" absolutes. Artisan is not always nicer than commercial. Commercial is not always cheaper than artisan. _____ is not always _____ than ____.
If we bring it back to AI, I've seen it produce crap, and I've also seen it produce code that honestly impressed me (my opinion is based on 24 years of coding and engineering management experience). I am reluctant to make a call where it falls on that axis that we've sketched out in this message thread.
I've seen an interesting behavior in India. If I ask someone on the street for directions, they will always give me an answer, even if they don't know. If they don't know, they'll make something up.
This was strange. I asked a lot of Indian people about it and they said that it has to do with "saving face". Saying "I don't know" is a disgraceful thing. So if someone does not know the answer, they make something up instead.
Have you seen this?
This behavior appears in software projects as well. It's difficult to work like this.
No, but I have noticed that somehow it's hard for them to say "no". This is impolite apparently. So you ask: "Can you do this before friday" and they say yes and then don't do it at all. Which of course is a lot less polite and causes a lot of friction.
However this was a thing 10-15 years ago. Lately I've not seen that.
> Which of course is a lot less polite and causes a lot of friction.
Most cultures have this, but it goes mostly unnoticed from the inside because one can read between the lines. "How are you?" can be asked just to be polite, and can cause friction when answered truthfully (rather than just politely, as the cultural dance requires). An Eastern European may not appreciate the insincerity of such a question.
I use "about the same", thanks to a friend. I love the reactions (from Americans, where everyone is expected to say "Great" or "Good" or something similarly positive).
Is that just a reflex response though? I would expect people to be more deliberate in their interactions with medical professionals, but I can easily imagine hearing “How are you?” and my brain goes on autopilot.
Yeah, this is something I had to learn over my teenage/early-20s years. "How are you?" is often not a question but just a generic greeting like "Hello" or "Nice to meet you". Sometimes it is though, but that's just one of many examples of unwritten rules about how to tell whether someone literally means what they're saying or whether there's a better way to interpret it.
Having only lived in the US, I don't have nearly enough firsthand experience with other cultures for me to be the one to comment on them, but I suspect that every culture has some things like this where the actual intent of the communication isn't direct. I suspect that if people in tech were asked to identify which cultures they considered to be the most direct in their communication, American culture probably wouldn't be ranked first. Generally the stereotypes of other cultures that are perceived as more direct get described in more pejorative terms like "blunt" though.
> that’s an example of a fixed answer to a fixed question.
That's my whole point! The expected answer seems pretty obvious to you, given the context, doesn't it? Why then are you surprised that a different culture has an equally obvious (to them) fixed answer ("Yes") to any question asked by someone with power/authority to their lesser? Both depend on mutual learned cultural awareness, and can fail spectacularly in cross-cultural contexts.
Edit: my regional favorite is "We should meet for lunch some time" which just means "I'm heading out now", but you have to decode the meaning from the nature of the relationship, passive voice usage, and the lack of temporal specificity.
similarly, in the west, when your boss takes you to HR for an honest and open discussion, it's not really an honest and open discussion. normies know this instinctively. I didn't.
A fairly common conversation starter for eastern europeans is "how are you doing?", "it sucks", "yeah it does, doesn't it?". The American style of being all flowers and butterflies can indeed be perceived as lying.
To be fair we Americans also poke fun at this. Here in the South I usually say, “Can’t complain,” and most people will finish the adage, “and it wouldn’t do any good if you did.”
It is fine if it is not lying, but so often you ask how someone is, get the flowers-and-butterflies response, and then after sitting with them 10 more minutes they start explaining how miserable they are. As a Dutchman, I do tend to ask why they said how great and excellent they were just minutes ago. And no, it is not just something you do out of politeness: if you gave a canned response to one thing, how do I know you don't have canned responses to many more things, which are in fact lies at this point in time? I don't want to talk with Zendesk, I want to chat with someone I just met in the pub.
It isn’t lying, it is what we consider an appropriate level of sharing. We don’t tend to want to put our burdens on people who may not be interested in hearing it.
After talking to many folks from US I appreciate that. It's like going through the original `SYN ; SYN ACK ; ACK` flow. You are just establishing the conversation, but then the content can start flowing after if there's interest.
My experience is the same; to put it charitably, a lot of people from that culture are often eager to please. I think about this a lot when I hear about billionaires like Elon Musk wanting more immigration from India specifically. I think this cultural trait often serves them well in western corporate contexts, despite the frustration it causes their coworkers.
> I've seen an interesting behavior in India. If I ask someone on the street for directions, they will always give me an answer, even if they don't know. If they don't know, they'll make something up.
Isn’t this the precise failure pattern that everybody shits on LLMs for?
Only on the surface. The difference is the LLM doesn't know that it doesn't know. An LLM provides the best solution it has regardless of whether that solution is in any way fit for purpose.
> This was strange. I asked a lot of Indian people about it and they said that it has to do with "saving face". Saying "I don't know" is a disgraceful thing.
I've recently learned that this particular type of "saving face" has a name: "izzat". Look that up if you want to know more.
A lot of the stuff written on "izzat" is questionable or wrong, but it is true that India has a collective concept of saving face. This can be an adjustment even if you're used to the East Asian concept of saving face.
I'm not sure how to write that better, but the way you worded that made me suspect it was NSFW, and I hesitated but eventually decided I'd risk it. At least everything I found was work-safe, and I learned a lot. I encourage everyone else who hasn't heard the word to look it up.
I’ve seen this with some of my Indian colleagues, though definitely not all. In fact, most are more than eager to disagree with me :D (even though I’m their superior)
I moderate an airline subreddit, and it's interesting that many of the lazy or entitled-sounding questions (e.g. "can I get compensation for this?") come from people flying to/from Indian cities.
Honestly that's just the massive population talking. There really isn't a "Hindi web" for India unlike for the Chinese, so we all come to roost in the WWW. Hence you'll get bad questions like these but you'll also get YouTube videos on obscure engineering and science topics, which I think is a fair deal.
The Chinese web is along similar lines, although there is a lot more country-bashing, especially against Indians and Americans. But nevertheless much the same.
At least none of these comes anywhere near the brainrot that is the Arabic web.
1 billion internet users, some of whom may have an inkling of how to speak in English.
My father's former property groundskeeper, a daily wage labourer, could speak poor English but he could string a few words together and understand the basic gist of a Hollywood movie, even without much of an English education. Imagine 100s of millions of those people and there's your answer.
That's not been my experience living in the UK. When I've asked for directions, people either give correct ones (as far as I remember) or say they don't know. When people ask me and I don't know, I say I don't know.
> This behavior appears in software projects as well. It's difficult to work like this.
I have seen that across just about every culture in the software engineering world.
And not just in the 'business' itself. I still remember the argument I had with an infosec guy who absolutely insisted that every Jeep had AWD or 4WD from the factory. Even naming ones that didn't did nothing until I more or less passive-aggressively sent him Wikipedia links to a few vehicles.
At which point he proceeded to claim "No I said it was always a standard option" ... To be clear this whole argument started because someone asked why I swore by Subarus and mentioned 'Every US Model but the BRZ has AWD standard' but Heep owners gotta have false pride, idk.
People do weird shit with imposter syndrome sometimes, IDK.
According to Hal Roach, the Irish do this too, because they don't want to disappoint you. I haven't asked for a lot of directions in Ireland, but I can imagine this is true, or that they will just keep you chatting and see if you forget about your question.
This reminds me of the time I got lost visiting LA about 20 years ago. I asked some guy on the street for help. He gave me directions while smirking at me. Turns out he pointed me in the opposite direction from where I was going, and most likely he was just being a dick.
First of all, the limiting factor is my attention. Even when I really tried to do this in parallel I could not meaningfully run more than three sessions.
I realized that some of the attention is devoted to thinking about how multiple agents could step on each other's toes and end up creating merge conflicts when it came time to merge.
I have multiple projects and if I feel I have capacity to do parallel coding, I do it in unrelated code bases.
Sometimes I do super async work - as in one interaction per hour or so. I do it while walking, waiting in line, or between meetings. I found web based Codex to be decent for this. I guess it’s in parallel with my Claude Code sessions.
Healthy young people are less likely to buy insurance than sick older people. But if only sick older people buy insurance the payouts per insured are going to be higher. That in turn causes high premiums. Insurance works if everyone buys in, pays while they are young and relatively healthy, and gets paid healthcare when they are older and sicker.
If you “game” it, it breaks the whole system.
Now some of you might be thinking “why should a young and healthy guy like myself subsidize the old sick people?” The answer is that you will also get old.
What you are describing isn't really private insurance though, it's a privately run socialized healthcare system. There's nothing wrong with that, it simply isn't insurance.
You're right. However, all insurance needs to get more in premiums than it pays out in claims in order to be viable. The details will differ about whether there is some kind of bias for certain people to pay more and claim less. With socialized healthcare, the coverage is just much broader and there is less room for "gaming" the system.
I love the name. I’ve been doing this for a while. My name for it is super boring “todo archive” but I am renaming it next year.
Keeping an archive of things I've done is great for my mental health. Occasionally, I even search through it and the associated notes and fish out something useful.
I wonder what the difference is between the two groups as far as the rate of finding employment goes. It might be that the act of looking for a job is stressful.
If the goal is to get people back to work, it might not make sense to optimize just for better mental health.
And note that actually getting work removed the controlled money. That's quite a disincentive to actually finding work. Welfare systems very often end up being a trap because of this--people can't afford to succeed because they'll hit some tripwire that makes them worse off.
I'd like to see welfare systems and tax codes modified with a rule that no situation can cause more than a 50% marginal "tax". (Which would mean many cutoffs in the tax code would effectively be replaced with phaseouts even if Congress didn't specifically fix them.)
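As a minimal sketch of that 50%-cap rule (the $500 maximum benefit and the clawback rate here are made-up illustration numbers, not any real program's parameters):

```python
def benefit(max_benefit, income, clawback_rate=0.50):
    """Phase a benefit out gradually: each extra dollar earned removes
    at most `clawback_rate` dollars of benefit, so this program alone
    never imposes more than a 50% marginal 'tax'."""
    return max(0.0, max_benefit - clawback_rate * income)

# Earning $100 more only costs $50 of benefit -- there is no cliff:
print(benefit(500, 400))  # 300.0
print(benefit(500, 500))  # 250.0
```

The phaseout replaces the cutoff: the benefit reaches zero smoothly (here at $1000 of income) instead of vanishing all at once at a line.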
> Welfare systems very often end up being a trap because of this--people can't afford to succeed because they'll hit some tripwire that makes them worse off.
This is very much a problem in the US. I've lived it myself before I was making 6+ figures, and I've known many people that lived through it as well.
I had a higher quality of life working very part time minimum wage + benefits (SNAP, free healthcare, subsidized housing) than I did making 50k/year.
For most on welfare like that, you actually end up with a much worse quality of life the moment you make a little more money or find a better job and lose your benefits. There's far too big of a gap between "needs assistance" and "makes enough money to have the same or better quality of life as being on benefits," so for most, you just purposely work less or take lower-paying jobs in order to keep collecting benefits, because to do otherwise means you are worse off.
For someone who has subsidized housing, free healthcare, and SNAP, why would you purposefully lose all of that, but still remain poor, just because now you work 40 hours/week instead of 20? Unless you can make a huge jump (say, go from minimum wage up to $75k+/year immediately), don't bother trying to get off welfare; it won't do you any good.
The tax code (at least in the US, YMMV in other countries) is already progressive. Making more will never have you taking home less.
However, most welfare systems have hard cutoffs. If you get $500 in SNAP a month and make $500 a month, you have $1000 to last a month. And if the cutoff is $501, making that one extra dollar is going to cost you $499.
What would be more difficult, also gameable, but better all around is to have benefits adjusted to get people to a baseline.
Say the poverty level is $1000 a month. You get $1000 - X, where X is how much you made in that month.
> However, most welfare systems have hard cutoffs.
Most welfare systems have phased benefit reductions (there is a point where the benefit hits zero, which can be viewed as a hard cutoff, but it doesn't go "full benefit up to the line and then zero at the line" in most cases, though there are exceptions).
> If you get $500 in SNAP a month and make $500 a month, you have $1000 to last a month. And if the cutoff is $501, making that one extra dollar is going to cost you $499.
If the SNAP cutoff applicable to your situation was $501, then your actual benefit at $500 would be $24 (the minimum SNAP benefit), not $500. Because SNAP does a $0.30 per dollar of income clawback until the minimum benefit is reached, and then stays at the minimum benefit until the eligibility limit income is reached.
There is still a cliff, but it's a much smaller cliff (for SNAP alone) than you are painting.
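A rough model of the math described above. The $0.30-per-dollar clawback and the $24 minimum benefit are the figures from this comment; the $501 cutoff is the parent's hypothetical, and the $174 maximum benefit is an invented number chosen so the minimum is reached by $500 of income (real SNAP computes benefits from net income after deductions, which this sketch ignores):

```python
def snap_benefit(income, max_benefit, minimum=24, cutoff=501, clawback=0.30):
    """Phased reduction: $0.30 of benefit lost per dollar of income,
    floored at the minimum benefit, then zero once income passes the
    eligibility cutoff."""
    if income > cutoff:
        return 0
    return max(minimum, max_benefit - clawback * income)

# By $500 of income the benefit has already shrunk to the $24 minimum,
# so crossing the cutoff costs $24, not $499:
print(snap_benefit(500, 174))  # 24
print(snap_benefit(502, 174))  # 0
```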
I'm talking about the US. You're looking at the simple version: just income. But the tax code is more than income: there are many benefits that go away at a specific AGI, and if you use such a benefit you can have an effective tax rate of over 100% in the realm just above the fence.
And your system doesn't work. Working is almost never truly free; you have to spend some money to make money. And you don't have that time available to do other things that stretch the money you have.
> Making more will never have you taking home less.
There are corner cases where making more can leave you with less outside of welfare. Tripping into the next IRMAA bucket is one simple to understand one.
Medicare is a form of welfare, just branded differently. It's a means-tested benefit funded in a pay-as-you-go manner via income tax just like any other. The means are just different amongst various programs.
Are there any actual cases where earning extra income via a regular job leaves you worse off than not taking that extra dollar of pay? I'm guessing a few very rare corner cases exist, but I can't immediately think of any. I imagine they would be somewhere in the neighborhood of EITC- or AMT-type things.
Wouldn't it be easier to just give everybody $1000/mo but then use the yearly tax bill to get high earners to pay it back, rather than having graduated payouts?
That ensures that everybody gets prompt payouts and feels that there’s a safety net, but no complex means testing and bureaucracy. And the same net cost, although the timing of the payouts will shift a bit
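A sketch of that pay-everyone-then-recover-at-tax-time idea; the 25% recovery rate and $50k threshold are placeholder assumptions, not a proposal:

```python
def net_transfer(annual_income, monthly_payment=1000,
                 recovery_rate=0.25, threshold=50_000):
    """Everyone receives the monthly payment up front; higher earners
    repay it through the yearly tax bill at `recovery_rate` on income
    above `threshold`, capped at what they received."""
    paid_out = 12 * monthly_payment
    repaid = min(paid_out, recovery_rate * max(0, annual_income - threshold))
    return paid_out - repaid

print(net_transfer(0))        # 12000  (keeps the full payout)
print(net_transfer(60_000))   # 9500.0 (repays $2,500 at tax time)
print(net_transfer(200_000))  # 0      (high earner repays it all)
```

The payouts themselves need no means testing; all the targeting happens once a year through the existing tax machinery.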
But SNAP doesn't have a hard cutoff. There are welfare programs that do, but SNAP doesn't.
School lunch programs have two phases, free, and reduced. Medicaid varies a bit by state, but transitions to Obamacare subsidies. Hitting the cutoff for medicaid can really hurt, though, if your employer doesn't provide healthcare benefits.
>Welfare systems very often end up being a trap because of this--people can't afford to succeed because they'll hit some tripwire that makes them worse off.
Or maybe they consider getting money you can live on without working to be a success. I know I would.
You're probably not aware that €560 is subsistence money in Finland. Eat noodles every day, sell your car, keep the indoor temperature at 18°C to save electricity, and then maybe you have enough to pay rent. The idea that people in that situation need to be kicked even harder to "get off their lazy asses" is cruel.
No one needs to be kicked to do anything. Welfare payments are needed for various reasons. Some people are unable to work for various reasons and need welfare to live. Some people find themselves in temporary situations where the money helps them during hard job transitions or difficult periods in life. It’s important to give people incentives to help them achieve what will make their lives truly better. Sometimes, free money removes those incentives and temporary situations become permanent. I hope you don’t perceive all incentives to encourage constructive behavior as “kicking people.”
It's clearer if you deconstruct the conversation about Jira and then think about the hair-washing and shampoo comment. It's a stretch, but when you see it, it should make sense.
I ask my team to clarify requirements better. They say that they already have Jira. It's as if they were implying that the presence of a tool (Jira) should be enough to provide clear stories. But it's not about the tool. It's about them not using the tool properly but pointing at the tool (or process) as an excuse.
I ask my son to wash his hair. He says there is shampoo in the shower. It's as if the presence of the shampoo implies that his hair should be clean. It's not about the lack of tooling, but about the fact that he did not wash his hair with the tool that he had available.
People often blame tooling or methodology, but most often it's that they don't know how to use the tooling or methodology well. They will say things like "if only we used X, our problems would go away." Most likely, they won't.
I posted a lazy comment earlier because I did not have time to type it out. Apologies.
I see, thanks. All analogies are flawed and that's a fact of life but your clarification made it crystal clear.
RE: your work, I would probably fight hard to reduce all the bureaucracy-inviting tools (like Jira). That removes the excuse "we have tools already, why don't we have clear stories?" -- though I am aware that for many people this fight would cost them their job.
He's saying that there's shampoo in the shower but he didn't use it (implied) -- however, the question wasn't about the presence of shampoo in the shower.
Aha, but that's not a rebuttal at all. The son is just stating a rather very loosely connected fact. If I was the father I'd immediately respond with "Yeah, and?".
Love the title. Reminds me of one of my favorite quotes: "The single biggest problem in communication is the illusion that it has taken place."
This is what user stories were supposed to accomplish in a more lightweight way.
The whole scrum DoR (definition of ready) status means that something is clear and ready for development.
Stories are written and are sent to the engineering team for clarification. This is where the comments are supposed to come in. There is a clear step for clarification of stories, before the story is ready for development. It gets marked as DoR when that clarification is done.
It does not matter if you use RFCs, user stories, or hallway conversations as your process of clarifying work. If it does not work, it does not work.
Any way you can get your teams to communicate more clearly is great.
> "The single biggest problem in communication is the illusion that it has taken place."
Love this! Corollary: when you have too many meetings, that’s easy to notice. When you don’t have enough meetings, that’s harder to notice.
I’m in the process of carefully adding meetings and process to our small team of 6 (we had a PM from a large company drop in a few years ago and haphazardly add a bunch of process, and it didn’t really help).
We’re fully remote and have a daily huddle and, on average, 1 hour of meetings a week. It turns out this isn’t enough. So far, each bit of communication we’ve added has resulted in better outcomes and higher morale because we feel more like a team.
NOTE: People pointed out that it's $800 billion to cover interest, not $8 billion, as I wrote below. My mistake. That adds 2 more zeroes to all figures, which makes it a lot more crazy. Original comment below...
$8 billion / US adult population of 270 million comes out to about $3000 per adult per year. That's only to cover the cost of interest, let alone other costs and profits.
That sounds crazy, but let's think about it...
- How much does an average American spend on a car and car-related expenses? If AI becomes as big as "cars", then this number is not as nuts.
- These firms will target the global market, not US only, so number of adults is 20x, and the average required spend per adult per year becomes $150.
- Let's say only about 1/3 of the world's adult population is poised to take advantage of paid tools enabled by AI. The total spend per targetable adult per year becomes closer to $500.
- The $8 billion in interest is on the total investment by all AI firms. Not all companies will succeed. Let's say the one that succeeds will spend 1/4 of that. So that's $2 billion per year, and roughly $125 per adult per year.
- Triple that number to factor in other costs and profits and that company needs to get $500 in sales per targetable adult per year.
People spend more than that on each of these: smoking, booze, cars, TV. If AI can penetrate as deep as the above things did, it's not as crazy of an investment as it looks. It's one hell of a bet though.
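Using the corrected $800 billion figure from the note, the per-adult steps above can be checked directly (the 270M US adults, 20x world multiplier, and 1/3 targetable share are the comment's own rough inputs):

```python
interest = 800e9                 # annual interest on ~$8T of AI CapEx
us_adults = 270e6
world_adults = 20 * us_adults    # rough 20x multiplier from the comment
targetable = world_adults / 3    # only ~1/3 of adults addressable

print(round(interest / us_adults))     # ~2963, i.e. roughly $3,000 per US adult
print(round(interest / world_adults))  # ~148, roughly $150 per adult worldwide
print(round(interest / targetable))    # ~444, roughly $500 per targetable adult
```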
Right. My goof. That adds two more zeroes across all the math. More crazy, but I think in the realm of "maybe, if we squint hard." But my eyes are hurting from squinting that hard, so I agree that it's just crazy.
You're saying $8 billion to cover interest, another commenter said 80, but the actual article says ""$8 trillion of CapEx means you need roughly $800 billion of profit just to pay for the interest". Eight HUNDRED billion. Where does the eight come from, from 90% of these companies failing to make a return? If a few AI companies survive and thrive (which tbh, sure, why not?) then we're still gonna fall face down into concrete.
I think it's in the realm of maybe in Silicon Valley. That's $5,000. Look at this statement:
> Let's say only about 1/3 of the world's adult population is poised to take advantage of paid tools enabled by AI
About 2/3 of the world's population is between 15 and 65 (roughly 'working age'), so that's 50% of the working world that is capable of using AI with those numbers. India's GDP per capita is 2,750 USD, and now the price tag is even higher than $5k.
I don't know how to say this well, so I'll just blurt it out: I feel like I'm being quite aggressive, but I don't blame you or expect you to defend your statements or anything, though of course I'll read what you've got to say.