Hacker News | stego-tech's comments

IT perspective here. Simon hits the nail on the head as to what I'm genuinely looking forward to:

> How do you clone the important parts of Okta, Jira, Slack and more? With coding agents!

This is what's going to gut-punch most SaaS companies repeatedly over the next decade, even if this whole build-out ultimately collapses in on itself (which I expect it to). The era of bespoke consultants for SaaS product suites to handle configuration and integrations, while not gone, is certainly under threat from LLMs that can ingest user requirements and produce functional code to do a similar thing at a fraction of the price.

What a lot of folks miss is that in enterprise-land, we only need the integration once. Once we have an integration, it basically exists with minimal if any changes until one side of the integration dies. Code fails a security audit? We can either spool up the agents again briefly to fix it, or just isolate it in a security domain like the glut of WinXP and Win7 boxen rotting out there on assembly lines and factory floors.

This is why SaaS stocks have been hammered this week. It's not that investors genuinely expect huge players to go bankrupt due to AI so much as they know the era of infinite growth is over. It's also why big AI companies are rushing IPOs even as data center builds stall: we're officially in a world where a locally-run model - not even an Agent, just a model in LM Studio on the Corporate Laptop - can produce sufficient code for a growing number of product integrations without any engineer having to look through yet another set of API documentation. As agentic orchestration trickles down to homelabs and private servers on smaller, leaner, and more efficient hardware, that capability is only going to increase, threatening profits of subscription models and large AI companies. Again, why bother ponying up for a recurring subscription after the work is completed?

For full-fledged software, there's genuine benefit to be had with human intervention and creativity; for the multitude of integrations and pipelines that were previously farmed out to pricey consultants, LLMs will more than suffice for all but the biggest or most complex situations.


“API Glue” is what I’ve called it since forever

Stuff comes in from one API and goes out to a different API.

With a semi-decent agent I can build what took me a week or two in hours just because it can iterate the solution faster than any human can type.

A new field in the API could’ve been a two-day ordeal of patching it through umpteen layers of enterprise frameworks. Now I can just tell Claude to add it, and it’ll wire it through to the database in minutes - and update the tests at the same time.
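
Sketching it out, the glue really is that small. Here's a minimal, hypothetical example of the pattern (made-up endpoints and field names, just to show the shape of the work an agent iterates on):

    import requests

    SOURCE_URL = "https://source.example.com/api/v1/tickets"  # hypothetical source API
    TARGET_URL = "https://target.example.com/api/v1/records"  # hypothetical target API

    def fetch_source(session: requests.Session) -> list[dict]:
        """Read records from the source API."""
        resp = session.get(SOURCE_URL, timeout=30)
        resp.raise_for_status()
        return resp.json()["items"]

    def transform(item: dict) -> dict:
        """Map source fields onto the target schema; a new field is one more line here."""
        return {
            "external_id": item["id"],
            "title": item["summary"],
            "status": item["state"],
            # "priority": item["priority"],  # the hypothetical new field - plus its test
        }

    def push_target(session: requests.Session, record: dict) -> None:
        """Write the transformed record to the target API."""
        resp = session.post(TARGET_URL, json=record, timeout=30)
        resp.raise_for_status()

    if __name__ == "__main__":
        with requests.Session() as session:
            for item in fetch_source(session):
                push_target(session, transform(item))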


And because these are all APIs, we can brute-force it with read-only operations and minimal review times. If the read works, the write almost always will, and then it's just a matter of reading and documenting the integration before testing it in dev or staging.

So much of enterprise IT nowadays is spent hammering or needling vendors for basic API documentation so we can write a one-off that hooks DB1 into ServiceNow while also pulling from NewRelic, just to do ITAM. Consultants would salivate over such a basic integration because it'd be their yearly salary over a three-month project.

Now we can do this ourselves with an LLM in a single sprint.
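
As a rough sketch of that read-only-first approach (entirely hypothetical endpoints and names): point a small script - or the agent - at the GET endpoints, dump samples of what comes back, and only design the write path once the shapes are documented.

    import json
    import requests

    # Hypothetical read-only endpoints for the kind of ITAM one-off described above.
    ENDPOINTS = {
        "assets_db": "https://db1.internal.example.com/api/assets",
        "servicenow_cmdb": "https://example.service-now.com/api/now/table/cmdb_ci",
        "newrelic_hosts": "https://api.example.com/v2/servers.json",
    }

    def probe(name: str, url: str, token: str | None = None) -> None:
        """GET an endpoint and print a truncated sample so its shape can be documented."""
        headers = {"Authorization": f"Bearer {token}"} if token else {}
        resp = requests.get(url, headers=headers, timeout=30)
        print(f"--- {name}: HTTP {resp.status_code}")
        if resp.ok:
            print(json.dumps(resp.json(), indent=2)[:1000])

    if __name__ == "__main__":
        for name, url in ENDPOINTS.items():
            probe(name, url)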

That's a Pandora's Box moment right there.


This is another facet of the fierce opposition to AI by a swath of the population: it’s quite literally destroying the last bit of enjoyment we could wring from existence in the form of hobbies funded through normal employment.

Think of the PC gamers, who first dealt with COVID supply shocks, followed by crypto making GPUs scarce and untenable, then GPU makers raising prices and narrowing inventory to only the highest-end SKUs, only to abandon them entirely for AI - which then also consumed their RAM and SSDs. A hobby that used to be enjoyed by folks on even a modest budget is now a theft risk given the insane resale prices of parts on the second-hand market due to scarcity.

And that extends to others as well. The swaths of folks who made freelance or commission artistry work through Patreons and conventions and the like are suddenly struggling as customers and companies spew out AI slop using their work without compensation. Tech workers, previously the wealthy patrons of artisans and communities, are now being laid off en masse for AI CapEx buildouts and share pumps as investors get cold feet about what these systems are actually doing to the economy at large (definite bad, questionable good, uncertain futures).

Late stage capitalism’s sole respite was consumerism, and we can’t even do that anymore thanks to AI gobbling up all the resources and capital. It’s no wonder people are pissed at AI boosters trying to say this is a miracle technology that’ll lift everyone up: it’s already kicking people down, and nobody actually wants to admit or address that, lest their investments be disrupted in order to protect humans.


I think this started a lot earlier, actually. A few generations back, many people played an instrument, or at least could sing. It didn't matter that none of them was a Mozart, because they didn't have to be. For making music or singing together in a family or a friend group, it was wholly sufficient to be just good, not necessarily great.

But when everyone has access to recordings of the world's best musicians at all times, why listen to uncle Harry's shoddy guitar play? Why sing Christmas songs together when you can put on the Sinatra Christmas jazz playlist on Spotify?


That’s definitely part of it as well, this sort of general distillation into a smaller and smaller pool of content or objects or goods that cost ever more money.

Like how most of the royalties Spotify pays out are for older catalogue stuff from “classic” artists, rather than new bands. Or how historical libraries of movies and films are constantly either up for grabs (for prestige) or being pushed off platforms due to their older/more costly royalty agreements.

With AI though, it’s the actual, tangible consumption of physical goods being threatened, which many companies involved in AI may argue is exactly the point: that everyone should be renters rather than consumers, and by making ownership expensive through cost and scarcity alike, you naturally drive more folks towards subscriptions for what used to be commodities (music, movies, games, compute, vehicles, creativity tools, TCGs, you name it).

It’s damn depressing.


> why listen to uncle Harry's shoddy guitar play?

Uncle Harry is not playing guitar: https://www.youtube.com/watch?v=VXzz8o1m5bM


THIS! The instrument-playing capacity of a social environment: today, I know of only one person who still plays piano regularly, at his club. When I was young, you got some basic instrument introduction in music classes at school - I don't know whether those still exist today.

Regarding singing - I do not know a single person who can "somehow" sing at least a little bit.

Society is losing these capacities.


"Comparison is the thief of joy" as they say. Some dude has the world's highest score in Pac-Man in the Guinness Book Of World Records. It doesn't mean that I can't play Pac-Man to beat my own personal high score and enjoy the process because the game is fun in it's own right.

That's certainly true in theory, but given the prevalence of status symbols, many people thrive on comparing themselves to others. I'd argue society was better off when the only people you could reasonably compare yourself to were the three neighbours down the street (of which only one would be into Pac-Man), not the world's ten thousand best players showing off only their best streaks on your Instagram feed all day.

Apologies for being glib but I never thought I’d see a sincere “think of the gamers” comment.

Your whole post is a bit vague and naive. If people enjoy real art more than AI art, then the market will decide that. If they don't, then we should not be making people enjoy what they don't.


The market might not be able to tell the difference. It takes effort to count fingers and toes in art. Part of the problem is also that so many companies are doing it now that it doesn't seem effective to call people or companies out for the use of AI slop.

A big part of the problem is also that AI art is usually not labelled as such. The market cannot make an informed decision.


If people can’t tell the difference, perhaps it doesn’t matter

Think of the PC gamers

The battlecry of the new revolution?


IT dinosaur here, who has run and engineered the entire spectrum over the course of my career.

Everything is a trade-off. Every tool has its purpose. There is no "right way" to build your infrastructure, only a right way for you.

In my subjective experience, the trade-offs are generally along these lines:

* Platform as a Service (Vercel, AWS Lambda, Azure Functions, basically anything where you give it your code and it "just works"): great for startups, orgs with minimal talent, and those with deep pockets for inevitable overruns. Maximum convenience means maximum cost. Excellent for weird customer one-offs you can bill for (and slap a 50% margin on top). Trade-off is that everything is abstracted away, making troubleshooting underlying infrastructure issues nigh impossible; also that people forget these things exist until the customer has long since stopped paying for them or a nasty bill arrives.

* Infrastructure as a Service (AWS, GCP, Azure, Vultr, etc; commonly called the "Public Cloud"): great for orgs with modest technical talent but limited budgets or infrastructure that's highly variable (scales up and down frequently). Also excellent for everything customer-facing, like load balancers, frontends, websites, you name it. If you can invoice someone else for it, putting it in here makes a lot of sense. Trade-off is that this isn't yours, it'll never be yours, you'll be renting it forever from someone else who charges you a pretty penny and can cut you off or raise prices anytime they like.

* Managed Service/Hosting Providers (e.g., ye olde Rackspace): you don't own the hardware, but you're also not paying the premium for infrastructure orchestrators. As close to bare metal as you can get without paying for actual servers. Excellent for short-term "testing" of PoCs before committing CapEx, or for modest infrastructure needs that aren't likely to change substantially enough to warrant a shift either on-prem or off to the cloud. You'll need more talent though, and you're ultimately still renting the illusion of sovereignty from someone else in perpetuity.

* Bare Metal, be it colocation or on-premises: you own it, you decide what to do with it, and nobody can stop you. The flip side is you have to bootstrap everything yourself, which can be a PITA depending on what you actually want - or what your stakeholders demand you offer. Running VMs? Easy-peasy. Bare metal K8s clusters? I mean, it can be done, but I'd personally rather chew glass than go without a managed control plane somewhere. CapEx is insane right now (thanks, AI!), and the break-even is still two to three years out before you're saving more than you'd have spent on comparable infrastructure elsewhere, even with savings plans. Talent needs are highly variable - a generalist or two can get you 80% of the way to basic AWS functionality with something like Nutanix or VCF (even with fancy stuff like DBaaS), but anything cutting edge is going to need more headcount than a comparable IaaS build. God help you if you opt for a Microsoft stack, as any on-prem savings are likely to evaporate at your next True-Up.

In my experience, companies have bought into the public cloud/IaaS because they thought it'd save them money versus the talent needed for on-prem; to be fair, back when every enterprise absolutely needed a network team and a DB team and a systems team and a datacenter team, this was technically correct. Nowadays, most organizational needs can be handled with a modest team of generalists or a highly competent generalist and one or two specialists for specific needs (e.g., a K8s engineer and a network engineer); modern software and operating systems make managing even huge orgs a comparative breeze, especially if you're running containers or appliances instead of bespoke VMs.

As more orgs like Comma or Basecamp look critically at their infrastructure needs versus their spend, or they seriously reflect on the limited sovereignty they have by outsourcing everything to US Tech companies, I expect workloads and infrastructure to become substantially more diversified than the current AWS/GCP/Azure trifecta.


This is not a new or novel idea. I proposed such a thing at the start of my career in tech, and repeatedly propose it when I feel I have ears willing to listen.

The problem - and I do mean the problem, the only problem - is the threat this poses to power dynamics in the organization.

Compliance people do not benefit from their outputs being readily searchable and indexed like this, because it means there’s less need for them. Executives and leaders do not benefit from this, because they’re increasingly hired specifically because of their knowledge of various compliance frameworks. The people whose power derives from this knowledge and expertise are overwhelmingly the people in charge of the company and its operations, and they benefit more from blocking it outright than implementing it.

Don’t get me wrong, I love this idea. I love transparency in organizations, because it makes it infinitely easier to identify and remediate problems beyond silo walls. It’s peak cooperation, and I am all for it.

I also do not see it happening at scale while competition is considered the default operating mode of society at large. That said, I would love to work for an organization placing importance on this degree of internal cooperation. I suspect I’d thrive there.


What prevents people from not doing what the policy says? Since neither "paper doc" policy nor "code policy" actually constrains humans from trying to exploit or work around the system, oversight and compliance still seem like messy human functions. Does this just become "more structured compliance documentation"? Which sounds nice, but not dramatically different.

And on the creation side, what prevents political fights over what goes into the "code policy" of exactly the same sort that lead to compromises or oddities in paper policies?


It constrains power and makes decisions auditable and holds those with power accountable to guidelines. Management doesn't like this paradigm for the very same reasons big tech platforms make conduct and moderation guidelines vague and nonspecific. It frees them up to remove, penalize, fire, and otherwise exert power for reasons they can't explicitly justify.

It's exactly the same paradigm the EU and countries around the world are avoiding - denying due process in things like freedom of press and expression, because they feel it allows them flexibility in suppressing and "managing" speech, people, and groups they deem problematic.

Having an explicit rule of law constrains the exercise of power. Those looking to wield power will never like that.
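
To make the "auditable" part concrete, here's a toy sketch (entirely hypothetical rule and fields) of a policy expressed as code rather than prose: the rule itself is explicit and visible, and every evaluation leaves a record that can be reviewed later.

    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class AccessRequest:
        requester: str
        resource: str
        manager_approved: bool
        training_current: bool

    def evaluate(req: AccessRequest) -> dict:
        """Apply the (hypothetical) access policy and emit an auditable decision record."""
        allowed = req.manager_approved and req.training_current
        decision = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "request": asdict(req),
            "allowed": allowed,
            "rule": "manager_approved AND training_current",  # the rule is spelled out, not implied
        }
        print(json.dumps(decision))  # in practice, append to an immutable audit log
        return decision

    if __name__ == "__main__":
        evaluate(AccessRequest("alice", "prod-db", manager_approved=True, training_current=False))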


> The problem - and I do mean the problem, the only problem - is the threat this poses to power dynamics in the organization.

And expertise, to be fair. Documentation as code is what we in the software industry call testing/type systems. The vast majority of developers cannot even write a good test for their code (if they are willing to even try at all), let alone keep their eyes from completely glazing over if you ask them to write, say, a Rocq proof. And that's people who live and die by code, not business people who are layers removed from the activity.


> I also do not see it happening at scale while competition is considered the default operating mode of society at large.

You don’t even need competition between people and orgs, just between solutions that work more-or-less equally but come with different second-order tradeoffs. Consider two approaches that solve a company’s problem equally but create different amounts of work for different people in the organization. Which solution to choose? Who gets to decide, based on what criteria? As soon as even a little scale creeps in this is inescapable.


> That said, I would love to work for an organization placing importance on this degree of internal cooperation. I suspect I’d thrive there.

I've been looking for such an org my entire career, but recently resigned myself[1] to the fact that it'll probably not happen unless I come into a situation where I can create it myself.

---

[1] <https://blog.webb.page/WM-081>


When/If you do want to create one yourself, there are a handful of tech cooperatives out there for inspiration: https://github.com/hng/tech-coops

> This is not a new or novel idea

Yep https://x.com/fduran/status/1134283398594387969


Any books about workgroup power dynamics? I'm fascinated (morbidly, in a way) by that.

Harvard Business Review's Office Politics. Or any intro Industrial Psychology textbook.

Keith Johnstone: Impro

Is it ISBN-13: 978-0878301171?

Or ASIN B01K2J06SY


The first one is the classic. Don't know about the second one.

Power dynamics have been extensively investigated by the "Johnstone school" of improv, because humans are interested in power dynamics (mostly preconsciously, i.e. we usually aren't conscious of it but can become so), especially in situations where the power balance is shifting - so this is the key if you want to improvise acts that feel realistic and capture the audience's attention.

To really understand it, I would recommend taking some improv classes that are based on Johnstone's teachings. But the book will give you the idea.


I tried digging around Keith Johnstone, but I could only find theatrical improv which, unless it flies over my head, has very little to do with the workplace dynamics of real jobs. Unless the concept is to insist on the fact that adult life is just a play, and to treat your day like a space of randomness to disrupt the established roles?

"Compliance" as we know it today is going away

Care to elaborate?

Just a genuinely excellent essay written to a broader technical audience than simply those software engineers who live in the guts of databases optimizing hyper-specific edge-cases (and no disrespect to you amazingly talented people, but man your essays can be very chewy reads sometimes). I hope the OP’s got some caching ready, because this is going to get shared.

Rewrote The Guardian’s headline for clarity. Article suggests that economic forecasts and models around climate change fail to sufficiently model the impacts of shocks, like severe weather events, and that far more substantial negative economic impact is incredibly likely as a result.

I like the original title, "Flawed economic models mean climate crisis could crash global economy, experts warn". Why edit it?

I was going to submit it with the title "Flawed economic models mean climate crisis could crash global economy" as the "experts warn" takes it just over the character limit

Nevermind that society dictates everyone must work to survive by default.

Nevermind that work has become significantly more precarious, the cost of living higher, the wages lower.

Ageism is just a dick move in general. It's gotten to the point where job candidates in their 30s and early 40s are dropping work history and education to appear as if they're in their 20s to potential employers - and even considering plastic surgery[1]. It's gotten completely out of control, but I'm quite glad to see more of my peers and younger colleagues taking a firm stance against it in any form.

As long as the work gets done, everything else is irrelevant. As long as the idea is successful, it doesn't matter the age of the person who surfaced it.

Stereotyping just gets your ass into legal trouble, and the easiest solution is to just not do it in the first place.

[1]https://www.businessinsider.com/resume-botox-lying-millennia...


> Ageism is just a dick move in general.

It's also self-defeating. Yes, there are greybeards who are stuck in their ways and refuse to learn anything new. But more often than not, the greybeards are super good team members in ways that the younger employees can't hope to compete with, because all that experience has taught them a ton about what works and what doesn't. But rather than trying to harness that valuable knowledge, companies shoot themselves in the foot by ignoring it. It's ridiculous.


This only works assuming somebody cares what works and what doesn't. Often nobody does. Most organizations do not tend to reward the good decision that made everything easy. They reward things that look hard and projects that take forever as long as they can somehow be spun as successes.

I've been pretty successful but my advice is almost always ignored. Where it matters is the stuff I do or the stuff I have control over (e.g. teams I lead).


This entire comment reverberates deep in my bones of late, and I sympathize with you on struggling to find recognition for just doing good work or having good ideas.

A company I worked at years ago had a devious and exploitative approach to market domination: hiring older super experienced workers and plugging them into teams with young over-eager programmers…

People with deep industry knowledge who were trained up to be decent programmers (middling, but serious, consistent, and quality-focused), setting the direction. Those domain experts were working with young dumbasses who would burn 60+ hour weeks to make sales deadlines and keep current with ever-shifting platform tech that breaks all the time. SMEs baked into the core development loop, DDD-made-flesh essentially, with cheaper, more junior devs supporting scale and maximizing the SMEs' vision and contributions.

It’s an obvious and effective strategy. I’d speculate the management skill it takes to set up is what keeps it a rarity.


Try telling this to the baby-faced MBAs that run the org.

> Nevermind that society dictates everyone must work to survive by default.

What is the alternative ?


A little over 100 years ago women were only 20% of the labor force. [1] Which is to say, most women did not participate in wage employment.

Now they're ~47%. Which is great! But it also hints that society doesn't need most of the labor for the system to still function.

[1] https://www.dol.gov/agencies/wb/about/history


Rather like subsistence farming, everyone got out of housework as soon as possible and into a far less onerous office.

People didn't get out of housework. They're doing housework and office work.

Housework isn’t as onerous now as it was 100 years ago, though.

If I could retire, raise my kids and homeschool them, and do housework instead, I would in a second. Nothing is more interesting, more fulfilling, and more challenging than raising the next generation.

"Work" does not exclusively mean "work full time for a wage".

yeah but carrying and raising kids?

You appear to be asking a trick question, disingenuously.

There's a vast continuum between the grossly unequal, homelessness-everywhere conditions of many corrupt, third-world countries with masked paramilitary disappearance squads, and a large, happy, well-paid middle class that can afford to buy things, take vacations, and enjoy life where corruption is lesser.


The disappearing middle class in America is becoming the upper class.

>Nevermind that society dictates everyone must work to survive by default.

How does a society that allows not working function? How does it defend itself against attacking societies?


How much of our labor is being used on activities that improve society's ability to defend itself, even in an indirect way? Isn't most of it, schematically speaking, being used to serve coffee and send email?

Waste abounds, but how would you get members of the society to feel like things are "fair enough" if everyone didn't "have to work"? (They obviously aren't currently, but I am referring to a more ideal society where obviously some people need to do some work.)

With technological improvements, we could work far less than we do and enjoy a nice quality of life. Those excess gains were slurped up by the ruling class instead. And the 2nd question looks like American propaganda where if you don’t spend trillions and trillions on defense, the Chinese, Russian, whatever boogeyman will get you.

>With technological improvements, we could work far less than we do and enjoy a nice quality of life.

The current allocation of who does and who does not have to work and how much they have to work is suboptimal, and one of the reasons for societal decay.

>And the 2nd question looks like American propaganda where if you don’t spend trillions and trillions on defense, the Chinese, Russian, whatever boogeyman will get you.

There are multiple examples of the Chinese, Russian, Americans, and other boogeymen "getting" others in my short lifetime of 40 years.

Either way, there's highly undesirable work that has to be done in many societies, whether it's cleaning sewers, farming in hot, humid weather, or educating oneself for 30 years just to do surgery at 2 AM and clean up the fluids and mess of that surgery. If only some people have to do that and not others, it obviously raises questions of fairness, so the fair alternative is that everyone has to work for a certain quality of life (which is not currently true for those with >$x assets).


I love how myopic the knee-jerk reactions to these pleas of modesty and decency tend to be.

"If AI replaces all jobs, none of us will have to work!" Alright, let's extrapolate a bit.

Society is currently organized around working to survive. AI suddenly replaces all work. How do people survive?

"Well everything will just be free now" Will it? Will the Capitalists who built these systems and replaced that labor now suddenly just give away product? Housing? Food? Care?

"Well, we'll just have to reconfigure society!" I mean, yeah, sure, obviously that'll have to happen. Will the Capitalists who empower the current systems of governance now cede said power when work is no longer available but still necessary to survive?

"Oh, well, people need to cooperate then, speak up for themselves, take action now." I don't disagree, and I think these sorts of Op-Eds, the "AI Doomers" making pleas for decency and civility in comments sections, the artisans demanding compensation for the theft of their work, and the myriad of folks who recognize the pace we're on will get people killed - nevermind the folks highlighting AI's disproportionate use in mass surveillance, genocide, and inflicting harm on "undesirables - are doing exactly that: speaking up, taking action, and attempting proactive reform.

"But they're hindering AI!" That's the fucking point you colossal numpty. The point is to slow it down so we have time to adapt.

Like...jesus, I expected more/better from folks who digest mathematical proofs and Arxiv papers for funsies, yet so many people here just cannot think critically about complex issues that involve people other than themselves.


> I expected more/better from folks who digest mathematical proofs and Arxiv papers for funsies

Hate to break it to you, but the real hard problems are in the humanities.


As someone who flunked out of psychology and squeaked by with a D in sociology in college, this hit me right in the bones.

I've come a long way since, but yeah, more of the humanities are sorely needed in tech.


Unfortunately there is no way to slow down technological progress. The plea for mercy in the face of rapid change is heard, and now it's time to adjust instead of asking for a pause.

The Guardian opinion piece is sad to me, in that the view of humanity freed from work is seen as a problem. I prefer to think that we could adjust our economic goals from 'high employment' to more wholesome metrics about mental health and happiness.


Some people seem to think that the capital-owning class is somehow going to grow a conscience and start sharing their wealth with the common folks. Let's be real: that has never happened willingly; only by force and bloodshed have working people ever gotten anything.

Moreover it's possible to use military power to lock things down so hard that the people don't even have a chance to revolt. For example North Korea, or any other despotic regime in the world.

If you think the musks and zuckerbergs are going to ever give anyone anything think again!

The post-scarcity, post-work future means complete poverty for the majority of the world's people. (So in fact the complete opposite: lots of work and lots of scarcity.)


> The point is to slow it down so we have time to adapt.

...and also to try to pry it loose from the fingers of the capitalists, so we have a hope of being able to share in the prosperity it brings.


I mean, also yes, but barring that I'd still like some time to avoid having the Civilizational Tesla slam into the concrete barrier on auto-drive. Y'know, just to try.

>Like...jesus, I expected more/better from folks who digest mathematical proofs and Arxiv papers for funsies, yet so many people here just cannot think critically about complex issues that involve people other than themselves.

People who LARP about digesting mathematical proofs and Arxiv papers for funsies.


> People who LARP about digesting mathematical proofs and Arxiv papers for funsies.

Damn, call the burn ward.


Came here to say this. Nobody is saying "I want to work forever", we're saying "can we not replace work while our entire global civilization is predicated on working to survive?"

JFC, if AI replaces work wholesale right now billions of people will die before society is reshaped accordingly. More people need to think of immediate systemic impacts instead of the high-fantasy post-work future the AI folk are selling.


> if AI replaces work wholesale right now billions of people will die before society is reshaped accordingly.

Don't worry, the economists will slap the label "natural readjustment of labour supply levels" on this phenomenon, and it will make everything morally better.

Edit: in fact, we have historic precedents in e.g. Indian famines and how the British administrations talked about them and handled them [0][1]. Ah, malthusianism and undisguised racism, what a mixture. Of course, nobody counts those as part of "the millions of victims of Capitalism".

[0] Rune Møller Stahl, "The Economics of Starvation: Laissez-Faire Ideology and Famine in Colonial India": https://www.researchgate.net/publication/304189843_The_Econo...

[1] Jayant Chandel, "Political Economy of Famines in the British Empire: An Analysis of the Great Famines in India from 1876–1879" (PDF): https://www.journalofpoliticalscience.com/uploads/archives/8...


No they won't. You're missing the other half; if labor becomes free, the fruits of that labor become exceedingly cheap or even free.

See: the rapid drop in cost of food, manufactured goods, etc as automation took over those sectors. No one starved when we automated farming; they got fat.


We're destroying the planet in the process. NOTHING is fucking free - wake up. It has costs: energy and waste (and waste needs to be reprocessed, so energy again).

If you look at how much energy is put into producing 1 kcal of food, you will see that the balance is negative. We put in around 6x more than we get out (diesel, synthetic fertilizers, water pumping, etc.). This is only possible because we have cheap energy, like fossil fuels. Unfortunately, it has hidden costs the smartasses didn't anticipate.


Keep everyone precarious and fearful, stringing together multiple bullshit jobs to make the rent, always one car repair or health scare away from the abyss.

Let owners/exploiters suppress the wages they pay workers in the name of efficiency.

Encourage owners/exploiters to relentlessly raise the prices workers pay owners/exploiters in the name of shareholder value.

Then say "there is no alternative", our civilization is predicated on systematic exploitation to survive, and if you try to change it now "everyone will die".

The owner/exploiter class is going to replace labor with capital like they always have.

The worker's enemy isn't the automation that eliminates work; the worker's enemy is the owner/exploiter who weaponizes automation in their class war.


> Came here to say this. Nobody is saying "I want to work forever", we're saying "can we not replace work while our entire global civilization is predicated on working to survive?"

This lacks imagination. If AI ever does get good enough to replace that much human work, we'll just have to drastically change, and we will. We're much better off than humans were even 100 years ago, never mind before that. Why are people so utterly pessimistic?


It's less "gloomy" and more of a passionate "Hey, we need to rework the social order anyway, so can we maybe not set everything on fire before we do so?"

Nobody's disagreeing with your latter line, just vehemently screaming that there's no need for willful harm.


Several issues with this:

1. Economic change drives social change. The political will to create something like UBI will not exist unless there is mass unemployment.

2. Right now we need people to work, in order to create the things they need to live. It will not be possible to allow willful unemployment until machines can actually do most jobs.

3. We don't actually know if 100% automation will happen. Past automation has tended to create new jobs, and we've maintained full employment at higher wages. We should see if this happens again before we start panicking.

We just have to jump ahead with automation and figure out the rest as we go.


We’ve maintained full employment in some sense, but importantly we work fewer hours per week for fewer years of our lives than before.

> We just have to jump ahead with automation and figure out the rest as we go.

Get that accelerationist fatalism outta my face. Just because you personally have no qualms with harming others in the name of some facsimile of progress, doesn't mean it's the only option available to us. Slowing things down through regulations, through employment mandates, through pleas for cooperation instead of immediate replacement, all of those and more are ways of gradual reform and adaptation.

We're proposing letting the organism (humanity) adapt to traditional work and employment being wholesale eliminated in a society that demands work for basic survival through gradual and continuous reforms as circumstances change. Your proposal is the functional equivalent of telling an endangered species, "lol get gud bruv".

We are not the same.


> through employment mandates

Be careful not to create a permanent future of mandatory makework.


If we don't want makework, then don't mandate work to survive.

Simple.


Then what, exactly, do you mean by "employment mandates"? You're the one who suggested that.

Nobody "mandates" work to survive right now. But we do have a system where everything is owned by somebody, and ownership tends to concentrate. You can convince people to give you stuff they own (or money, which stands in for stuff) by doing work. The risk is that nobody will need your work, so you won't be able to convince them to give you any stuff, ownership will concentrate even more, and you will end up with nothing.

If Those In Charge require that "owners" continue to give you stuff for work that they don't need, that crude patch still doesn't provide you with any other way to get stuff. Which means you have to keep doing this useless work forever.

It's not about "don't mandate work". It's about fundamental change to the entire system.


You're panicking. We're gonna be fine.

We've done lots of automation before, and we all benefited immensely. Just chill and deal with problems as they come up.


That’s just faith though. And I see no driving reason to trust in your faith.

We’ve never seen anything remotely close to this kind of upheaval. This kind of techno-optimism makes sense in the very young, but seems painfully naive once you’ve been around the block a bunch of times.

There are no adults in the room driving this. There’s weird ultra elite people driving this forward with their competitive megalomaniacal egos. And we’re stuck in a game theoretic landscape where it’s effectively an inevitable race to a max AI future. None of this is a recipe for a stable or prosperous future.

If this wild accelerated future ends up being a utopia, I’ll be jazzed to eat my words. I just haven’t seen any utopias unfold in my life, and I’m skeptical of those who tell me to chill and embrace the chaos.


This.

Look, I've got enough things challenging my actual faith that I just cannot deal with piling on more so that the pro-AI camps don't have to do some critical thinking or basic empathy.

Those with the means have an obligation to help those without. It's basic cooperation. It's how humanity made it this far.


> We’ve never seen anything remotely close to this kind of upheaval.

I bet that has been said hundreds of times already through history.

I agree with the other poster, this will most likely be a good thing for mankind overall. I'm excited for the future.


The "problems that come up" will be people homeless and starving.

I'm 100% with stego-tech that I think we should address the major, glaring concerns that come with greater automation before that happens.

Because I care about my fellow human beings, and do not want them to suffer.


People starving en masse because there was no work to do anymore and no way to get paid, despite there being plentiful food, is something that has never happened. If the amount of money on the consumer side of the food economy were to shrink significantly, what should happen is that the price of food should also go down until people can buy food again. The stock has to move no matter what, otherwise it spoils.

Didn't that happen during most/all of history? There was never much of a problem growing more food, there just wasn't any reason to.

In Victorian England, for example, it would have been easy to grow way more food and build way more housing, but the government used violence to prevent it in order to enforce the property rights of huge landowners (who then lost them anyway - well, except the royal family, of course).


And if the price that those people can afford is $0, because ClaudeGPTPilot MegaAGI took all their jobs? What then?

There is a price below which no farmer is going to sell, regardless of whether they have another buyer.

What you're saying effectively amounts to "come on, there's no way they'd actually let people starve in the streets! That's something that could never happen these days."

Think about whether there might be other things that "could never happen these days" that are happening right now, in various places in America.


If we're talking about huge groups, like thousands of people, having no money for food, it would behoove the government to pay for that food, just to avoid a revolt. Just as a matter of self-preservation.

Yeah? And do you think this government is likely to do that?

By "this government" you mean Trump's administration? If so, then to answer your question, I need to presuppose that there's going to be such a massive revolution in automation within the next three years (already a dubious proposition IMO) that intervening with food aid is going to be a concern for the US's federal government. We're talking millions of people across the US who might start rioting because they need to eat. That's the scenario you want me to consider? I don't know what the logistics might be, but if the state governments let their local situations fester to that degree it might be too late to do anything federally.
