True, but that will mean an even greater "winner takes all" scenario, where a small cadre of 8-figure-compensated hyper-managers and tastemakers supervise armies of agents. To the 99% of people who lose their job doing some easily commodifiable task, this scenario is indistinguishable from AI taking every job.
I think most of the issues with "vibe coding" come from trusting the current generation of LLMs with too much, as writing a hacky demo of a specific piece of functionality is a tenth as difficult as making a fully fledged, dependable, scalable version of it.
Back in 2020, GPT-3 could write functional HTML from a text description; however, it's only around now that AI can one-shot functional websites. Likewise, AI can one-shot a functional demo of a SaaS product, but it is far from being able to one-shot the entire engineering effort of a company like Slack.
However, I don't see why the rate of improvement will not continue as it has. The current generation of LLMs hasn't even been trained yet on Nvidia's latest Blackwell chips.
I do agree that vibe coding is like gambling, but that is beside the point: AI coding models are getting smarter at a rate that is not slowing down. Many people believe they will hit the flat part of a sigmoid somewhere before they reach human intelligence, but there is no reason to believe that besides wishful thinking.
That AI would be writing 90% of the code at Anthropic was not a "failed prediction". If we take Anthropic's word for it, their agents are now writing 100% of the code:
Of course you can choose to believe that this is a lie and that Anthropic is hyping its own models, but it's impossible to deny the enormous revenue the company is generating from products whose code it now hands almost entirely to coding agents.
One thing I like to think about is: if these models were so powerful, why would they ever sell access? They could just build endless products to sell, likely outcompeting anyone else who needs to employ humans. And if they didn't build their own products, they could be the highest-value contractor ever.
Well, there are models that Anthropic, OpenAI, and co. have access to but haven't provided public APIs for, due both to safety and to what you've cited as the competitive-advantage factor (like OpenAI's IMO model, though it's debatable whether it was an early version of GPT-5.1/2/3 or something else).
The thing, however, is that the labs are all in competition with each other. Even if OpenAI had some special model that could give it the ability to make its own SaaS and products, it is worth more to sell access to the API and use the profit to scale, because otherwise its competitors will pocket that money and scale faster.
This holds as long as the money from API access to the models is worth more than the comparative advantage a lab retains from not sharing it. Because there are multiple competing labs, the comparative advantage is small (if OpenAI kept GPT-5.X to themselves, people would just use Claude and Anthropic would become bigger, same with Google).
This may not hold forever, however; it is just a phenomenon of labs focusing more heavily on their models while making only marginal product efforts.
I love this and will make it my motto. Scale yourself 100x every 3 years, or you're too slow. If I manage to keep it up roughly 11 years I will finally achieve planet scale.
Oddly enough, this is just the American Dream under exponential growth. "Someday you'll be rich as well" is just weaponized hope, and folks that follow GP's advice gobble it up because it's aspirational.
I feel like I'm caught in between two schizophrenically myopic perspectives on AI.
One being:
>Generative AI is a product of VC-funding-enabled hype, enormous subsidies, and fraudulent results. No AI code "really" works or contributes to productivity, and soon the bubble will burst, returning Real Software Engineers to their former peerless ascendancy.
And the other perspective:
>The AI boom will be the last chance to make money, after which point your socioeconomic status circa 2028 will be the permanent station of all your progeny, who will enjoy a heavenly post-scarcity experience with luxury amenities scaled by your PageRank equivalent of social connections to employees at leading AI labs.
Only about 20% of front-page links are related to AI. I think it's impossible to have a productive discussion on the tech industry nowadays without AI in context.
On the contrary, I don't think productive discussions (or even interesting ones) can be had about AI. We've seen what it has to offer (not much); now we are just waiting for the hype bubble to burst, as it did for blockchain and so many other things before.
No matter what, people are still going to use cars, because they have an absolute advantage over public transportation for certain use cases. It is better to improve the existing status quo to reduce death rates than to hope for a much larger-scale change in infrastructure (when we have already seen that attempts at infrastructure overhaul in the US, like high-speed rail, are just infinitely deep money pits).
Even though the train system in Japan is 10x better than the US's as a whole, the per-capita vehicle ownership rate in Japan is not much lower than in the US (670 per 1,000 vs. 779 per 1,000). It would be a pipe dream for American trains/subways to be as good as Japan's, but even a change that significant would reduce vehicle ownership by only about 14%.
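Making the arithmetic explicit (a back-of-the-envelope sketch using the ownership figures above, nothing more):

    # If US per-capita ownership (779 per 1,000) fell to Japan's
    # level (670 per 1,000), the relative reduction would be:
    us, japan = 779 / 1000, 670 / 1000
    reduction = (us - japan) / us
    print(f"{reduction:.1%}")  # -> 14.0%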
Off on a tangent here but I'd love for anyone to seriously explain how they believe the "AI race" is economically winnable in any meaningful way.
Like, what is the believed inflection point that changes us from the current situation (where all of the state-of-the-art models are roughly equal if you squint, and the open models are only about one release cycle behind) to one where someone achieves a clear advantage that won't be reproduced by everyone else in the "race" virtually immediately?
I _think_ the idea is that the first one to hit self improving AGI will, in a short period of time, pull _so_ far ahead that competition will quickly die out, no longer having any chance to compete economically.
At the same time, it'd give the country controlling it so much economic, political and military power that it becomes impossible to challenge.
I find that all to be a bit of a stretch, but I think that's roughly what people talking about "the AI race" have in mind.
They ultimately want to own everyone's business processes, is my guess. You can only jack up the subscription prices on coding models and chatbots by so much, as everyone has already noted... but if OpenAI runs your "smart" CRM and ERP flows, they can really tighten the screws.
If you have the greatest coding agent under your thumb, eventually you orient it toward eating everything else instead of letting everybody else use your agent to build software & make money. Go forward ten years and it's highly likely that GPT, Gemini, and maybe Claude will have consumed a very large amount of the software ecosystem. Why should MS Office exist at all as a separate piece of software? The various pieces of Office will be trivial for the GPT (etc.) of ten years out to fully recreate & maintain internally for OpenAI. There's no scenario where they don't do what the platforms always do: eat the ecosystem, anything they can. If a platform can consume a thing that touches it, it will.
Office? Dead. Box? Dead. Dropbox? Dead. And so on. They'll move on anything that touches users (from productivity software to storage). You're not going to pay $20-$30 for GPT and then pay for Dropbox too; OpenAI will just do an Amazon Prime maneuver and stack more onto what you get to try to kill everyone else.
Google of course has a huge lead on this move already with their various prominent apps.
Dropbox is actually a great example of why this isn't likely to happen. Deeper-pocketed competition with tons of cloud storage and the ability to build easy upload workflows (including directly into software with a massive install base) exists, and showed an active interest in competing with them. Dropbox is still doing OK.
Office's moat is much bigger (and its competition is already free). "New vibe-coded features every week" isn't an obvious reason for Office users to switch away from the platform their financial models and all their clients rely on to an upstart software suite.
> Off on a tangent here but I'd love for anyone to seriously explain how they believe the "AI race" is economically winnable in any meaningful way.
Because the first company to have a fully functioning AGI will most likely be the most valuable in the world. So it is worth all the effort to be the first.
> Because the first company to have a fully functioning AGI will most likely be the most valuable in the world.
This may be what they are going for, but there are two effectively religious beliefs with this line of thinking, IMO.
The first is that LLMs lead to AGI.
The second is that, even if the first turned out to be true, the labs wouldn't all stumble into AGI at the same time. Given how relatively lockstep all of the models have been for the past couple of years, that seems far more likely to me than any single company having a breakthrough the others don't immediately reproduce.
There's a synergy effect here: Tesla sells you a solar roof and car as a bundle; the roof comes without a battery (making it cheaper), and the car gets a free recharge whenever you're home (making it cheaper in the long term).
Of course that didn't work out with this specific acquisition, but overall it's at least a somewhat reasonable idea.
In comparison to datacenters in space, yes. Solar roofs are already a profitable business, just not likely to be high-growth. Datacenters in space are unlikely to ever make financial sense, and even if they did, they are very unlikely to show high growth, due to the ongoing high capital expenses inherent in the model.
I think a better critique of space-based data centres is not that they never become high-growth; it's that if they ever do, it implies the economy is radically different from the one we live in, to the degree that all our current ideas about wealth and nations and ownership and morality and crime & punishment seem quaint and outdated.
The "put 500 to 1000 TW/year of AI satellites into deep space" for example, that's as far ahead of the entire planet Earth today as the entire planet Earth today is from specifically just Europe right after the fall of Rome. Multiplicatively, not additively.
There's no reason to expect any current business (or nation, or any given asset) to survive that kind of transition intact.
It's obviously a pretty weird thing for a car company to do, and is probably just a silly idea in general (it has little obvious benefit over normal solar panels, and is vastly more expensive and messy to install), but in principle it could at least work, for some value of "work". The space datacenter thing is a nonsensical fantasy.
Big tech businesses are convinced that there must be some profitable business model for AI, and are undeterred by the fact that none has yet been found. They want to be the first to get there, raking in that sweet sweet money (even though there's no evidence yet that there is money to be made here). It's industry-wide FOMO, nothing more.
Typically in capitalism, if there is any profit, the race is toward zero profit. The alternative is a race to bankrupt all competitors at enormous cost in order to jack up prices and recoup the losses as a monopoly (or duopoly, or some other stable arrangement). I assume the latter is the goal, but that means burning through 50%+ of American GDP growth just to be undercut by China.
IMO I would be extremely angry if I owned any SpaceX equity. At least Nvidia might be selling to China in the short term... what's the upside for SpaceX?
Again, different markets, because I'm not going to do either of those things—if I'm ordering online, Amazon has better selection, and if I want to walk somewhere to pick something up, I'm not going to wait for shipping.
Taxi apps, delivery apps, social media apps—all of these require a market that's extremely expensive to build but is also extremely lucrative to exploit and difficult to unseat. You see the same model with big-box stores displacing local stores. The secret to making a lot of money under capitalism is to have a lot of money to begin with.
Taxi apps—uber & lyft. They moved into an area (often illegally); spent a shit-ton of money to displace local legal taxis, and then jacked up prices when the competition ceased to exist. Now I can't hail a taxi anymore if I don't have a phone.
> None of the big-box stores have created a monopoly.
They have in my region. Mom-and-pop shops are gone.
> Amazon unseated behemoth Walmart with a mere $300,000 startup capital.
We've been over this—they occupy different markets.
> Musk founded his empire with $28,000.
Sure. It would have been far easier to do with more capital.
People keep saying this, but it's simply untrue. AI inference is profitable. OpenAI and Anthropic have 40-60% gross margins. If they stopped training and building out future capacity, they would already be raking in cash.
They're losing money now because they're making massive bets on future capacity needs. If those bets are wrong, they're going to be in very big trouble when demand levels off lower than expected. But that's not the same as demand being zero.
Those gross profit margins aren't that useful, since training a model of fixed capability is continually getting cheaper, so there's a treadmill effect where staying in business requires constantly training new models to not fall behind. If the big companies stop training models, they only have a year before someone else catches up with way less debt and puts them out of business.
Only if training new models leads to better models. If the newly trained models are just a bit cheaper but not better, most users won't switch. Then the entrenched labs can stop training so much and focus on profitable inference.
Well, that's why the labs are building app-level products like Claude Code/Codex to lock their users in. Most of the money here is in business subscriptions, I think; how much savings would be required for businesses to switch to products that aren't better, just cheaper?
Stop this trope please. We (1) don't really know what their margins are and (2) because of the hard tie-in to GPU costs/maintenance we don't know (yet) what the useful life (and therefore associated OPEX) is of GPUs.
> If they stopped training and building out future capacity they would already be raking in cash.
That's like saying "if car companies stopped researching how to make their cars more efficient, safer, and more reliable, they'd be more profitable."
A significant number of AI companies and investors are hoping to build a machine god. This is batshit insane, but I suppose it might be possible. Which wouldn't make it any more sane.
But when they say, "Win the AI race," they mean, "Build the machine god first." Make of this what you will.
I'm not certain SpaceX is generating much cash right now?
Starship development is consuming billions. F9 & Starlink are probably profitable?
I'd say this is more about shifting the future burden of xAI to one of his companies he knows will be a hit stonk when it goes public, where enthusiasm is unlikely to be dampened by another massive cash drain on the books.
That may be the plan, but this is also a great way for the GDPR's maximum fine (up to 4% of global annual revenue) to bite on SpaceX's much higher revenue rather than xAI's alone. And without any real room for argument.
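For concreteness, a minimal sketch of the fine ceiling; the revenue figures below are hypothetical, and only the up-to-4%-of-worldwide-turnover rule (GDPR Art. 83(5)) is real:

    # GDPR caps fines at the greater of EUR 20M or 4% of
    # worldwide annual turnover.
    def gdpr_max_fine(global_revenue_eur: float) -> float:
        return max(20_000_000.0, 0.04 * global_revenue_eur)

    print(gdpr_max_fine(1e9))   # xAI alone (hypothetical EUR 1B):   40,000,000.0
    print(gdpr_max_fine(15e9))  # merged entity (hypothetical 15B): 600,000,000.0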
Knowing what users want and need is more the essence of a product manager, not a software engineer.
Software engineering is solving problems given a set of requirements, and determining the value, need, and natural constraints of those requirements in a given system. Understanding users is a task that interfaces with software engineering but sits more on the "find any way to get this done" axis of value rather than the "here is how we will get it done" one.
I'd say what OP is referencing is that LLMs are increasingly adept at writing software that fulfills a set of requirements, with the prompter acting as a product manager. This devalues software engineers in that many somewhat difficult technical tasks, once the sole domain of SWEs, are now commodified via agentic coding tools.
That's a dangerous distinction in the AI era. If you reduce your work to solving problems given a set of requirements, you put yourself in direct competition with agents. LLMs are perfect for taking a clear spec and outputting code. A "pure" engineer who refuses to understand the product and the user risks becoming just middleware between the PM and the AI. In the future, the lines between PM and Tech Lead will blur, and the engineers who survive will be those who can not only "do as told" but propose "how to do it better for the business".
> Software engineering is solving problems given a set of requirements, and determining the value, need and natural constraints of those requirements in a given system
That's the description of a mid-level code monkey according to every tech company with leveling guidelines, and one easily outsourced and commoditized before the age of AI.
And most of the 3 million developers working in the US aren't working for a FAANG and will never make over $200K inflation-adjusted. If you look at the comp of most "senior developers" outside of FAANG and equivalent, you'll see that the comp has been stagnant and hasn't kept up with inflation for a decade.
I have personally given the thumbs down to two developers who came from a FAANG when it was clear that they were just code monkeys who had to have everything handed to them.
Have you looked at how hard it is for mid-level code monkeys, even from a FAANG, to get a job these days? Just being able to reverse a binary tree isn't enough anymore.
FWIW, I did a 3.5-year stint at AWS Professional Services until late 2023 (full time, with the same 4-year comp structure as software devs get), but made about 20% less, and it was remote the whole time I was there. And I'm very well aware of what software developers make.
I still work full time at a consulting company (cloud + app dev). And no, FAANG doesn't pay enough of a difference over what I make now to give up remote work in state-income-tax-free, relatively low-cost-of-living central Florida, at 50 years old and with grown (step)kids.
Great, then we can use AI to solve the problems given a set of requirements, and spend more time thinking about what the requirements are by understanding the users.
PM and software development will converge more and more as AI gets better.
The best PMs will be the ones who can understand customers and create low-fidelity prototypes, or even "good enough" vibe-coded solutions, for customers.
The best engineers will be the ones who use their fleet of subagents to work on the "correct" requirements by understanding their customers.
At the end of the day, we are using software to solve people's problems. Those who understand that, and have the skills to dive in and navigate people's problems, will come out ahead.
The US's entire economy depends on tech. They won't do anything that would compromise the integrity and viability of the international tech industrial complex.
In the US you also are not arrested for social media posts like you are in the UK or other parts of Europe.
At the moment, you can’t GET INTO the US if you have a social media post that even criticises the administration. Is that the “free speech” people in the US are so obsessed with?