I've always felt that AI's main contribution to eliminating jobs is giving CEOs the ability to do layoffs while trying to both separate themselves from the current economic uncertainty and imply that they are an AI company.
Companies do this all the time. A CEO's job is to convince investors that their company stands to win in whatever the current hot trend is. During bitcoin's crazy run around 2021-2022, a ton of tech companies hopped on the bandwagon and branded themselves as blockchain companies; just look at Block/Square. The current trend is that AI is hot and the economy isn't. Therefore, it's beneficial to the stock price to tell your investors that you're laying off 50% of your staff because you're AI-powered. My experience has been that most companies have an incredibly patchwork implementation of AI, and that most of the work they do (particularly at larger companies) isn't made more efficient by using AI.
In a few years, there will be some new hotness, and all companies will be saying that the DNA of their company is whatever that is.
As for the current uncertainty in the job market, when you randomly have 50% tariffs slapped on goods you need and can't readily find available in the US for the same price and find that 20% of the world's oil supply is cut off, you tend to not want to invest in the future. Talking about AI is cheap. Tariffs are expensive.
AI is about to get a lot more expensive as Taiwan (TSMC) and other Asian chip manufacturers either lose access to the natural gas they need or see its price spike.
I'd be interested to see if the actual cost of AI will actually have any impact on how often CEOs end up talking about it. In my experience, there's a certain level of assessment that goes into whether or not a line item on your expenses is considered a problem or an investment. If you can still hand wave your way into convincing investors that $200K in AI credits replaces 3 $200K/year software engineers, even though it used to be $100K for the same amount of credits, you might be fine. At some point, some part of that equation will likely fall out of favor with investors or the math will no longer work out, and maybe it's the cost of natural gas or helium.
>AI is about to get a lot more expensive as Taiwan (TSMC) and other Asian chip manufacturers either lose access to the natural gas they need or see its price spike.
Also, before the war, Trump got GCC countries to promise to invest $2 billion into AI. That money will probably not come now.
Also, power will get more expensive, so running AI data centers will cost more.
Not sure about GCC countries not paying. Vassals don't really get a say in anything. As for oil and gas deliveries, that is where "force majeure" can be activated.
I also think equating good = "no monetization" is exactly how we've ended up in a situation where everything is controlled by a few giant mega corps, hordes of MBAs, and unethical ad networks.
We should want indie developers, writers, etc to make money so that the only game in town doesn't end up being those who didn't care about being ethical. </rant>
The thing that actually killed Blockbuster was Carl Icahn. He bought up a bunch of shares and wanted to quickly turn a profit on the company. At the time, they were investing heavily into a Netflix-like service, which required a significant up-front capital investment and, therefore, was losing money. Icahn, wanting to make a profit, decided to cut spending and basically not look forward at all. He got a quick, massive bump in stock price and jumped ship as it was crashing into the iceberg. Blockbuster was caught in the middle of a paradigm shift and found itself massively underprepared to deal with it.
This is interesting, but isn't necessarily correct.
If Blockbuster had kept pouring money into the new service, maybe it would have lost it all - I see no reason to think Blockbuster's movie rental franchise business would have had 'transferable skills' to let it succeed at streaming.
If it had been trying to pivot into a pizza delivery business (perhaps more transferable, in terms of locating franchises etc) would Icahn still have been 'killing' it?
My point is, maybe it was already dead and Icahn just prevented it from wasting a lot of money on the way down the drain.
When I was really early in my career, a mentor told me that code review is not about catching bugs but spreading context (i.e. increasing bus factor.) Catching bugs is a side effect, but unless you have a lot of people review each pull request, it's basically just gambling.
The more expensive and less sexy option is to actually make testing easier (both programmatically and manually), write more tests and more levels of tests, and spend time reducing code complexity. The problem, I think, is people don't get promoted for preventing issues.
This depends on the industry. I work on industrial machine control software, and we spend a huge amount of time on tests. We have to for some parts (human-safety critical), but other parts would just be expensive if they failed (loss of income for customers, and possibly damaged equipment).
The key to making this scalable is to make as few parts as possible critical, and to make the potential bad outcomes as benign as possible. (This lets you go to a lower rating in whatever safety standard applies to your industry.) You still need tests for the less critical parts, though: while downtime is better than injury, if you want to sell future machines to your customers you need a good track record. At least if you don't want to compete on cost.
You have to make sure it doesn't arrive at you before it is on the dashboard. Otherwise you are the reason the time-to-fix-a-bug metric is blowing up. Unless you can make the problem so obscure that even the other smart people asked to help can't figure it out, which keeps it from making you look bad.
That's not preventing the issue, though. The closest you can get is to have a competitor get burned hard and then demonstrate how your code base has the exact same issue. But even that isn't guaranteed. "That can't happen here" is a hard mindset to disrupt unless you yourself are already in the C-suite.
One of the major things code review does is prevent that one guy on your team who is sloppy or incompetent from messing up the codebase without singling him out.
If you told someone "I don't trust you, run all code by me first" it wouldn't go well. If you tell them "everyone's code gets reviewed" they're ok with it.
Everyone is sloppy sometimes. I wonder if what code review really does is limit velocity (acts as a brake) so that things don't change too fast (which is often a good thing).
You don't get paid for features or code shipped. People don't pay $200 a head for fine dining based on the number of carrot chops or garlic crushes. The chops and crushes are necessary but not what you should be optimizing for.
I think of code review as more about ensuring understandability. When you spend hours gathering context, designing, iterating, debugging, and finally polishing a commit, your ability to judge the readability of your own change has been tainted by your intimate familiarity with it. By getting a fresh pair of eyes to read it and leave comments like "why did you do it this way" or "please refactor to use XYZ for maintainability", you end up with something that will be easier to navigate and maintain for the junior interns who will be fixing your latent bugs 5 years later.
> When I was really early in my career, a mentor told me that code review is not about catching bugs but spreading context (i.e. increasing bus factor.) Catching bugs is a side effect
This BS is what I tell my juniors when I want them to fuck off with their reviews so I can focus on my actual work.
There are many days where I feel like the right thing for my career is to focus on building meaningful software that solves an actual problem. Then there are days like today, especially after seeing this.
They didn't acquire Moltbook because of the software. Meta is far behind on the AI front especially as it applies to usage adoption. OpenClaw has begun showing new consumer use cases and Moltbook is directionally down a similar path.
They get the team that built it and have more people on the AI initiative who are consumer-centric.
I've watched Matt Schlicht from the team constantly experiment with cool new use cases of AI and other technologies, and now he and Ben have a bigger lab with resources to potentially spin out larger initiatives.
The lesson here is to spend less time focused on doing what you think is the right thing and spend more time tinkering.
If they ever do anything again it will be a miracle. Meta is where smart people go to trade in their ambition and morals for stock grants and golden handcuffs.
Trading away your morals is definitely bad in a philosophical sense. Does selling your soul to the devil have a happy ending in any of the fairy tales?
>Meta is where smart people go to trade in their ambition and morals for stock grants and golden handcuffs.
Only Meta? Why not most of SV that's driven by ad revenue and data collection? Which big-tech company that pays crazy money is actually making the world a better place?
First you have to agree that Claude Code might be useful for some non-repo task, like helping with your taxes or organizing your bookmarks.
Next, consider how you might deploy isolated Claude Code instances for these specific task areas, and manage/scale that - hooks, permissions, skills, commands, context, and the like - and wire them up to some non-terminal i/o so you can communicate with them more easily. This is the agent shape.
Now, give these agents access to long term memory, some notion of a personality/guiding principles, and some agency to find new skills and even self-improve. You could leave this last part out and still have something valuable.
That’s Openclaw in a nutshell. Yes you could just plug Discord into Claude Code, add a cron job for analyzing memory, a soul.md, update some system prompts, add some shell scripts to manage a bunch of these, and you’d be on the same journey that led Peter to Openclaw.
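For the curious, the "plug Discord into Claude Code, add a cron job, add a soul.md" wiring described above can be sketched in a few lines of shell. Everything here is an illustrative assumption: the `claude -p` CLI invocation, the soul.md/memory file layout, and the paths are stand-ins, not Openclaw's actual implementation.

```shell
#!/bin/sh
# Hypothetical sketch of a DIY "agent" working directory: a personality file,
# a pile of markdown memory, and a cron entry for periodic memory compaction.
AGENT_DIR="${AGENT_DIR:-/tmp/agent-demo}"
mkdir -p "$AGENT_DIR/memory"

# Seed a personality file if one doesn't exist yet.
if [ ! -f "$AGENT_DIR/soul.md" ]; then
    printf 'You are a helpful personal agent. Be concise and cautious.\n' \
        > "$AGENT_DIR/soul.md"
fi

# What a nightly memory-compaction cron entry might look like. It is written
# to a file here so it could be installed with `crontab crontab.example`.
printf '0 3 * * * cd %s && claude -p "Summarize memory/*.md into memory/summary.md"\n' \
    "$AGENT_DIR" > "$AGENT_DIR/crontab.example"
cat "$AGENT_DIR/crontab.example"
```

The chat-app side (Discord, Telegram, etc.) would then just shuttle messages between a bot webhook and the CLI; that part is left out since it depends entirely on which platform you pick.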
I share the feeling; but the people using it are mostly non-technical (despite the 50+ config files lol) and are just running it constantly to do random things.
But a message bot + Claude Code/Codex would be the better version
I tried it for 2 days and honestly don't see the usefulness either, although the big reason is that I paired it with Claude, which only uses per-token billing. Here are the few improvements over plain Claude usage:
- As you mentioned, the message bot thing was kind of cool.
- It can browse the internet and act (like posting on MoltBook, which I tried).
- It has a permanent "memory" (loads of .md files, so nothing fancy).
- It can be scheduled via cron jobs.
Overall, nothing really impressive. It is very gimmicky and it felt very unsafe the whole time (I had already read about the security issues, but sometimes you gotta live dangerously). The most annoying part was the huge token consumption (conversations start at 20k+ because of all the .md files) and it cost me roughly $12 for a few hours of testing.
Non-technical people haven't even heard of OpenClaw or Github, let alone know how to use and deploy them. Non-technical people don't even know what the OS on their Samsung or iPhone is called.
If you can find something on Github and deploy it on your system, you're part of the technical crowd.
>My hairdresser knew all about it and had ordered a Mac mini.
Your hairdresser can't be a technical person because they're a hairdresser? I know a surgeon who writes FOSS software as a hobby. What does profession have to do with being technical or not? Most technical people are self-taught anyway.
I know them very well, and they are not a coder, or a 'technical person' by a broad HN definition.
What I'm saying is that we are at the point where technology is so pervasive in our society, and the lure of AI so seductive, that many more people are excited to try things out than I might have expected.
I suppose it has similarities to the early to mid 1980s and the home computing revolution. Where many people thought they should have a computer at home, even if they were not sure what they'd do with it.
Had someone at work ask me about this, and they visibly cringed when I told them that, as I understand it, you give the agent unfettered access to everything on your machine so it can do a lot more than, say, Siri can.
They immediately said, "Why in the fuck would I want to do that?"
I didn't know either and then we both stood there in an awkward silence. I think he was expecting OpenClaw to be some insanely cool AI Agent and discovering the "juice isn't worth the squeeze" kind of hit him harder than I expected.
1) accessibility to non-technical folks. For the first time, they are having the Claude Code experience that we've had as software engineers for some time now
2) shared, community token context. Many end users are contributing to one agent's context together. This has emergent properties
Does it only work with chat apps? I've never used it, but I thought all the hype was from it being promoted as the first real general-purpose PC-using layer that could run on anything. What can it run on then?
Having been acquired by Facebook, it's a pretty accurate read.
If they land in the right org, they'll be allowed to maintain the open version (see https://www.mapillary.com/). However, that's a rare outcome.
They'll be dumped in some org, and then bit by bit told that they can't do what they were doing before and now need to "forge alignment" or some other bullshit by posting on workplace.
They will need to deliver impact. But, with three other teams trying to do the same thing, they'll either be used as a battering ram by their org to smash the competition, or offered up as meat to save headcount.
> They get the team that built it and have more people on the AI initiative who are consumer-centric.
Who are comfortable releasing systems with horrible security, while proudly stating they never read the code? And with metrics that can be gamed by anyone, but that got reported to literally the entire world?
> The lesson here is to spend less time focused on doing what you think is the right thing and spend more time tinkering.
I'd say the lesson here is that clown world keeps on giving, but hey, maybe I'm just jealous ;)
The only currency in a world where AI does everything is your ability to get human attention. So from that perspective Moltbook is a huge success.
If Mark hired these people to do anything other than viral marketing, i.e. if he thinks they're visionaries who are going to make amazing apps, he's deluded.
You can already see how the same thing has played out with computer games. With modern engines such as Unity, almost anyone can make a game.
And as a result there are now a million games, most of which are poor-quality asset flips. Everybody suffers, creators and consumers: a race to the bottom where the bottom has been reached. Prices are zero and earnings are zero.
Fifteen years ago an indie game dev might allocate 80% to making the game and 20% to marketing. Today that will get you nothing; it's much better to spend 20% on the game and 80% on marketing, SEO, and attention harvesting. It's a shouting match where it's all about winning the shouting match, not producing the best content.
There are millions of asset flips, but the top indie games have never been better. It’s hard for indie developers because there’s so much competition: you need to heavily promote a quality game only because there are so many other quality games.
Likewise these tools have enabled many more people to create vibe-coded slop, and may lead to more quality software (making it harder to stand out without marketing), but the best software will only get better.
The implication is that the gatekeeping has become marketing dollars, when it used to be skill at making a fun game. I don't think we're in a better situation today.
There are fun games that succeed without marketing, e.g. Balatro, and there are bad games that fail despite it, e.g. Highguard.
The reason that “skill at making a fun game” doesn’t guarantee success is that there are so many fun games. Much less, if at all, because there is so much slop.
I disagree that accessibility is a detractor here.
There's never been a better time to be an indie dev. I'd rather have 1/1000 indie games be awesome than being force fed whatever storefront disguised as a game 'AAA' publishers poop out every year.
Just look at how Slay the Spire is doing up against Marathon right now. Which of those was shouting the loudest? Highguard, anyone?
It is true that the indie game market is brutal, but it's always been brutal.
You don't really hear about a crisis at the indie level, though. At the AAA level there's a lot of "we'd like to use our market power to take the risk out of game development", and then years later we realize they took out all the value before they took out the risk, and now they're doomed.
... I think he's got an affinity for other people and organizations that have succeeded in the same way. The idea that somebody out there might have a workmanlike approach to life and be able to get consistent results at something would be a threat to his worldview.
In this case in particular it looks like an acquihire.
Meta just saw two engineers actually execute on the joke about "building Facebook in a weekend" except that it then really took off in its target niche and generated a ton of press.
I don't doubt that they're interested in the AI aspect, but I suspect that a significant contributor was that they demonstrated competence right in the middle of Meta's wheelhouse so why not just grab these guys?
It's also part of their longer-term trend of buying or burying any company that starts to get any press as a social media site of note outside of major players where that hasn't been an option.
Yet Zuck can somehow argue with a straight face that FB has competition (apparently they used to straight-up delete links to competitors like Google+ back in the day, plus there's the constant copying of Snapchat), and Hacker News will split hairs over trivial definitions like "wdym FB has no competition? email exists" or whatever.
The person who got the top spot for "flashlight" in the App Store back in the day made about $600k on it before Apple made it a built-in function. He just copied existing apps and got lucky. https://www.vg.no/nyheter/i/92ybl/erik-ble-app-millionaer-de...
Those “early” AI-generated avatars created from a handful of your own photos absolutely printed money. They hit right as mildly technical people could use the tech and the tech was developed enough, but before normal people could easily do it.
I am right there with you. We might lack the language to describe this emotional state; it's like the opposite of FAFO? There's also the nuance that they were acquired by Meta, so yeah, they're rich, but now they're working for not-serious people and will flame out in 18 months.
> There's also this nuance that they were acquired by meta so yeah they're rich but now they're working for not-serious people and will flame out in 18 months.
For lack of a better word, this feels like cope. In the modern world, being rich easily covers any of those other 'downsides'. Rich people will have a far better life than I and probably many other people here ever will, whatever the situation is like in the rest of their lives.
A lot of people find their lives ruined after suddenly becoming rich. Perhaps a distant cousin tries to be your best pal out of nowhere, etc. etc.
Also you might not like being the type of person that builds moltbook. People you like might not like that type of person either!
The key seems to be to get rich slowly, or anonymously. Do not give people the idea you have more money than you know what to do with, and life will continue as it did before.
In the past ten years I have been frustrated by the tension between working on "interesting" or "important" stuff and working on dumb trendy shit. With the current LLM trend everything has become dumb trendy shit, which has made the decision simpler.
It's easy to dismiss as more A.I. FOMO. I mean, Meta's AI has half the IQ of ChatGPT or Gemini. However, a fake social network full of generated content might well be a solution for Meta's problems where their userbase inevitably doesn't measure up to what they wish it would.
Was going to cynically suggest they were just going to merge the two sites and then pretend they had higher user counts at their next earnings, but adding even more (better?) fake content is probably the more plausible idea.
For each of these successes there are many failures, as evidenced by the deluge of “Show HN” slop (which is a small fraction of all vibe-coded slop).
Because these projects are simple, there’s nothing stopping you from working on one alongside your day job building meaningful software. You can vibe-code something that actually tries to solve a real problem. You can vibe-code something interesting to learn how to generally use these tools. Although, don’t expect to get hired by OpenAI or Meta (or make any money off it).
I've said it before, but a Mexican line cook who doesn't speak English is contributing more to the world than the average Stanford-educated AI engineer at Meta.
Over the years, Meta has bought a lot of "talent" based on a single hit, and they continue to be one-hit wonders despite being embedded at Meta, with ungodly amounts of resources at their disposal. e.g. none of the game studios they bought have produced new IP, all they do is produce content for the aging, pre-acquisition games
Perhaps I'm not totally clear on how this particular device works, but it doesn't seem like it lacks the ability to connect to the Internet.
Honestly, I'd say privacy is just as much about economics as it is technical architecture. If you've taken outside funding from institutional venture capitalists, it's only a matter of time before you're asked to make even more money™, and you may issue a quiet, boring change to your terms and conditions that you hope no one will read... Suddenly, you're removing mentions of your company's old "Don't Be Evil" slogan.
As far as the Salesforce acquisition goes, I'd be curious to see who made the decision to put Heroku into maintenance only mode.
I worked for a different part of Salesforce. I don't really feel like Salesforce did a ton of meddling in any of its bigger acquisitions other than maybe Tableau. I think the biggest missed opportunity was potentially creating a more unified experience between all of its subsidiaries. Though, it's hard to call that a failure since they're making tons of money.
It could be a case of post-founder leadership seeing that there's not a lot of room for growth and giving up. That happens a lot in the tech industry.
I can also imagine that you might need to change your definition of what an accomplishment is. I tend to think of it as something that has a measurable output, but difficult-to-measure progress towards an outcome is also something (despite what product managers might think).
At my last job, we ran all of our tests, linting/formatting, etc. through pre-commit hooks. It was apparently a historical relic of a time when five developers wanted to push directly to master without having to configure CI.
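For anyone who hasn't set one up: a pre-commit hook is just an executable script at `.git/hooks/pre-commit` that git runs before recording a commit, aborting if it exits non-zero. A minimal sketch of the kind of hook described, with `true` as a placeholder where a real linter and test runner would go:

```shell
#!/bin/sh
# Sketch of a git pre-commit hook: save as .git/hooks/pre-commit, chmod +x.
# The `true` commands are placeholders for your project's actual tools.
set -e

run_check() {
    # $1 is a human-readable label; the remaining args are the command.
    # A non-zero exit aborts the commit.
    label="$1"; shift
    echo "pre-commit: running $label"
    "$@" || { echo "pre-commit: $label failed, aborting commit" >&2; exit 1; }
}

run_check "format/lint" true   # e.g. ruff check .  or  gofmt -l .
run_check "tests"       true   # e.g. pytest -q     or  go test ./...
echo "pre-commit: all checks passed"
```

One caveat with this pattern, as the comment implies: hooks only run on machines where they're installed, which is why most teams eventually move the same checks into CI.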