Hacker News | new | past | comments | ask | show | jobs | submit | saadn92's comments | login

What probably happened here is depressingly common in early-stage startups. Someone finds an open source tool that does 80% of what they need, forks it, strips the branding, and then ships it. Nobody thinks about the license because the company is in "move fast" mode and there's no process for it yet.

Sure, the Apache 2.0 license allows this, but the mistake is that when someone asked "is this based on SimStudio?" the answer was "we built it ourselves" instead of "yes, it's a fork, here's what we added." It went from a fixable attribution oversight to a credibility problem. You can retroactively add a LICENSE file, but you can't take the lie back.


This is why I hope AI will destroy the entire SaaS market. These people should be selling used cars or life insurance and have no access to finance.

I'm sure all the vibe coded slop that eats the SaaS market will be better about license attribution.

I hear what you're saying but I still think I'd prefer LLM-orchestrated software (using third-party dependencies) to closed source SaaS made by developers who can't even adhere to software licenses. It's a level of Junior Dev Energy that's unforgivable.

I wonder how much of that is posturing (less charitably, lying to outsiders) and how much is the organization effectively lying to itself.

Both are indictments of today's ambient startup culture, and I'm not sure which is ultimately worse.


Based on DeepDelve's recent follow-up article, I would assume the former. https://deepdelver.substack.com/p/delve-fake-compliance-as-a...

Wow that's bad. Unsure if this is an outlier or typical for YC companies.

sadly this behaviour has become largely encouraged by YC

this is nuts

Every layer of the organization tells a rosier version of the truth up the chain of command. The programmer might tell the PM that they're running Apache software with the serials filed off, but by the time that filters up the chain to the CEO / Board, the product is "fully proprietary and 100% built in-house."

Many companies do not want to deal with open source and want support and custom features. I personally think you’re underestimating the value these companies bring.

Wait, the thing we're talking about is Apache 2.0?

Yes, so it explicitly requires source attribution

The xAI piece is the one that stands out to me. $258B for a lab that's burning $1.46B/quarter against $430M revenue, valued almost entirely on a merger anchor from four months ago.

xAI's valuation comes from an internal transfer of Elon's. Elon has stated it's worth 258B and that's the only data point to go by.

It's absolutely bonkers and wrong, but it's unlikely to rise to the level of actual misrepresentation.


Most of the numbers seem arbitrary to me. Why is Starship worth $170bn? Based on what analysis? Why 38x earnings for Starlink? Maybe the AI has some justification, but the way it is presented just looks flimsy.

As I wrote in the piece, I'm extremely skeptical that xAI should be valued as if it is a frontier lab.

But as you say, going back to the xAI + SpaceX merger, analysts consistently seem to value it as if it is, so I predict the public will too, at IPO time.


I assume "extremely skeptical" is you being generous. Is there anybody other than Elon who says xAI/Grok are SOTA? The only thing anybody says about it is that it's only good for porn, but local models do porn too, so xAI has no moat or edge at all as far as I can see.

There is actually a real bull case for xAI (that I don't endorse), e.g. from people who think that chips & compute are the main determiner of model quality. xAI may plausibly soon have the biggest training apparatus of anyone.

I think talent is more important than compute, as I wrote in my Jan 2026 predictions that Anthropic would end up on top this year: https://futuresearch.ai/blog/forecasting-top-ai-lab-2026/


If you don't spend any time comparing models to the point where you don't know about benchmarks, why do you care where people think the line for SOTA is?

The benchmark game is wholly gamed, but the proof is in the pudding. I know people using Anthropic, OpenAI, and Gemini. Chinese models locally. But who uses Grok for anything but porn? Whatever the benchmarks might say, Grok is just trash in practice. They spent too much time teaching it to be edgy and not enough time teaching it to code.

Ok, sounds like you're already mentally set

Sounds like you've got nothing to say for Grok besides meaningless benchmarks.

> I assume "extremely skeptical" is you being generous

I'm not sure that's the case. Every value in this forecast is absurd; I actually think the author is sincere in their feeling that they are being extremely skeptical.


It’s absolutely ludicrous that xAI is thrown into the mix at that valuation. They’re not even a player in AI other than providing Grok slop for twitter.

For $380B you can get both AT&T and Verizon and you pay ~1.55x the revenue. Why pay 38x for Starlink?

What do you mean $380B? This "fair market value" forecast also includes $147B for starlink enterprise and $75B for starlink direct-to-cell. So almost $600B all in.

Starlink is less than 10 years away from providing full data services to cellphones globally, allowing them to offer a better service at a cheaper rate than AT&T and Verizon. Not to mention more coverage.

Also, AT&T and Verizon customers don't love their provider. They despise them. I walked into a Verizon store last year and was outright scammed by the staff member into their insurance plan after explicitly declining it (they just added it to the bill anyway).

These legacy companies will be irrelevant.


> Starlink is less than 10 years away from providing full data services to cellphones globally

10 years is a long time, considering the global reactions to America under Trump, and also Musk's tight coupling to Trump, and even in the US given how many bridges Musk burned.

If Starlink was properly spun off and independent of Musk this would be much less of an issue, but now? Now the rest of the world is likely to treat it like the US treats Huawei.


Even if you think those are standard numbers and you're banking on growth, or whatever, I don't see any way anyone rational (or even a semi-rational AI bull) could convince themselves xAI isn't an absolute garbage company.

The hooks performance finding matches what I've seen. I run multiple Claude Code agents in parallel on a remote VM and the first thing I learned was that anything blocking in the agent's critical path kills throughput. Even a few hundred milliseconds per hook call compounds fast when you have agents making dozens of tool calls per minute.
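As a concrete (and entirely hypothetical) sketch of keeping hooks off the agent's critical path: have the hook hand its event to a detached child process and return immediately, so each call pays only process-spawn overhead rather than the full logging latency. The stdin-JSON convention, field names, and log path here are illustrative assumptions, not any tool's documented interface.

```python
import subprocess
import sys
import time

def handle_hook(raw_event: str, log_path: str) -> float:
    """Append the event via a detached child; return seconds spent blocking."""
    start = time.monotonic()
    subprocess.Popen(
        [sys.executable, "-c",
         # The slow part (here just a file append, but it could be an HTTP
         # post to a collector) runs in the child, not in the hook itself.
         "import sys; open(sys.argv[1], 'a').write(sys.argv[2] + '\\n')",
         log_path, raw_event],
        start_new_session=True,  # child is not tied to the hook's lifetime
    )
    # Deliberately no .wait(): the hook exits and the agent moves on.
    return time.monotonic() - start
```

With this shape, the blocking cost per tool call drops to a few milliseconds of spawn overhead regardless of how slow the actual logging sink is.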

The docker-based service pattern is smart too. I went a different direction for my own setup -- tmux sessions with worktree isolation per agent, which keeps things lightweight but means I have zero observability into what each agent is actually doing beyond tailing logs manually. This solves that gap in a way that doesn't add overhead to the agent itself, which is the right tradeoff.

Curious about one thing -- how does the dashboard handle the case where a sub-agent spawns its own sub-agents? Does it track the full tree or just one level deep?


Sub-agent trees are fully tracked by the dashboard. When an agent is spawned, it always has a parent agent id - claude is sending this in the hooks payload. When you mouse over an agent in the dashboard, it shows what agent spawned it. There currently isn't a tree view of agents in the UI, but it would be easy to add. The data is all there.

[Edit] When claude spawns sub-agents, they inherit the parent's hooks. So all sub-agents activity gets logged by default.
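Since every spawned agent carries a parent id, the full tree falls out of the logged events directly. A minimal sketch, assuming hypothetical payload keys `agent_id` and `parent_agent_id` (the real hook payload may name these differently):

```python
from collections import defaultdict

def build_agent_tree(events):
    """Map each parent agent id to the ids of the agents it spawned."""
    children = defaultdict(list)
    for event in events:
        parent = event.get("parent_agent_id")
        if parent is not None:
            children[parent].append(event["agent_id"])
    return children

def render(children, root, depth=0):
    """Yield one indented line per agent, walking the whole spawn tree."""
    yield "  " * depth + root
    for child in children.get(root, []):
        yield from render(children, child, depth + 1)
```

This handles arbitrary nesting (sub-agents of sub-agents) with no extra bookkeeping, which is why a tree view would be easy to bolt onto the dashboard.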


The stability question is real but I think it's framed wrong. The issue isn't whether an agent can write correct code in a single session -- they can, and pretty reliably now. It's whether there's a human with enough understanding of the codebase to debug it when something breaks at 2am.

I run parallel coding agents on my own projects daily. The code they produce is fine. What worries me is the "just ship it" energy where nobody on the team deeply understands what got built. That's not an AI problem, it's been a problem with outsourced codebases forever. AI just makes it faster to accumulate code nobody fully groks.

Cloudflare probably has the engineering depth to maintain this regardless of how it was built. A lot of other teams don't.


The feature flag names alone are more revealing than the code. KAIROS, the anti-distillation flags, the model codenames: those are product strategy decisions that competitors can now plan around. You can refactor code in a week. You can't un-leak a roadmap.

this is the right question to ask

The closing point is the one that should get more attention — every single one of these apps could be replaced by a web page. And from a product standpoint, there's really only one reason to ship a native app when your content is just press releases and weather alerts: you want access to APIs the browser won't give you. Background location, biometrics, device identity, boot triggers — none of that is available through a browser, and that's, unfortunately, by design.

> And from a product standpoint, there's really only one reason to ship a native app

I have worked on several applications where the product managers wanted to make our web app something that could be installed through the app store, because that's how users expect to get apps.

I know people who don't even type search queries or URLs into a browser, they just tell the phone what they want to find and open whatever shows up in a search result.

I've tried pushing back against the native app argument and won once because customers actually reported liking that we had a website instead of an app, and other times because deploying an app through the stores was more work than anyone had time to take on. Otherwise, we would've been deploying through app stores for sure.

Marketing gets plenty of data from google analytics or whatever platform they're using anyway, so neither they nor product managers actually care about the data from native APIs.


> I know people who don't even type search queries or URLs into a browser, they just tell the phone what they want to find and open whatever shows up in a search result.

I don't know exactly what you are talking about here, but if I wanted to find a local restaurant I definitely just type 'Miguels' into the browser, and then it searches Google for 'Miguels' automatically. It knows my location, so the first result is going to be their website and phone number, and I can load the website for the menu or just call if I know what my family wants.

However, even then I'd rather have an app for them where I can enter the items I want to order. I've noticed apps tend to be more responsive. Maybe it's just the coding paradigm: apps tend to load all of the content up front, so the actions I take are just changing what is displayed, whereas on a website every 'action' triggers an API call that requires a response before it moves on to the next page. This makes a big difference when my connection isn't great.

I also find it easier to swap between active apps than between tabs of a browser. If I want to check on the status of the order or whatnot, it's easier to swap to the app and have that refresh than it is to click the 'tab' button of the browser and find the correct tab the order was placed in.


>I definitely just type 'Miguels' into the browser

So you open safari first. I think that’s a step further than what’s being described.

For many people it's just "hey Siri, book a table at Miguel's." And then they click whatever app, web result, or native OS feature pops up.

It’s a chaotic crapshoot that I have never been able to stomach personally. For others, that’s just called using their phone.


This is pretty much what I meant. Even if the browser is what comes up, the fact is the user isn't interacting with the browser as a browser. They're interacting with their phone through an app (voice => search). They don't understand website URLs, or what search engines are doing. That makes it harder for them to return (engagement metrics!) than tapping the icon on their phone that opens up directly to the app.

It's also why so many websites try to offer push notifications or, back when it seemed like Apple wouldn't cripple it, the "add to home screen" or whatever CTA was that would set the website as an icon. Anything that gives the user a fast path back to engaging without having to deal with interacting with the browser itself is what PMs and marketing want.


I want to be really clear that I'm not trying to argue with your experience, just to understand it... but:

> However even then, I'd rather have an app for them where I can enter in the items I want to order.

Really? You want to download a different app for every restaurant you order from?


I recently took a trip to Hawaii, particularly Maui. I've never been before, but I hit the weather lottery and got to experience the Kona low system that raked the island with copious rain. Anyway... What I found, in the areas that we were staying, was that there were a lot of food trucks that looked to have great coffee, poke, food in general. But with the weather it was unclear if the food truck was 1) accessible 2) open due to other weather issues.

What I found was that none of these food trucks (and even some relatively nice restaurants) had operational web pages. One had a domain but, for some reason, they posted the menu to <some-random-name>.azurewebsites.net. And that page just... didn't work. The rest got even worse. Most had listings on Google Maps, but the hours and availability did not reflect reality. We went to a coffee food truck that wasn't there, even though the day before they had commented on a review. Then we had others that had a link to an Instagram page, of which some claimed to house their "current" hours and location, yet we tried going to two of them and both weren't open.

It's 2026. If you have your business on Google Maps you should be able to update hours and availability quickly. But beyond that it costs almost nothing to host a simple availability page on a representative domain. And even if you don't want to deal with the responsibility of a domain, there are multitudes of other options. Now, I'm guessing that this isn't the norm for most of these vendors, at least I hope. But we weren't there during the worst of the rain, we hit the second low that went through in our timing. So while it was a significant amount of rain and some of the more treacherous switchback roads were closed - I'm talking about food trucks that were off of very accessible main roads & highways. My SO reached out via IG to about a half dozen vendors and only one responded 2 days later.

Clearly tech and simple services like availability and location that is easy to update is not accessible (or known) for these types of businesses. But it definitely does not require an app (nor should it). Having these simple "status" sites would have made the friction the weather caused significantly less than what we experienced. I don't want an app when I'm trying to find out if a restaurant is open. I, personally, don't find apps any more responsive. In many cases a lot of web sites are littered with far too many components that are not required. I've been doing a lot with Datastar and FastAPI recently and some of the tools I've thrown together (that handle hundreds of MB of data in-browser) load instantly and are blazing fast. So much so that I've been asked how I "did that". It's amazing how fast a web app can be when it's not pulling data from 27 different sources and loading who knows what for JS.


This is exactly what big businesses do, and governments think whatever businesses do is good practice: force everyone to use an app.

The UK's Companies House (required for anyone who is a director or has a shareholding of more than 15%, etc.) requires a Onegov ID now. They offer a web version with a scan of a photo ID (passport or driving licence). I tried it. I thought one of those would work. Apparently the web version needs to ask security questions (reasonable, as the app used NFC to read your passport), but despite the vast amount of information the government has on me (to issue those IDs, to collect taxes, etc.) it cannot do that, so I had to either use the app or go in person to a post office in a different town.

Similarly, I got an email from Ocado saying that if I used the app I could change orders without checking out again. If I do it on the website I have to check out again. Why?


This morning, I was checking TSA wait times. Guess what: they want you to install their app to get the wait time. [1]

[1](https://www.dhs.gov/check-wait-times)


In their defence, there is a fairly nice website too, not sure why it needs to have its own logo though

https://bwt.cbp.gov/


That's border wait times, not what the OP was looking for.

TSA used to have an API [0]. But, of course, while the deprecation page still lives on, the service does not.

Edit: Also looks like TSAWaitTimes.com [1] is an option, I'm sure their API works. o_O

[0] https://www.dhs.gov/archive/mytsa-api-documentation [1] https://www.tsawaittimes.com/


Quoth the Doctorow: "An app is just a website wrapped up in enough IP to make it a felony to modify it."

> there's really only one reason to ship a native app when your content is just press releases and weather alerts

The flip side is there are (presumably) real people downloading these apps. Maybe it’s a kid interested in a career in the FBI, or the family of someone who works there. Idk. (I thought it would contain a secure tip line or something, but the app seems to be a social-media front end first.)

I am willing to entertain that there is a legitimate reason for an app to exist without conceding that it should be a pile of trash.


> Background location, biometrics, device identity, boot triggers — none of that is available through a browser

Most browsers do in fact offer that level of granularity, especially for PWA usecases [0].

And from an indicators perspective, having certain capabilities turned off can make it easier to identify and de-anonymize individuals.

[0] - https://pwascore.com/


Fingerprint? Yeah. Deanonymize? No.

There's a considerable difference. And doing whatever one can to mitigate the former shouldn't be discouraged by falsely equating it with the latter.


Nope. Actual deanonymization.

You will of course need a couple additional threat intel feeds because what is provided via the browser itself isn't enough, but third party data vendors along with threat intel vendors are fairly cost effective.

I've seen a couple actual live demos of deanonymization a couple years ago - it's a capability that has existed in the Offensive Security space for a couple years now. And the company I'm alluding to is already live in Japan and Israel.


Not sure if this is still a thing, but some apps used to embed libraries very much tracking everything you do on the phone, including your live location and that was then sold to third parties.

Can't trust the government to make a usable webpage [0].

[0] https://realfood.gov/


> access to APIs

It's mostly static data. Just publish it under a URL that won't change. Then we could actually cache and archive it.


The APIs in question are client-side iOS and Android APIs. Most of these apps are just WebViews wrapped in spyware. It doesn't matter that most of the content is static or already uses browser-native APIs for functionality like forms; gating access to this information behind a surveillance device is the point.

Very well said.

I've been thinking about this for a bit and there are a variety of reasons why it can be appealing for PMs to push for apps over webpages:

- No search competition, when you search on duckduckgo or google for the page a competitor can bid to show up, won't happen with an app.

- Notifications, this is a big one. We live in the attention economy and apps are more likely to slide into push notifications - with ads - than webpages.

- Some users have a mental model that more easily maps to "this app is my go-to for this task" and struggle with webpages. That's a psychological and incentive issue. Apple supports PWAs, but just barely, and doesn't like them because they don't partake in the 100-billion-dollar-revenue, 30% payment processing extortion.

- More intrusive access and "better" targeted advertisement.

- Once an icon is on the home screen somewhere, chances are some users are going to use it because they notice the icon and would not have done so if it were just a tab inside the browser. The attention economy strikes again.

- Companies _love_ to build a relationship with customers. It's usually a very one sided and jealous relationship where getting the user to install an app is perceived as a step in that direction.

- Users are more willing to create accounts for apps than webpages (citation needed, this is just a gut feeling)

- On mainstream iOS and Android it's much harder to block ads in apps than it is in the browser.

I'm sure there are other reasons, but those alone explain why we see them so often.


The thruster fix is the part that gets me. They sent a command that would either revive thrusters dead since 2004 or cause a catastrophic explosion, then waited 46 hours for the round trip with zero ability to intervene. That's a production deployment with no rollback, no monitoring dashboard, and a 23-hour latency on your logs. They nailed it.

I'd argue that once you have a very well defined requirements doc that mostly kicks humans out of the picture, as well as a patient boss who doesn't want anything ASAP or "tomorrow morning first thing", engineering is not that hard, and is almost... enjoyable.

> ASAP or "Tomorrow morning first thing"

like in "fast pacing environments" with "flat hierarchies" and "agile mindset"? :-D


As ASAP As Possible

As asap as possible or you can say rip in peace to yourself

A well defined doc evolves over time. It gets sharper with real-world scenarios, incidents, and experiments. Before Voyager 1, we didn't have that kind of experience. You can't predict everything upfront.

> Theory only takes you so far


I’d argue that you must not be working on interesting problems if you think that “engineering is not that hard”

I think their point is that the challenge becomes more enjoyable than tedious.

Most of us are working on problems that are boring and tedious, not hard.

That's the point. I haven't, but I would like to, and I realize that the so-called "engineering" problems I work on are NOT real engineering.

OK I was probably wrong about that "not hard" though.


Would sending Voyager have had a real, definite deadline?

Visiting this many planets was only possible due to a very rare alignment. It's a once-in-a-century event. That's why we sent two probes, not just one.

Absolutely. You could wait decades or centuries for a useful planetary alignment.

Not really. Jupiter alone is good enough. Its huge mass accounts for almost all of the gain you get from any such slingshot. Launch windows from Jupiter to anywhere occur every 12 years. Voyager's alignment was captivating, but realistically if it hadn't happened, we would have just done separate Jupiter-Uranus and Jupiter-Neptune missions instead.

That was ballsy! But, sadly, it was a temporary hack. Both Voyagers have degrading, unfixable thrusters. The rubber diaphragms in the hydrazine fuel tanks are degrading, shedding silicon dioxide (i.e. sand) microparticles into the thruster fuel. These particles are gradually clogging the thruster nozzles and reducing their thrust. Eventually, thrust will decline to the point that they could fire the thrusters all day long and still not impart enough momentum to point the probes at Earth. Once that happens, we'll lose contact with the probes.

They'd switched away from the primary thrusters in 2004 due to this degradation. Now the backups are so degraded that the primary thrusters are better again in comparison.

Thruster clogging will kill the Voyagers in about five years if nothing else gets them first. The least degraded thruster nozzles are down to 2% of their original diameter --- 0.035mm of free-flow diameter remaining.

The Voyagers will probably celebrate their 50th anniversary, but not much beyond that. :-(

Kind of ignominious to be done in not by the inexorable decline of radioactivity but by an everyday materials science error of the sort we make on earth all the time. In the 1970s, we knew how to make hydrazine-compatible rubber. We just didn't use it for the Voyagers.


They're still functioning after ten times the Voyagers' projected lifetime; I can't call that an error.

Upvoted. Sooner or later the Grim Reaper comes for us all.

Based on the communication fix, they also didn't have a simulator, or tests, or complete source code, on a custom instruction set that wasn't well documented, so they had to reverse engineer how it worked. https://www.youtube.com/watch?v=YcUycQoz0zg&t=2366s

The "we chose a validated market so we could focus on learning distribution" framing is really smart and honestly something I wish I'd done earlier. I'm building a workspace/collaboration tool and spent way too long on the product side before realizing the distribution muscle is a completely separate skill that doesn't develop on its own.

One thing I've noticed from the sheets-as-backend pattern: the reason people keep reaching for spreadsheets is that the editing UX is instant and familiar. Any tool that wants to replace this for non-technical users needs to nail that "just click a cell and type" experience. That's the hard part, honestly.


This. That's the only reason I'm on there too. I completely avoid the news feed, but it does help when you have people reaching out and you need a job.
