This is a cool project, and rendering Simon's blog will likely become the #1 goal of AI-produced "web browsers".
But we're very far from a browser here, so that's not that impressive.
Writing a basic renderer is really not that hard, and that matches the effort and low LoC of that experiment. It's similar to the countless graphical toolkits that have been written since the 70s.
I know Servo has a "no AI contribution" policy, but I would still be more impressed by a Servo fork that gets missing APIs implemented by an AI, with WPT tests passing etc. It's a lot less marketable, I guess. Go add something like WebTransport, for instance: it's a recent API, so the spec should be properly written and there's a good test suite.
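For reference, this is roughly what the page-facing side of WebTransport looks like per the W3C spec. It's only a minimal sketch (the URL and the echo-style server are made up), just to show the kind of surface such a fork would need to implement and that WPT exercises:

    // Minimal sketch of the client-side WebTransport API (per the W3C spec).
    // The URL is hypothetical; a real endpoint must speak HTTP/3 + WebTransport.
    async function wtDemo(): Promise<void> {
      const transport = new WebTransport("https://example.com:4433/echo");
      await transport.ready; // resolves once the session is established

      // Open a bidirectional stream and write a few bytes.
      const stream = await transport.createBidirectionalStream();
      const writer = stream.writable.getWriter();
      await writer.write(new TextEncoder().encode("hello"));
      await writer.close();

      // Read the reply (assumes the server echoes what it receives).
      const reader = stream.readable.getReader();
      const { value, done } = await reader.read();
      if (!done && value) console.log(new TextDecoder().decode(value));

      transport.close();
    }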
I think what I wanted to demonstrate here was less "You can build a browser with an agent", and more how bullshit Cursor's initial claim was: that "hundreds of agents" somehow managed to build something good, autonomously. It's more of a continuation of a blog post I wrote some days ago (https://emsh.cat/cursor-implied-success-without-evidence/) than a standalone proof of "agents can build browsers".
Unfortunately, this context is kind of implicit: I don't actually mention it in the blog post, which I probably should have done. That's my fault.
Some people are also opposed because of the negative externalities of building and running AI systems (environmental consequences, intellectual property theft), even if they understand that agentic coding "works". This is a valid position.
I have not seen those arguments in the context of what I would consider anti-hype. But in any case: There are certainly issues attached to usage of AI more generally.
They already do that. They invest the endowment, and right now it exists as a firewall to cover operations in the event that their search licensing revenue becomes unstable. The annual growth of the endowment is not nothing, but it's also nowhere near enough to fund their browser development on a yearly basis.
And while I don't love the dabbling in ad tech, and I do think there's been confusion around the user interface, I think by far the most unfair smear Mozilla has suffered is the claim that they haven't been focusing on the core browser. Every year they're producing major internal engine overhauls that deliver important gains to everything from WebGPU to SpiderMonkey, to their full overhaul of the mobile browser, to Fission/site isolation work.
Since their Quantum project, which overhauled the browser practically from top to bottom in 2017 and delivered the stability and performance gains that everyone was asking for, they've done the equivalent of one "quantum unit" of work on other areas of the browser in a pretty much unbroken chain from then until now. It just doesn't get mentioned in headlines.
> I can get the source of the kernel, including all drivers, running on my android phone with a few clicks and build a custom ROM.
No, most drivers are closed source and you can only extract binary blobs for them. They run as daemons that communicate through Binder IPC - Android basically turned the Linux kernel into a microkernel.
Most of Firefox's user base has always been on Windows, not Linux. What OS do you think the "techies" who promoted Firefox to replace IE in the first place were running?
Sure, maybe 20 years ago. But back then even Linux's user base was largely on Windows, because desktop Linux hadn't really become usable yet. I think nowadays Firefox's market share is a lot higher on e.g. Ubuntu (where it's the default) than it is on Windows (where Edge is the default).
I believe that's being phased out slowly to be native-app-only with their multi-device HarmonyOS NEXT (mobile/PC). Once the major apps move over, the last bits of Linux will be excised.