fabrice_d's comments | Hacker News

WhatsApp was using libsignal (the C version) when I worked on the KaiOS integration in 2017/2018.


This is a cool project, and rendering Simon's blog will likely become the #1 goal of AI-produced "web browsers".

But we're very far from a browser here, so it's not that impressive. Writing a basic renderer is really not that hard, which matches the effort and low LoC of this experiment. It's similar to the countless graphical toolkits that have been written since the 70s.

I know Servo has a "no AI contribution" policy, but I would still be more impressed by a Servo fork that gets missing APIs implemented by an AI, with WPT tests passing etc. It's a lot less marketable, I guess. Go add something like WebTransport, for instance: it's a recent API, so the spec should be properly written and there's a good test suite.
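To make that concrete, here is a rough sketch of the client-facing surface a WebTransport implementation has to expose (the endpoint URL and data are just illustrative); wiring this up inside the engine and getting the corresponding WPT suite green would be a much more convincing demo:

    // Minimal WebTransport client usage, per the spec (runs in a module
    // context in a browser that ships the API; the endpoint is made up).
    const transport = new WebTransport("https://example.com:4433/wt");
    await transport.ready;

    // Reliable, ordered data goes over WHATWG streams.
    const stream = await transport.createBidirectionalStream();
    const writer = stream.writable.getWriter();
    await writer.write(new TextEncoder().encode("hello"));
    await writer.close();

    // Unreliable datagrams are part of the same API surface.
    const dgWriter = transport.datagrams.writable.getWriter();
    await dgWriter.write(new Uint8Array([1, 2, 3]));

    transport.close({ closeCode: 0, reason: "done" });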


100% agree this isn't a browser. It's better than the previous attempt, but it fails to render even basic HTML websites correctly and crashes constantly.

The fact that it compiles is better than the Cursor dude's result. "It compiles" is a very low bar for working software.


I think what I wanted to demonstrate here was less "you can build a browser with an agent" and more how bullshit Cursor's initial claim was: that "hundreds of agents" somehow managed to build something good, autonomously. It's more a continuation of a blog post I wrote a few days ago (https://emsh.cat/cursor-implied-success-without-evidence/) than a standalone proof that agents can build browsers.

Unfortunately, that context is only implicit; I don't actually mention it in the blog post, which I probably should have. That's my fault.


Some people looked into some of the authors' backgrounds:

https://gameboat.bearblog.dev/the-resonant-computing-manifes...

There are actually things to be very skeptical about.


Some people are also opposed because of the negative externalities when building and running AI systems (environmental consequences, intellectual property theft), even if they understand that agentic coding "works". This is a valid position.


I have not seen those arguments in the context of what I would consider anti-hype. But in any case: There are certainly issues attached to usage of AI more generally.


Mozilla Corp. has > $1B in the bank. Their pockets are not empty.


I have an idea:

Take that $1B, invest it sensibly, and use the income to fund the development of an open, free browser in perpetuity.

Nah, that’ll never happen.


They already do that. They invest the endowment, and right now it exists as a firewall to cover operations in the event that their search licensing revenue becomes unstable. The annual growth of the endowment is not nothing, but it's also nowhere near enough to fund their browser development on a yearly basis.
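As a rough back-of-the-envelope illustration: a ~$1B endowment drawn at 4-5% yields on the order of $40-50M a year, while Mozilla's software development spend runs to a few hundred million dollars a year per their public financial statements, so investment income alone would only cover a small fraction of it.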

And while I don't love the dabbling in ad tech, and I do think there's been confusion around the user interface, I think by far the most unfair smear Mozilla has suffered is the claim that they haven't been focusing on the core browser. Every year they produce major internal engine overhauls that deliver important gains, from WebGPU to SpiderMonkey, to their full overhaul of the mobile browser, to the Fission/site isolation work.

Since their Quantum project, which overhauled the browser practically from top to bottom in 2017 and delivered the stability and performance gains everyone was asking for, they've done the equivalent of one "quantum unit" of work on other areas of the browser in a pretty much unbroken chain from then until now. It just doesn't get mentioned in headlines.


How are they using that money to stay alive, though?


The billion laughs attack has well known solutions (basically, don't recurse too deep). It's not a reason to not implement DOCTYPE support.


> The billion laughs attack has well known solutions (basically, don't recurse too deep)

You can then recurse wide. In theory it's best to allow only X placeables of up to Y size.

The point is, Doctype/External entities do a similar thing to XSLT/XSD (replacing elements with other elements), but in a positively ancient way.
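As a toy illustration of those two guards (this is not a real XML parser; the names and limits are made up), charge every entity expansion against both a depth limit and a shared size budget, so that deep and wide bombs are both caught:

    // Depth limit handles "don't recurse too deep"; the byte budget handles
    // "only X placeables of up to Y size" (wide expansion).
    const MAX_DEPTH = 16;
    const MAX_EXPANDED_BYTES = 1_000_000;

    function expand(
      name: string,
      entities: Map<string, string>,
      depth = 0,
      budget = { bytes: 0 },
    ): string {
      if (depth > MAX_DEPTH) throw new Error("entity nesting too deep");
      const value = entities.get(name);
      if (value === undefined) throw new Error(`unknown entity &${name};`);

      // Recursively replace nested &name; references, charging every
      // expansion against the shared budget.
      return value.replace(/&(\w+);/g, (_, inner) => {
        const expanded = expand(inner, entities, depth + 1, budget);
        budget.bytes += expanded.length;
        if (budget.bytes > MAX_EXPANDED_BYTES) {
          throw new Error("entity expansion exceeds size budget");
        }
        return expanded;
      });
    }

    // Classic billion-laughs shape: each level multiplies the previous one.
    const entities = new Map([
      ["lol", "lol"],
      ["lol1", "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;"],
      ["lol2", "&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;"],
    ]);
    expand("lol2", entities); // fine here; a full bomb trips one of the limits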


> I can get the source of the kernel, including all drivers, running on my android phone with a few clicks and build a custom ROM.

No, most drivers are closed source and you can only extract binary blobs for them. They run as daemons that communicate through Binder IPC - Android basically turned the Linux kernel into a microkernel.


Indeed, since Android 8 (Project Treble) hardware support lives in userspace HALs, and the framework talks to them over Binder IPC instead of calling into drivers directly.

Traditional Linux drivers are considered legacy in Android.

https://source.android.com/docs/core/architecture/hal


Yep, my Pixel 5 with stock OS and Pixel 6 with Graphene are hacked via WiFi blobs, which are not updated and cannot be patched.


Most of Firefox's user base has always been on Windows, not Linux. What OS do you think the "techies" who promoted Firefox to replace IE in the first place were running?


Sure, maybe 20 years ago. But back then much of Linux's user base was also still on Windows, because desktop Linux hadn't really become usable yet. I think nowadays Firefox's market share is a lot higher on e.g. Ubuntu (where it's the default) than it is on Windows (where Edge is the default).


According to Mozilla's own data at https://data.firefox.com/dashboard/hardware Windows (7, 10 & 11) make up 84% of their user base.


That's not the claim being made and you know it


The only thing that wasn't usable on Linux 20 years ago was games.


No, the phone variant of HarmonyOS runs on top of a Linux kernel.


I believe that's slowly being phased out in favor of native-only apps with their multi-device HarmonyOS NEXT (mobile/PC). Once the major apps move over, the last bits of Linux will be excised.


Nope, the new version removed it.

https://en.wikipedia.org/wiki/HarmonyOS_NEXT


Indeed, I did not see that!


It is absolutely Google's security issue if they use an open source project with that license:

https://git.ffmpeg.org/gitweb/ffmpeg.git/blob/HEAD:/COPYING....

and then expect volunteers to provide them fixes.


Google never asked a volunteer for a fix.

This is part of Google’s standard disclosure policy: it gets disclosed within 90 days starting from confirmation+contact.

If ffmpeg didn’t want to fix it, they could’ve just let the CVE get opened.


It's not just Google who could be affected by this.

> and then expect volunteers to provide them fixes.

Expect volunteers to provide everyone using the software with fixes.


For a bug in the LucasArts Smush codec? Why didn't you verify it was MP4/H.264 first?


MP4 is an envelope (container) format, so a file could be both an MP4 and contain an obscure codec.

