
In British English, rather than "very not bad", you might say "not bad at all", which is higher praise than just "not bad".


Well, if you must…


American Southerners say “ain't half bad”.


Loads of GPUs with Vulkan support use TBDR. The Adreno GPU in the Steam Frame's Snapdragon SoC, for one.

There is also a Vulkan driver for the M1/M2 GPU already, used in Asahi Linux. There's nothing special about Apple's GPU that makes writing a Vulkan driver for it especially hard. Apple chooses to provide a Metal driver only for its own reasons, but they're not really technical.


No. For best performance, you have to batch your calls and memory access patterns with TBDR in mind. Dropping a PC game's render pipeline (indie, AA/AAA), specifically optimized for Nvidia/AMD/Intel, onto a TBDR GPU is going to give poor performance. That's the context of this discussion. Round pegs DO fit into square holes, you just have to make sure the hole is bigger than would normally be necessary. ;)
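Concretely, "with TBDR in mind" mostly means keeping intermediate attachments on-chip instead of round-tripping them through DRAM. A rough sketch in Vulkan terms (standard Vulkan structures; the surrounding device/format setup is assumed, and the exact wins depend on the GPU):

```cpp
// Sketch of a TBDR-friendly depth attachment: cleared on tile load, discarded
// on tile store, so on a tiler it can live entirely in on-chip tile memory.
#include <vulkan/vulkan.h>

VkAttachmentDescription MakeTransientDepthAttachment(VkFormat depthFormat) {
    VkAttachmentDescription depth{};
    depth.format         = depthFormat;
    depth.samples        = VK_SAMPLE_COUNT_1_BIT;
    depth.loadOp         = VK_ATTACHMENT_LOAD_OP_CLEAR;      // don't load tile contents from DRAM
    depth.storeOp        = VK_ATTACHMENT_STORE_OP_DONT_CARE; // don't write tile contents back to DRAM
    depth.stencilLoadOp  = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
    depth.stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
    depth.initialLayout  = VK_IMAGE_LAYOUT_UNDEFINED;
    depth.finalLayout    = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;
    return depth;
}

// The matching image would be created with VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT
// and backed by memory with VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT where available,
// so the driver may never allocate real DRAM for it at all.
```

A desktop-optimized pipeline that stores every attachment out and samples it back in later throws that advantage away, which is where the "poor performance" comes from.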

The Steam Frame is more for streaming PCVR than running existing PCVR games natively.


I already run stuff that was very much not made with TBDR in mind, on TBDR GPU architectures, and the performance is perfectly fine.

For sure, you can squeeze a few percentage points more out if you optimize for TBDR, and there are some edge cases where it's possible to make TBDR architectures behave pathologically, but it's not that big a deal in the real world.

I also disagree that the Steam Frame is primarily for streaming. If it was, why put such a powerful SoC in it, or use it as the prototype device for doing x86 emulation with Fex?

The Adreno 750 is a 3 TFlops GPU that _should be_ substantially faster than a PS4 or a Steam Deck. It'll play plenty of low-end PCVR games pretty well on its own, if Fex's x86 emulation is performant, which it is.

Like the Meta Quest 2, it's a crossover device that a lot of people will just use standalone.


It's not just Chrome, it's everything, though apps that have a large number of dependencies (including Chrome and the myriad Electron apps most of us use these days) are for sure more noticeably affected.

My M4 MacBook Pro loads a wide range of apps - including many that have no Chromium code in them at all - noticeably slower than exactly the same apps on a 4-year-old Ryzen laptop running Linux, despite being approximately twice as fast at running single-threaded code, having a faster SSD, and maybe 5x the memory bandwidth.

Once they're loaded they're fine, so it's not a big deal day to day, but if you swap between systems regularly it does give the impression that macOS is slow and lumbering.

Disabling Gatekeeper helps but even then it's still slower. Is it APFS, the macOS I/O system, the dynamic linker, the virtual memory system, or something else? I dunno. One of these days it'll bother me enough to run some tests.


That's the story the proponents of the AI bubble would have you believe, because they are sucking in all available funding for their own enrichment, or because they've been huffing their own hype gas for so long that they have no brain cells of their own left.

It is, however, complete nonsense, and the next few years of failed promises on AGI will eventually bring people to their senses, if a market crash and sustained economic depression doesn't do that first. It would be funny if it wasn't going to cause suffering for millions of people, whether we succeed at AGI or not.

I _like_ AI, I find LLMs and many other aspects of AI useful, and I am optimistic for the long-term prospects of AI. But the rush to try and get to AGI is completely out of control at this point, and the fallout from when the bubble pops will set AI, and our societies, back a long time.


Having bought a few Matter devices now, I have discovered that, in practice, Matter is just as full of vendor extensions as ZigBee, and the quirks ecosystem that allows for interoperability despite vendor extensions is far less mature than with ZigBee.

Maybe this will get better with time, but we're half a decade into the Matter era and the end-user experience is _worse_ than with ZigBee. In that sense, Matter has failed.



It's really just a performance tradeoff, and where your acceptable performance level is.

Ollama, for example, will let you run any available model on just about any hardware. But using the CPU alone is _much_ slower than running it on any reasonable GPU, and obviously CPU performance varies massively too.

You can even run models that are bigger than available RAM, but performance will be terrible.

The ideal case is to have a fast GPU and run a model that fits entirely within the GPU's memory. In these cases you might measure the model's processing speed in tens of tokens per second.

As conditions get less ideal, the processing speed drops. On CPU alone with a model that fits in RAM, you'd be maxing out in the low single-digit tokens per second, and on lower-performance hardware you start talking about seconds per token instead. If the model does not fit in RAM, then the measurement is minutes per token.

For most people, their minimum acceptable performance level is in the double digit tokens per second range, which is why people optimize for that with high-end GPUs with as much memory as possible, and choose models that fit inside the GPU's RAM. But in theory you can run large models on a potato, if you're prepared to wait until next week for an answer.
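A rough back-of-envelope way to see where those numbers come from (my assumption, not anything Ollama-specific): single-stream decoding is usually memory-bandwidth bound, so tokens per second is roughly memory bandwidth divided by the bytes touched per token, which is about the model size for a dense model. The figures below are illustrative, not measured:

```cpp
// Back-of-envelope estimate: tokens/s ~= memory bandwidth / model size.
// All numbers are illustrative assumptions for a ~8 GB quantized model.
#include <cstdio>

int main() {
    const double model_gb = 8.0;

    struct { const char* name; double bandwidth_gbs; } targets[] = {
        {"fast discrete GPU (~900 GB/s)",   900.0},
        {"desktop CPU + DDR5 (~80 GB/s)",    80.0},
        {"model spilling to SSD (~3 GB/s)",   3.0},
    };

    for (const auto& t : targets) {
        std::printf("%-34s ~%.1f tokens/s\n", t.name, t.bandwidth_gbs / model_gb);
    }
    return 0;
}
```

That works out to roughly a hundred tokens/s on a fast GPU, around ten on a decent CPU, and well under one once you start paging off disk, which matches the tiers described above.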


+1

> It's really just a performance tradeoff, and where your acceptable performance level is.

I am old enough to remember developers respecting the economics of running the software they create.

Running Ollama locally, paired occasionally with Ollama Cloud when required, is a nice option if you use it enough. I have twice signed up and paid $20/month for Ollama Cloud, love the service, but use it so rarely (because local models are so often sufficient) that I cancelled both times.

If Ollama ever implements a pay as you go API for Ollama Cloud, then I will be a long term customer. I like the business model of OpenRouter but I enjoy using Ollama Cloud more.

I am probably in the minority, but I wish subscription plans would go away and Claude Code, gemini-cli, codex, etc. would all be only available pay as you go, with ‘anti dumping’ laws applied to running unsustainable businesses.

I don’t mean to pick on OpenAI, but I think the way they fund their operations actually helps threaten the long term viability of our economy. Our government making the big all-in bet on AI dominance seems crazy to me.


Which is as-designed. Vulkan (and DX12, and Metal) is a much more low-level API, precisely because that's what professional 3D engine developers asked for.

Closer to the hardware, more control, fewer workarounds because the driver is doing something "clever" hidden behind the scenes. The tradeoff is greater complexity.

Mere mortals are supposed to use a game engine, or a scene graph library (e.g. VulkanSceneGraph), or stick with OpenGL for now.

The long-term future for OpenGL is to be implemented on top of Vulkan (specifically the Mesa Zink driver that the blog post author is the main developer of).


> Closer to the hardware

To what hardware? Ancient desktop GPUs vs modern desktop GPUs? Ancient smartphones? Modern smartphones? Consoles? Vulkan is an abstraction of a huge set of diverging hardware architectures.

And a pretty bad one, in my opinion. If you need to make an abstraction due to fundamentally different hardware, then at least make an abstraction that isn't terribly overengineered for little to no gain.


Closer to AMD and mobile hardware. We got abominations like monolithic pipelines and layout transitions thanks to the former, and render passes thanks to the latter. Luckily all of these are gone or on their way out.
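For example, with dynamic rendering (core since Vulkan 1.3) you skip VkRenderPass/VkFramebuffer objects entirely and describe attachments at record time. Rough sketch, assuming the command buffer, image view, and extent already exist:

```cpp
// Clear-only color pass using dynamic rendering instead of a VkRenderPass.
// Requires Vulkan 1.3 (or VK_KHR_dynamic_rendering).
#include <vulkan/vulkan.h>

void RecordSimpleColorPass(VkCommandBuffer cmd, VkImageView colorView, VkExtent2D extent) {
    VkRenderingAttachmentInfo color{};
    color.sType            = VK_STRUCTURE_TYPE_RENDERING_ATTACHMENT_INFO;
    color.imageView        = colorView;
    color.imageLayout      = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
    color.loadOp           = VK_ATTACHMENT_LOAD_OP_CLEAR;
    color.storeOp          = VK_ATTACHMENT_STORE_OP_STORE;
    color.clearValue.color = {{0.0f, 0.0f, 0.0f, 1.0f}};

    VkRenderingInfo info{};
    info.sType                = VK_STRUCTURE_TYPE_RENDERING_INFO;
    info.renderArea           = {{0, 0}, extent};
    info.layerCount           = 1;
    info.colorAttachmentCount = 1;
    info.pColorAttachments    = &color;

    vkCmdBeginRendering(cmd, &info);
    // ... draw calls would be recorded here ...
    vkCmdEndRendering(cmd);
}
```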


Not really, other than on desktops, because as we all know mobile hardware gets the drivers it gets on release date, and that's it.

Hence, on Android, even with Google nowadays enforcing Vulkan, if you want a less painful experience with driver bugs, you're better off sticking with OpenGL ES, outside of Pixel and Samsung phones.


Trying to fit both mobile and desktop in the same API was just a mistake. Even applications that target both desktop and mobile end up having significantly different render paths despite using the same API.

I fully expect it to be split into Vulkan ES sooner or later.


100%. Metal is actually self-described as a high-level graphics library for this very reason. I've never actually used it on non-Apple GPUs, but the abstractions for vendor support are there. And they are definitely abstract. There is no real getting-your-hands-dirty exposure of the underlying hardware.


Metal does have to support AMD and Intel GPUs for another year after all, and had to support NVIDIA for a hot minute too.


Wow, what a brain fart. So much of Metal has improved since the M-series that I just forgot it was even the same framework. Even the stack is different now that we have metal-cpp and Swift/C++ interop with unified memory access.


> fewer workarounds because the driver is doing something "clever" hidden behind the scenes.

I would be very surprised if current Vulkan drivers are any different in this regard, and if yes then probably only because Vulkan isn't as popular as D3D for PC games.

Vulkan is in a weird place that it promised a low-level explicit API close to the hardware, but then still doesn't really match any concrete GPU architecture and it still needs to abstract over very different GPU architectures.

At the very least there should have been different APIs for desktop and mobile GPUs (not that the GL vs GLES split was great, but at least that way the requirements for mobile GPUs don't hold back the desktop API).

And then there's the issue that also ruined OpenGL: the vendor extension mess.


> specifically the Mesa Zink driver

https://docs.mesa3d.org/drivers/zink.html


> the majority of the population laid off from their office jobs will have plenty of work to do in the fields and food processing plants if they want to eat.

Indeed. The destination is agricultural serfdom, with current billionaires or their descendants as Lords of the manor.


Use the LGPL licensing option?


Have fun dodging that minefield when distributing your app.


Is it really so complicated? Genuinely curious.


Centrino was Intel's brand for laptops that paired their CPUs with their wireless chipsets, and those CPUs were all P6-derived (Pentium M, Core Duo).

Possibly you meant Celeron?

Also the Pentium 4 uarch (Netburst) is nothing like any of the Atoms (big for the time out-of-order core vs. a small in-order core).

