I have mixed feelings about the "Do X in N lines of code" genre. I applaud people taking the time to boil something down to its very essence, and implement just that, but I feel like the tone is always, "and the full thing is lame because it's so big," which seems off to me.
I do prototyping for a living and ... I definitely do "X in 1/100th the lines of code" regularly.
It's exciting, liberating... but it's a lie. What I do is get the CORE of the idea so that I fully understand it. It's really nice because I get a LOT of mileage very quickly... but it's also brittle, very brittle.
My experience is that most projects are 100x bigger than the idea they embody because the "real world" is damn messy. There are always radically more edge cases than the main idea accounts for. At some point you have to draw a line, but the further away you draw it, the more code you need.
So... you are right to have mixed feelings: the tiny version is only valuable to get the point across, but it's not something one can actually use in production.
SO was built to disrupt the marriage of Google and Experts Exchange. EE was using dark patterns to sucker unsuspecting users into paying for access to a crappy Q&A service. SO wildly succeeded, but almost 20 years later the world is very different.
This is food for thought, but horses were a commodity; people are very much not interchangeable with each other. The BLS tracks ~1,000 different occupations. Each will fall to AI at a slightly different rate, and within each, there will be variations as well. But this doesn't mean it won't still subjectively happen "fast".
Whether people are interchangeable with each other isn't the point. The point is whether AI is interchangeable with jobs currently done by humans. Unless and until AI training requires 1000 different domain experts, the current projection is that at some point AI will be interchangeable with all kinds of humans...
There's an HDR war brewing on TikTok and other social apps. A fraction of posts that use HDR are just massively brighter than the rest; the whole video shines like a flashlight. The apps are eventually going to have to detect HDR abuse.
I know how bad the support for HDR is on computers (particularly Windows and cheap monitors), so I avoid consuming HDR content on them.
But I just purchased a new iPhone 17 Pro, and I was very surprised at how these HDR videos on social media still look like shit on apps like Instagram.
And even worse, the HDR video I shoot with my iPhone looks like shit even when playing it back on the same phone! After a few trials I had to just turn it off in the Camera app.
I wonder if it fundamentally only really makes sense for film, video games, etc. where a person will actually tune the range per scene. Plus, only when played on half decent monitors that don’t just squash BT.2020 so they can say HDR on the brochure.
The HDR implementation in Windows 11 is fine. And it's not even that bad in terms of titles and content officially supporting HDR. Most of the idea that it's "bad" comes from the "cheap monitor" part, not Windows.
I have zero issues and only an exceptional image on W11 with a PG32UQX.
Also, if you get flashbanged by SDR content on Windows 11, there is a slider in the HDR settings that lets you turn down the brightness of SDR content. I didn't know about this at first and had HDR disabled because of it for a long time.
The only time I shoot HDR on anything is because I plan on crushing the shadows/raising highlights after the fact. S-curves all the way. Get all the dynamic range you can and then dial in the look. Otherwise it just looks like a flat, washed-out mess most of the time.
You would think, but not in a way that matters. Everyone still compresses their mixes. People try to get around normalization algorithms by clever hacks. The dynamics still suffer, and bad mixes still clip. So no, I don’t think streaming services fixed the loudness wars.
What's the history on the end to the loudness war? Do streaming services renormalize super compressed music to be quieter than the peaks of higher dynamic range music?
Yes. Basically the streaming services started using a decent model of perceived loudness, and normalise tracks to roughly the same perceived level. I seem to remember that Apple (the computer company, not the music company) was involved as well, but I need to re-read the history here. Their music service and mp3 players were popular back in the day.
So all music producers got out of compressing their music was clipping, and not extra loudness when played back.
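For the curious, the core of that normalization step is small: measure the track's perceived loudness, then apply one static gain so it lands at the platform's target. A rough Python sketch, using plain RMS as a crude stand-in for a real loudness meter (the services actually use ITU-R BS.1770-style LUFS measurements, and the -14 dB target here is just an illustrative value in the ballpark of what some platforms use):

    import numpy as np

    def normalize_loudness(samples, target_db=-14.0):
        # `samples` is float audio in [-1, 1]. RMS stands in here for a real
        # BS.1770 loudness measurement; the target value is illustrative.
        rms = np.sqrt(np.mean(samples ** 2))
        current_db = 20 * np.log10(rms + 1e-12)
        gain = 10 ** ((target_db - current_db) / 20)  # one static gain for the whole track
        return np.clip(samples * gain, -1.0, 1.0)

The upshot is that a brickwalled master just gets turned down by a fixed amount, so the extra compression buys nothing but the clipping.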
It hasn't really changed much in the mastering process; they are still doing the same old compression. Maybe not to the same extremes, but dynamic range is still usually terrible. They do it at a higher LUFS target than the streaming platforms normalize to, because each streaming platform has a different limit and could change it at any time, so it's better to be on the safe side. Also the fact that the majority of music listening doesn't happen on good speakers/environment.
> Also the fact that the majority of music listening doesn't happen on good speakers/environment.
Exactly this. I usually do not want high-dynamic-range audio because that means it's either too quiet sometimes or loud enough to annoy the neighbors at other times, or both.
I hope they end up removing HDR from videos with HDR text.
Recording video in sunlight etc. is OK; it can be sort of "normalized brightness" or something. But HDR text on top is always terrible.
What if they did HDR for audio? So an audio file can tell your speakers to output at 300% of the normal max volume, even more than what compression can do.
HDR audio already exists in the form of 24-bit and 32-bit floating point audio (vs. the previous 16-bit CD standard). Volumes are still mapped to the same levels because anything else doesn't make sense, just as SDR content can be mapped to HDR levels to achieve the same levels of brightness (but not the same dynamic range, as with audio).
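A small numerical illustration of that last point (hypothetical values): float audio can represent samples well above full scale, but playback still maps everything to the same output ceiling, so the headroom helps editing, not loudness:

    import numpy as np

    # 32-bit float samples; 2.0 is about +6 dB over full scale and exists only as editing headroom.
    float_samples = np.array([0.5, 1.0, 2.0, 4.0], dtype=np.float32)

    # Rendering to 16-bit PCM for output: anything above 1.0 just hits the ceiling.
    pcm16 = (np.clip(float_samples, -1.0, 1.0) * 32767).astype(np.int16)
    print(pcm16)  # [16383 32767 32767 32767]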
Isn't that just by having generally low volume levels? I'm being pedantic, but audio already supports a kind of HDR like that. That said, I wonder if the "volume normalisation" tech that Spotify definitely has, and presumably other media apps/players do too, can be abused into thinking a song is really quiet.
This is one of the reasons I don't like HDR support "by default".
HDR is meant to be so much more intense, it should really be limited to things like immersive full-screen long-form-ish content. It's for movies, TV shows, etc.
It's not what I want for non-immersive videos you scroll through, ads, etc. I'd be happy if it were disabled by the OS whenever not in full screen mode. Unless you're building a video editor or something.
I would love to know who the hell thought adding "brighter than white" range to HDR was a good idea. Or, even worse, who the hell at Apple thought implementing that should happen by way of locking UI to the standard range. Even if you have a properly mastered HDR video (or image), and you've got your brightness set to where it doesn't hurt to look at, it still makes all the UI surrounding that image look grey. If I'm only supposed to watch HDR in fullscreen, where there's no surrounding UI, then maybe you should tone-map to SDR until I fullscreen the damn video?
Yup, totally agreed. I said the same thing in another comment -- HDR should be reserved only for full-screen stuff where you want to be immersed in it, like movies and TV shows.
Unless you're using a video editor or something, everything should just be SDR when it's within a user interface.
Sounds like they need something akin to audio volume normalization but for video. You can go bright, but only in moderation, otherwise your whole video gets dimmed down until the average is reasonable.
Every phone has it, it’s called “power save mode” on most devices and provides additional advantages like preventing apps from doing too much stuff in the background. =)
HDR has a slight purpose, but the way it was rolled out was so disrespectful that I just want it permanently gone everywhere. Even the rare times it's used in a non-abusive way, it can hurt your eyes or make things display weirdly.
I agree that HDR has been mostly misused, but on the other hand the difference between the sRGB color space and the wider-gamut rendering enabled by a movie's Rec. 2020 encoding is extremely obvious to me (sRGB has a very bad red primary, which forces the desaturation of colors in the yellow-orange-red-purple sector, where the human eye is most sensitive to hues and where there are many objects with saturated colors, e.g. flowers, fruits, clothes, whose colors are distorted by sRGB).
Because I want the Rec. 2020 and 10-bit color encoding, I must also choose HDR, as these features are usually only available together, even though I do not get any serious advantage from HDR, and HDR-encoded movies can usually be viewed well only in a room with no light or with dim light; otherwise most of them are too dark.
That's true on the web, as well; HDR images on web pages have this problem.
It's not obvious whether there's any automated way to reliably detect the difference between "use of HDR" and "abuse of HDR". But you could probably catch the most egregious cases, like "every single pixel in the video has brightness above 80%".
Funnily enough HDR already has to detect this problem, because most HDR monitors literally do not have the power circuitry or cooling to deliver a complete white screen at maximum brightness.
My idea is: for each frame, grayscale the image, then count what percentage of the screen is above the standard white level. If more than 20% of the image is >SDR white level, then tone-map the whole video to the SDR white point.
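A minimal sketch of that heuristic, under the assumption (hypothetical, for illustration) that each frame arrives as a linear-light float RGB array where 1.0 is SDR reference white and anything above it is HDR headroom; the exact representation depends on the decode pipeline:

    import numpy as np

    SDR_WHITE = 1.0        # assumed scale: 1.0 = SDR reference white
    ABUSE_FRACTION = 0.20  # the 20% threshold suggested above

    def should_tone_map(frame_rgb):
        # Grayscale via Rec. 709 luma weights.
        luminance = (0.2126 * frame_rgb[..., 0]
                     + 0.7152 * frame_rgb[..., 1]
                     + 0.0722 * frame_rgb[..., 2])
        over_white = np.mean(luminance > SDR_WHITE)  # fraction of pixels brighter than SDR white
        return over_white > ABUSE_FRACTION

    # If enough frames trip the check, tone-map the whole video down to the SDR white point.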
> Wouldn't that just hurt the person posting it since I'd skip over a bright video?
Sure, in the same way that advertising should never work since people would just skip over a banner ad. In an ideal world, everyone would uniformly go "nope"; in our world, it's very much analogous to the https://en.wikipedia.org/wiki/Loudness_war .
sounds like every fad that came before it, where it was overused by all of the people copying it with no understanding of what it is or why. remember all of the HDR still images that pushed everything to look post-apocalyptic? remember all of the people pushing washed-out videos because they didn't know how to grade images recorded in log and it became a "thing"?
eventually, it'll wear itself out just like every other overuse of the new
HDR videos on social media look terrible because the UI isn't in HDR while the video is. So you have this insanely bright video that more or less ignores your brightness settings, and then dim icons on top of it that almost look incomplete or fuzzy because of their surroundings. It looks bizarre and terrible.
Imo the real solution is for luminance to scale appropriately even in the HDR range, kinda like how gain-map HDR images can, scaled both with regard to the display's capabilities and the user's/app's intent.
It's good if you have black text on a white background, since your app can have good contrast without searing your eyes. People started switching to dark themes to avoid having their eyeballs seared by monitors with the brightness turned up.
For things filmed with HDR in mind it's a benefit. Bummer things always get taken to the extreme.
I only use light themes for the most part, and HDR videos look insane and out of place. If you scroll past an HDR video on Instagram, you have an eyeball-searing section of your screen because your eyes aren't adjusted to looking at that brightness, and then once you scroll it off the screen and have no HDR content, everything looks dim and muted because you just got flashbanged.
That does not sound enjoyable and seems like HDR abuse.
The "normal" video should aim to be moderately bright on average, the extra peak brightness is good for contrast in dark scenes.
Other comments comparing it to the loudness war are apt. Some music streaming services are enforcing loudness normalization to solve this: any brickwalled song gets played a bit quieter when the app is acting as a radio.
Instagram could enforce this too, but it seems unlikely unless it actually affects engagement.
Not sure how it works on Android, but it's such amateur UX on Apple's part.
99.9% of people expect HDR content to get capped / tone-mapped to their display's brightness setting.
That way, HDR content is just magically better. I think this is already how HDR works on non-HDR displays?
For the 0.1% of people who want something different, it should be a toggle.
Unfortunately I think this is either (A) amateur enshittification like with their keyboards 10 years ago, or (B) Apple specifically likes how it works since it forces you to see their "XDR tech" even though it's a horrible experience day to day.
99% of people have no clue what "HDR" and "tone-mapping" mean, but yes, they are probably weirded out by some videos being randomly way brighter than everything else.
Whether this exact approach catches on or not, it's turning the corner from "teaching AIs to develop using tools that were designed for humans" to "inventing new tools and techniques that are designed specifically for AI use". This makes sense because AIs are not human; they have different strengths and limitations.
Absolutely. The limitations of AI (namely statelessness) require us to rethink our interfaces. It seems like there's going to be a new discipline of "UX for agents" or maybe even just Agent Experience or AX.
Software that has great AX will become significantly more useful in the same way that good UX has been critical.
The other day my friend ranted to me about how he hates his company's system for applying for PTO. Since he said he only uses it to apply for PTO, I was wondering if it really deserves that much wrath.
For me it's just incomplete. We used to have SuccessFactors, and although the UI was less fancy, I have the feeling it was more complete and thorough.
After so many years with Workday I still cannot sync my calendar to Outlook 365, so I have to add the entries manually. A problem solved a million years ago in SuccessFactors.
To capture the individual transistors on a modern CPU, you'd need an image tens of terabytes in size, and it'd have to be captured by an electron microscope, not an optical image. And even that wouldn't let you see all the layers. Some of the very old CPUs, I'm not sure what resolution would be required.
"I created the Alphanum Algorithm to solve this problem. The Alphanum Algorithm sorts strings containing a mix of letters and numbers. Given strings of mixed characters and numbers, it sorts the numbers in value order, while sorting the non-numbers in ASCII order. The end result is a natural sorting order."
There are many older instances of that, such as "versionsort" from various Linux tools and libraries. I think this has likely been independently recreated several times, with various subtle differences.
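For reference, the core of the idea fits in a few lines. A minimal Python sketch of this kind of natural sort (not the exact Alphanum or versionsort code, just the common pattern they all share):

    import re

    def alphanum_key(s):
        # Split into alternating text and digit chunks; digit chunks compare
        # by numeric value, text chunks compare lexically.
        return [int(chunk) if chunk.isdigit() else chunk
                for chunk in re.split(r'(\d+)', s)]

    print(sorted(["z10.txt", "z2.txt", "z1.txt"], key=alphanum_key))
    # ['z1.txt', 'z2.txt', 'z10.txt'] instead of ASCII order ['z1.txt', 'z10.txt', 'z2.txt']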
LLMs can shorten, and maybe tend to if you just say "summarize this", but you can trivially ask them to do more. I asked for a summary of Jenson's post and then to offer a reflection. GPT-5 said, "It's similar to the Plato's Cave analogy: humans see shadows (the input text) and infer deeper reality (context, intent), while LLMs either just recite shadows (shorten) or imagine creatures behind them that aren't there (hallucinate). The “hallucination” behavior is like adding “ghosts”—false constructs that feel real but aren't grounded."
That ain't shortening because none of that was in his post.
That reflection seems totally off to me: fluent, and flavored with elements of the article, but also not really what the article is about and a pretty weird/tortured use of the elements of the allegory of the cave, like it doesn't seem anything like Plato's Cave to me. Ironically demonstrates the actual main gist of the article if you ask me.
But maybe you meant that you think that summary is good and not textually similar to that post so demonstrating something more sophisticated than "shortening".
Yes, GPT-5's response above was not shortening because there was nothing in the OP about Plato's Cave. I agree that Plato's cave analogy was confusing here. Here's a better one from GPT-5, which is deeply ironic:
A New Yorker book review often does the opposite of mere shortening. The reviewer:
* Places the book in a broader cultural, historical, or intellectual context.
* Brings in other works—sometimes reviewing two or three books together.
* Builds a thesis that connects them, so the review becomes a commentary on a whole idea-space, not just the book’s pages.
This is exactly the kind of externalized, integrative thinking Jenson says LLMs lack. The New Yorker style uses the book as a jumping-off point for an argument; an LLM “shortening” is more like reading only the blurbs and rephrasing them. In Jenson’s framing, a human summary—like a rich, multi-book New Yorker review—operates on multiple layers: it compresses, but also expands meaning by bringing in outside information and weaving a narrative. The LLM’s output is more like a stripped-down plot synopsis—it can sound polished, but it isn’t about anything beyond what’s already in the text.
Essentially, Jenson's complaint is "When I ask an LLM to 'summarize' it interprets that differently from how I think of the word 'summarize' and I shouldn't have to give it more than a one-word prompt because it should infer what I'm asking for."
I think exactly this. When someone is given the task of writing a book review for the New Yorker there is a (probably unstated) agreement that they won't simply summarize the contents, but weave it into an essay in the way the LLM proposed. You could definitely get a similar result from an LLM by giving a more suitable and verbose prompt such as "review these 3 titles together, talk about their shared themes and concepts in a way that is relevant to the contemporary audience" etc etc.
I don't think the Plato's Cave analogy is confusing, I think it's completely wrong. It's "not in the article" in the sense that it is literally not conceptually what the article is about and it's also not really what Plato's Cave is about either, just taking superficial bits of it and slotting things into it, making it doubly wrong.
This is a garbage summary and it really highlights the thrust of the article. My summary: “Human beings have a strong tendency to anthropomorphise and that leads to us granting LLMs human like qualities incorrectly.”
You can see where the LLM has gone wrong. It's hooked into "summary" and has therefore given excessive emphasis to that part of the article. The Plato's Cave analogy is stupid; and what on earth is it going on about with ghosts?
It’s not shortening, sure, it’s dribbling nonsense.