Quoting TFA: "It’s worth noting that none of this was really legal; the law technically stated that TikTok shouldn’t have been allowed to exist for much of this year. Everyone just looked the other way while Trump and his cronies repeatedly ignored deadlines and hammered away at the transfer."
Google TV showing ads on the home screen convinced me to buy an Apple TV. Had to go back to a set-top box - to use the same apps I have built into the TV - just because Apple won't spam me with this shit.
It's a good tip; I may end up doing that. I'm hoping that Apple has better standards for third-party streaming apps (just yesterday I made an Ask HN post about how terrible they are).
I have them and like them. I don't wear them constantly, but on days when I'm doing something interesting, they help me document much more than I otherwise would.
To TFA's point: "Bars" are relative and relatively meaningless; [SS]RSRP, RSRQ, and SINR are your real numeric signal strength / quality measurements.
Not sure about Apple, but on Android, individual carriers can set the number-to-bars thresholds. Two otherwise-identical signals could be represented as a different number of bars depending on your particular carrier: https://source.android.com/docs/core/connect/signal-strength
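If you want to see those raw numbers yourself on Android instead of the carrier-tuned bars, a minimal sketch along these lines works (not from the linked docs; assumes API 26+, ACCESS_FINE_LOCATION granted, and an active LTE registration):

```kotlin
// Sketch only: read the raw LTE metrics next to the carrier-defined "bars" level.
// Assumes API 26+, ACCESS_FINE_LOCATION granted, and an active LTE registration.
import android.content.Context
import android.telephony.CellInfoLte
import android.telephony.TelephonyManager

fun logLteSignal(context: Context) {
    val tm = context.getSystemService(Context.TELEPHONY_SERVICE) as TelephonyManager
    tm.allCellInfo.orEmpty()
        .filterIsInstance<CellInfoLte>()
        .firstOrNull { it.isRegistered }        // the cell the phone is actually camped on
        ?.cellSignalStrength
        ?.let { s ->
            // Units/availability of rssnr vary a bit across API levels and radios.
            println("RSRP=${s.rsrp} dBm  RSRQ=${s.rsrq} dB  RSSNR=${s.rssnr}")
            println("'bars' level (carrier-tunable, 0..4): ${s.level}")
        }
}
```

On 5G NR the analogous fields live on CellSignalStrengthNr (ssRsrp, ssRsrq, ssSinr), and the level value is still the carrier-influenced bar count.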
I'm a big fan of the MWCBTY keyboard format, it's especially efficient when you have to type a lot of G's.
Snark aside, I think it's laziness and the shotgun approach. The author writes some rough thoughts down, has an AI "polish" them and generate an image, and posts an article. Shares it on HN. Do it enough, especially on a slow Sunday morning, and you'll get some engagement despite the detractors like us in the comments. Eventually you've got some readers.
I’m not going crazy, right? Nearly nobody aside from professional writers used em dashes prior to 2022. And the whole bolded topic intro, colon, short 1-2 sentence explanation pattern seems way more like a product of GPT formatting than how organic humans would structure it.
So much writing on the internet seems derivative nowadays (because it is, thanks to AI). I’d rather not read it though, it’s boring and feels like a samey waste of time. I do question how much of this is a feedback loop from people reading LLM text all the time, and subconsciously copying the structure and formatting in their own writing, but that’s probably way too optimistic.
I made a conscious effort to switch from hyphens to em dashes in the 2010s and now find myself undoing that effort because of these things, so I try not to instantly assume "AI". But look long enough and you do notice a "sameness": excellent grammar, fondness for bulleted lists, telltale phrases like "That's not ___, it's ___."
And a certain vacuousness. TFA is over 16000 words and I'm not really sure there's a single core point.
But to this frequency? (Note: I tried to find a study comparing em dash frequency between GPT and em-dash-prolific human authors, and failed.)
The article has, on average, about one em dash per paragraph. And “paragraph” is generous given they’re 2-3 sentences in this article.
I read a lot, and I don’t recall any authors I’ve personally read using an em dash so frequently. There would be like 3 per page in the average book if human writers used them like GPT does.
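The per-paragraph figure is easy enough to sanity-check; here's a rough sketch (Kotlin, assuming the article is saved locally as plain text with blank lines between paragraphs; the file name is hypothetical):

```kotlin
// Rough sketch: em dashes per paragraph in a plain-text dump of an article.
// Assumes paragraphs are separated by blank lines; "article.txt" is a hypothetical path.
import java.io.File

fun main() {
    val text = File("article.txt").readText()
    val paragraphs = text.split(Regex("\\n\\s*\\n")).filter { it.isNotBlank() }
    val emDashes = text.count { it == '\u2014' }   // U+2014 EM DASH
    val perParagraph = if (paragraphs.isEmpty()) 0.0 else emDashes.toDouble() / paragraphs.size
    println("paragraphs=${paragraphs.size}  em dashes=$emDashes  per paragraph=%.2f".format(perParagraph))
}
```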
Mostly agree; however, this kind of quirk could arise entirely from post-training, where the preferences/habits of a tiny number of people (relative to the main training corpus) can have outsize influence on the style of the model's output. See also the "delve" phenomenon.
The entire blog is full of characteristic LLM style: the faux structure on top of a rambling delivery, the unnecessary and forced bullet-point comparisons with equal numbers of bullets, the retreading of the same concept in different words section after section.
The rest of the blog has even more obvious AI output, such as the “recursive protocol” posts and writing about reality and consciousness. This is the classic output you get (especially use of ‘recursive’) when you try to get ChatGPT to write something that feels profound.
I agree. Good core idea, but it feels quite stretched.
Most of the examples used to justify the creation-vs-consumption split can also be explained by low scale vs. high scale (cost sensitivity matters more at high scale) or by portability.
It's not black and white. My SO doomscrolls facebook on her laptop for hours daily. Certain parts of creative workflows are better on phones (or other devices) than laptops - the article acknowledges this in the "hybrid workflows" section.
IMO the important thing to be mindful of is your creation-vs-consumption balance. We tend to overindex on consumption.
Blender, Godot, Audacity, Firefox, Git, Linux, ... I could name 100 projects that could not be described that way. Most couldn't. There are only a few projects that I can think of that are really just wrappers (even though they add a lot of value), e.g.:
* HandBrake, which wraps ffmpeg (it does more, but that's the main thing most people use it for)
>There are only a few projects that I can think of that are really just wrappers (even though they add a lot of value), e.g.:
It's more common than you give it credit for.
gparted, cups, 7zip, baobab, all the *commander file tools, almost all CD/DVD/BD burning software, nearly every media player that touches ffmpeg (VLC, MPlayer), almost every VM GUI, almost every firewall GUI, time machine, duplicati, sabnzbd... the list goes on forever (a sketch of what such a wrapper boils down to is below).
Linux fits too if you're talking about the OS rather than the kernel.
If you want to talk at a lower level, then Python is really just a wrapper for lots of other shit; similarly, PyTorch/CUDA are wrappers for a bunch of ugly C.
Pretty languages are wrappers for ugly languages, ugly languages are wrappers for assembly, and assembly is a wrapper for machine code.
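To make the "wrapper" point concrete: the core of a lot of those ffmpeg front-ends really is just building an argument list and shelling out. A minimal sketch (hypothetical file names and encoder options; the real tools add presets, progress parsing, job queues, etc. on top):

```kotlin
// Sketch of what the core of a GUI "ffmpeg wrapper" boils down to:
// build an argument list and run the real ffmpeg binary.
// File names and encoder options here are hypothetical.
import java.io.File

fun transcode(input: File, output: File, crf: Int = 23) {
    val cmd = listOf(
        "ffmpeg", "-y",
        "-i", input.absolutePath,
        "-c:v", "libx264", "-crf", crf.toString(),
        "-c:a", "aac",
        output.absolutePath,
    )
    val exit = ProcessBuilder(cmd)
        .inheritIO()            // let ffmpeg's own progress/log output stream through
        .start()
        .waitFor()
    check(exit == 0) { "ffmpeg exited with code $exit" }
}
```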
I agree that libraries are a thing, all problems in CS can be solved with a layer of indirection, etc. I also have no issue with AI-gen projects if they're good.
In this case, they posted a README full of nonsense diagrams, didn't fix the broken characters in their UX, and breezed over the complexity of the dependencies (ESP-CSI is very cool but requires specific hardware, with two ESP devices and external antennas). Feels sloppy.