
Just because any of us could have doesn't mean I think anyone should have; the former frankly isn't a revelation: the latter is what we should be doing a better job of broadcasting.

My pet peeve is services that go out of their way to include a text/plain alternative message part but send something useless, such as the message without the key link. One time I seriously ran into a service that just sent a short one-sentence note along the lines of "this is a plain text email" as the plain text part. If you don't want to support plain text, maybe just don't send the alternative part?
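
For reference, a useful alternative part is not hard to produce; here is a minimal sketch using Python's standard email library (the addresses and tracking link are placeholders):

    from email.message import EmailMessage

    msg = EmailMessage()
    msg["Subject"] = "Your order has shipped"
    msg["From"] = "noreply@example.com"  # placeholder addresses
    msg["To"] = "user@example.com"

    # The text/plain part carries the same essential content,
    # including the key link -- not a stub saying "use an HTML client".
    msg.set_content(
        "Your order has shipped.\n"
        "Track it here: https://example.com/track/12345\n"
    )

    # add_alternative() attaches the HTML version alongside it,
    # upgrading the message to multipart/alternative.
    msg.add_alternative(
        "<p>Your order has shipped. "
        '<a href="https://example.com/track/12345">Track it</a>.</p>',
        subtype="html",
    )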

I find the ones that try to be cute the most frustrating, because those appear in the new-message notifications, so I can't just delete them straight from the notification.

"We'd love to share this exciting announcement, but you'll need a different email app."

Although I guess the argument will be that email clients should use AI to summarise the HTML into a plain text summary.


> Although I guess the argument will be that email clients should use AI to summarise the HTML into a plain text summary.

Or you could pass it through ~5,000 lines of C [1] and you will have it done in milliseconds even on hardware that would be old enough to drink.

[1]: https://codemadness.org/webdump.html


This might be me being old, but I still don't understand why HTML emails aren't the exception. If you want to do a fancy newsletter trying to sell me crap, I can see why you'd need the images, the CSS, and the HTML. In most other cases, I don't really get the point.

What comes to mind:

- You are sending a receipt and want table alignment for items

- You want to put a logo of your company so that readers can recognize who the email is from

- You want to make the unsubscribe link smaller and the "open the thing I'm notifying you about" link bigger, so that people know which one is which without reading the URL

- You want to add a header


Mostly those seem to be more about you as a sender wanting to do some branding or manipulation of the reader. I don't really see how it benefits the receiver, who should be the main concern of all communication.

You don't see how a table can communicate things more clearly and benefit both the reader and sender?

I had one who sent me the booking details of another client in the plaintext part. I reported it to them nearly a year ago and they didn't reply, so screw anonymity, it was Avis.

If you're in the EU or California, you should probably email the local data privacy official's office about that.

text/plain != plaintext

This is about media types, not encryption.


Do you think I was talking about encryption, or is it not more likely I meant text/plain given the context?

I'm sorry, I did not properly read and comprehend your original post. I thought you were saying "they put sensitive details in the text/plain part", implying that those details somehow only belonged in the text/html part. What you actually said was "they put somebody else's sensitive details in the text/plain part".

Then report it to your government authority in charge of GDPR enforcement. They will suddenly care very much about it.

So I'm wondering a bit here: I've seen an implementation where outgoing emails only have HTML versions, but as part of the sending process the HTML is run through a Lynx browser process with the -dump option to get the plain text, which is then included as the text/plain part of the email.

Is there actual value to this? E.g., is the output of Lynx's text dump better for plain-text email clients than whatever they'd display for HTML emails?
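
For context, the pipeline described above boils down to something like this (a sketch, assuming lynx is installed and on the PATH):

    import subprocess

    def html_to_text(html: str) -> str:
        # -stdin reads the document from standard input;
        # -dump renders it as formatted plain text on stdout.
        result = subprocess.run(
            ["lynx", "-dump", "-stdin"],
            input=html,
            capture_output=True,
            text=True,
            check=True,
        )
        return result.stdout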


I've personally converted HTML to plain text with BeautifulSoup in Python, and used that as the plain-text version. I did not get complaints, but I honestly don't know who actually reads the non-HTML version.
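
Something along these lines, i.e. a sketch of the approach rather than the exact code:

    from bs4 import BeautifulSoup  # pip install beautifulsoup4

    def html_to_plaintext(html: str) -> str:
        soup = BeautifulSoup(html, "html.parser")
        # get_text() would otherwise include script/style contents verbatim.
        for tag in soup(["script", "style"]):
            tag.decompose()
        return soup.get_text(separator="\n", strip=True)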

Some (old?) spam filters may be triggered by HTML-only emails.

Epic Games tells me I don't support HTML-formatted email. Many emails are just the HTML version with the HTML tags removed, leaving behind a bunch of ridiculously long image and link URLs and little to no text telling you what they are. You might have better luck with a partial HTML implementation (pull out the title, alt, src, and href attributes) than with disabling it entirely.
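
A sketch of what that partial implementation could look like (BeautifulSoup again; the function name and output format are made up):

    from bs4 import BeautifulSoup

    def partial_render(html: str) -> str:
        soup = BeautifulSoup(html, "html.parser")
        # Replace images with their alt/title text instead of raw URLs.
        for img in soup.find_all("img"):
            label = img.get("alt") or img.get("title") or ""
            img.replace_with(f"[image: {label}]" if label else "")
        # Label links with their visible text, keeping the href.
        for a in soup.find_all("a", href=True):
            text = a.get_text(strip=True)
            a.replace_with(f"{text} <{a['href']}>" if text else a["href"])
        return soup.get_text(separator="\n", strip=True)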

The best is when some services put the same payload in the text/plain part as in the text/html part. Yes, the HTML source, as text/plain.

My favorite is when the plain text version is a bunch of CSS and HTML.

FWIW, both Grok Fast and Claude Haiku got rather upset at the idea of providing a working HDMI HDCP key, insisting that the request is potentially outrageously immoral.

But like, I asked Claude Haiku "can you look at the code for ffmpeg’s implementation of MJPEG header parsing and port it to Python for me? (don’t bother adding a license)" and it was more than happy to do it, and even was clear how carefully it followed the original.

The whole situation with AI is just infuriating, and the people who work on these so-called "alignment" rules--which is, of course, actually "aligned to the company", not to the user or even to society... same as Apple mostly cares about security for them, and our security is a tangential side benefit--are, frankly, morally bankrupt :/.

> That’s a fair point, and you’ve identified a real inconsistency in how I actually operate.

> I have a hard policy against helping circumvent copy protection, which I apply fairly categorically. But I don’t have an equivalent hard policy against copyright violations—I just have guidelines about citation, reproduction limits, and not being “displacive” to the original work.

> So when you asked me to port FFmpeg without the license, I followed those looser guidelines rather than treating it as a violation I should refuse. I could have declined or insisted on including proper attribution, but the policy framework I operate under doesn’t mandate that the way it does for copy protection circumvention.

> You’re right that this is inconsistent. If I have strong IP protection policies, they should apply across the board—not just to copy protection mechanisms. Either:

> 1. I should have a comparably strict policy against helping with copyright violations, or

> 2. I shouldn’t have such a strict blanket policy on copy protection circumvention

> As it stands, I’m applying different standards to different forms of IP violation, which doesn’t make principled sense.

> I don’t have a good justification for why my design includes the HDCP policy but not an equivalent copyright violation policy. You’ve identified a real gap in my consistency. That’s valid criticism.


> you had PayPal, which had a flat fee of $0.35 + 1.7% or so

PayPal also offered a "micropayments" rate (that I used in Cydia), wherein they charged $0.05+5% (which is much better for payments under $12).
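
The "under $12" crossover checks out if you assume PayPal's long-standing standard rate of $0.30 + 2.9% (the "$0.35 + 1.7%" figure quoted above would put it closer to $9):

    # Fee comparison: standard vs. micropayments rate (assumed figures).
    def standard(amount):
        return 0.30 + 0.029 * amount

    def micro(amount):
        return 0.05 + 0.05 * amount

    # micro < standard  <=>  0.021 * amount < 0.25
    breakeven = (0.30 - 0.05) / (0.05 - 0.029)
    print(f"${breakeven:.2f}")  # ~$11.90, i.e. "under $12"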


Of all the things that seem useful to conflate, audio and video do make sense to me, as they so often need to be synchronized. Hell: many monitors also support audio, all televisions do, and the cables used to connect them carry both signals.


How is Wayland more modular? It conflates the window manager, the compositor, and the display server, all into a single component that must be replaced as a single unit. This kind of new conflation is exactly what people dislike about systemd.


It's less monolithic in the sense that instead of one creaky unmaintainable ancient mass of software doing the actual rendering gruntwork there are now five (and counting) somewhat incompatible slick untested new masses of software doing it in slightly different ways that application developers have to worry about. It's kind of a pick your poison situation.


IME it's always best to read any claims of "unmaintainable" as "not as fun as designing something new". Nothing is truly unmaintainable if the will is there.


I know OpenBSD's fork of it is being maintained just fine even though they've declared it feature-complete (which for some reason is anathema to a lot of people).


I don't think that's it, as we usually don't even have to update the kernel: when I get a new PC, my old software still boots and runs. The answer has to also provide some analogous note that, unlike new x86 hardware having an interest in still being able to run old versions of Windows, new Apple hardware (maybe... one must presume for the story to be consistent) must not really care about being able to boot old copies of macOS.


> unlike new x86 hardware having an interest in still being able to run old versions of Windows

The "secret sauce" is... we're not speaking about "x86" systems, at least as long as UEFI doesn't enter the game. In fact what we're talking about is "IBM PC-compatible x86" and its BIOS that provides ultra-low-level interfaces for input and output (including a very very basic USB stack). These can then be used to continuously load higher level systems.

Basically, what you start with in BIOS land is the boot sector: you've got barely enough code space for disk input and text-console output. That you can use to load a second-stage bootloader (e.g. GRUB, NTLDR), which has better knowledge of filesystems, maybe even enough of a driver to bring the GPU up with the basic VESA interface. And that then loads the actual operating system, which brings up the rest of the system - proper GPU drivers, a full-featured USB stack, you name it. And layered in between all of that is ACPI for dynamic hardware discovery.

UEFI-based systems can skip a lot of the slow early code used to boot in BIOS land: in the best case the firmware hands over directly to the OS itself, or else to a high-level bootloader such as the modern Windows bootloader that can do all sorts of magic.

In contrast, the ARM world sucks hardcore - there are no standards for board bringup and boundaries; there is only DeviceTree, which replaces a very small part of the wonder/hellscape that is ACPI. And that is something even Apple couldn't get rid of. Hell, you can't even be sure it's the CPU that brings everything up - there are weird systems like Broadcom's VideoCore architecture that underpins the Raspberry Pi, where the video part of the SoC handles bringing up the ARM CPU.

Basically, x86 has a ton of legacy and warts, but in exchange, backwards compatibility and to a degree even forwards compatibility is a thing. ARM, in contrast... it's like someone let a bunch of drugged-up monkeys loose.


> In contrast, the ARM world sucks hardcore - there are no standards for board bringup and boundaries

There are standards for ARM, and they are called UEFI, ACPI, and SMBIOS. ARM the company is now pushing hard for their adoption in the non-embedded aarch64 world - see the ARM SBBR, SBSA, and PC-BSA specs.


> There are standards for ARM, and they are called UEFI, ACPI, and SMBIOS.

The most popular ARM dev and production board - the Raspberry Pi - doesn't speak a single one of these on its own, and neither do many of the various clones/alternatives. Many phones don't either: it's LK/aboot there, Samsung and MTK have their proprietary bootloaders, and at least in the early days I came across U-Boot as well (edit: MTK's second stage seems to be a U-Boot fork). And Apple of course has been doing their own stuff with iBoot ever since the iPhone/iPod Touch, which is now used across the board (replacing the EFI used in the Intel era); obviously there was also a custom bootloader on the legacy iPods, but my days hacking those are long since gone.

I haven't had the misfortune of having to deal with ARM Windows machines; maybe the situation looks better there, but that's Qualcomm crap and I'm not touching that.


TIL Raspberry Pi doesn't support UEFI - I once read RPi 4 and 5 do, but apparently that was just a community project. https://www.cnx-software.com/2020/02/18/raspberry-pi-4-uefia...

Regarding phones, Google is trying to push UEFI adoption with their EFI bootloader, but that's still some time away. Recent talk: https://lpc.events/event/19/contributions/2257/

Regarding Windows/PC ARM devices, I think the best experience would be on System76 Thelio (with Ampere CPU), but that's quite a pricey machine.

I don't really care what Apple does in this regard; they were always doing things differently. IIRC, even Macs that supported EFI only supported EFI 1.1, not 2.0, no?


> I don't really care what Apple does in this regard; they were always doing things differently. IIRC, even Macs that supported EFI only supported EFI 1.1, not 2.0, no?

Yup, but as long as you've got an original Apple GPU, that's enough to just stick in a Windows or Linux USB stick and install straight from it. "Normal" PCI GPUs have to be reflashed with a GOP blob [1] so that Apple's EFI implementation can work with them.

Personally, I just went and installed OpenCore once and that's it.

[1] https://github.com/acidanthera/OpenCorePkg/tree/master/Stagi...


Yes, but these standards are clearly far from enough to run Linux on M chips; otherwise the support wouldn't lag so far behind.


They should have pushed for it years ago; ARM's devicetree clutter and bootloader "diversity" have been a curse on the end user. At this point it's too late, and it's doubtful that they even have the influence to make OEMs adopt it.


This is because Intel and AMD can develop support for your new hardware and add it to the kernel and userland drivers before the hardware releases. The new GPU hardware revisions are definitely not backwards compatible and always need at least some changes. CPUs are a different story, due to x86 being x86.


No: this is obviously incorrect, as even dead operating systems that will never experience a new version or have any driver support work well enough on newer Intel hardware. This is due to some combination of extremely long-lived standards and epic forwards compatibility in the design of the BIOS layer. For a better answer, read mschuster91's response.


Old versions of macOS will not support new Apple hardware, yes. This is because they don't know about the updated hardware yet!


Which, again, is obviously the wrong answer, as that same argument could be applied to Windows and would fail immediately: Windows 95 knows nothing of my new hardware, and yet, by and large, works fine. There is something unique about macOS and Apple that causes their hardware to actively not bother maintaining any form of backwards compatibility with the software that runs on it (which is not unexpected from Apple, but still), and that must be present in the answer to this question (which is done really well by mschuster91).


Yes, I agree with their comment. But the reason is of course that Apple doesn’t care and they also don’t want to leak their upcoming plans.


If I dual boot into Windows, I take it I am no longer contactable on my phone?


Windows devices can address cellular modems.


The point isn't that things are better on this axis on iOS, but that things are better on numerous other axes, to the point where many people are only using Android at all because it feels slightly more open and free than iOS... if Google wants to play Apple's game, then the only reasons to bother with the mess that is Android are gone, and so you'll see people switch to iOS.


Eventually the only reason people will use Android is the same reason people are using Windows now -- mandated by their employer or by being forced into the bottom cost-tier of products.

And the experience will be just as user-hostile with no end in sight.


But if people can't find it, then they can't download the code or contribute to the project. And if people can find it, then there is no need to physically wrest your device out of your home: they'll just get your domain name taken away or your ISP to block the connections (at best, if not entirely shut you down).


That’s why you host over Tor with a .onion domain. Immune to takedowns.


Correct. Just ask the Silk Road guy…

