U-Boot is scriptable, but it's awkward: you put snippets of shell-like commands into environment variables and then wire them all together. It's the most powerful bootloader I've ever worked with, but writing new functionality and debugging it is tedious.
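For the curious, the flavor of that scripting looks roughly like this. The variable names (`load_kernel`, `boot_mmc`) and paths are made up for illustration, but `setenv`, `run`, `load`, and `bootz` are real U-Boot commands:

```
setenv load_kernel 'load mmc 0:1 ${kernel_addr_r} /boot/zImage'
setenv boot_mmc 'run load_kernel; bootz ${kernel_addr_r} - ${fdt_addr_r}'
setenv bootcmd 'run boot_mmc'
saveenv
```

Each variable holds a snippet, `run` stitches them together, and "debugging" mostly means editing env strings at the prompt and trying again.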
“Pay my way or take the highway” is as close to the closed-source ethos as you can possibly get. Collaboration is not feasible if the barrier to entry is too high and those involved make no effort to foster a collaborative environment.
Yes, because with a closed source you can also just download the source and add your own patches and maintain your own fork... /s
Most open source projects have way more patches contributed than the core developers can handle, so they tend to only accept those from the friendliest contributors or with the highest code/documentation quality.
My understanding is that TPM is secure, and Win 11 still supports TPM. Am I mistaken and/or misunderstanding your statement that Microsoft is enforcing a hardware requirement with a known back door?
Here's a significantly more credible (stacksmashing) video that demonstrates how ineffective some TPM implementations are. If the TPM was integrated into the CPU die, this attack would likely not be possible. https://www.youtube.com/watch?v=wTl4vEednkQ
Despite the TPM being a pretty good and useful idea as a secure enclave for storing secrets, I'm concerned that giving companies the ability to perform attestation of your system's "integrity" will make the PC platform less open. We may be headed towards the same hellscape that we are currently experiencing with mobile devices.
Average folks aren't typically trying to run Linux or anything, so most people wouldn't even notice if secure boot became mandatory overnight and you could only run Microsoft-signed kernels w/ remote attestation. Nobody noticed/intervened when the same thing happened to Android, and now you can't root your device or run custom firmware without crippling it and losing access to software that people expect to be able to use (e.g. banking apps, streaming services, gov apps, etc.).
Regardless, this is more of a social issue than a technical issue. Regulatory changes (lol) or mass revolt (also somewhat lol) would be effective in putting an end to this. The most realistic way would be average people boycotting companies that do this, but I highly doubt anyone normal will do that, so this may just be the hell we are doomed for unless smaller manufacturers step up to the plate to continue making open devices.
Sure, let’s just centralize hardware attestation to Microsoft’s cloud, tied to a Microsoft account, with keys you can’t change. What could possibly go wrong?
This is all publicly documented by Microsoft; you just need to translate their doublespeak.
Google is doing the exact same thing, and people were sounding the alarms when they did it, but Microsoft gets a pass?
Use ChatGPT to outsource your critical thinking, because I’m not gonna do it for you.
I've looked into this fella before because he didn't pass the smell test. He's running a grift selling schlocky cell phones and cloud services. His videos are excessively clickbait-y and show minimal understanding of the actual tech, it's more or less concentrated disinformation and half-understood talking points. GrapheneOS devs also had something to say about him: https://discuss.grapheneos.org/d/20165-response-to-dishonest...
I've had to learn about TPMs to figure out whether they're the right technology to integrate into a product I've worked on. Based on my exposure to them, I don't agree that they're a "neo-Clipper-chip" in any real way.
While I'm not a cryptographer... I never really understood the appeal of these things outside of one very well-defined threat model: namely, they're excellent if you're specifically trying to prevent someone from physically taking your hard drive, and only your hard drive, and walking out of a data centre, office, or home with it.
It also provides measured boot, and I won't downplay it, it's useful in many situations to have boot-time integrity attestation.
The technology's interesting, but as best as I can tell, it's limited through the problem of establishing a useful root-of-trust/root-of-crypt. In general:
- If you have resident code on a machine with a TPM, you can access TPM secrets with very few protections. This is typically the case for FDE keys assuming you've set your machine up for unattended boot-time disk decryption.
- You can protect the sealed data exported from a TPM, typically using a password (plus the PCR banks of a specific TPM), though the way that password is transmitted to the TPM is susceptible to bus sniffing for TPM variants that live outside the CPU. And now you have the problem of securing that password. If you're in an enterprise, maybe you have an HSM available to help with that, in which case the root-of-crypt scheme you have is much more reasonable.
- The TPM does provide some niceties like a hardware RNG. I can't speak to the quality of the randomness, but as I understand it, it must pass NIST's benchmarks to be compliant with the ISO TPM spec.
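The measured boot mentioned above reduces to a simple hash chain: a PCR can never be written directly, only "extended". A minimal Python sketch of the SHA-256 bank's extend operation (the measurement strings here are invented; real measurements are digests of firmware and boot components):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = SHA-256(old PCR || SHA-256(measurement))."""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

# SHA-256 PCRs start as 32 zero bytes at reset.
pcr0 = bytes(32)
pcr0 = pcr_extend(pcr0, b"firmware image")  # hypothetical measurement
pcr0 = pcr_extend(pcr0, b"bootloader")      # hypothetical measurement
```

Because the chain is order-sensitive, extending the same values in a different order yields a different final PCR, which is what makes the boot sequence tamper-evident and what sealed secrets can be bound to.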
What I really don't get is why this is useful for the average consumer. It doesn't meaningfully strengthen FDE in particular in a world where the TPM and the storage may be soldered onto the same board, so the drive can't practically be stolen without the TPM coming along with it.
I certainly don't understand what meaningful protections it can provide to game anti-cheats (which I bring up since apparently Battlefield 6 requires a TPM regardless of the underlying Windows version). That's just silly.
Ultimately, I might be misunderstanding something about the TPM at a fundamental level. I'm not a layperson when it comes to computer security, but I'm certainly not a specialist in designing or working with TPMs, so maybe there's some glaring a-ha thing I've missed. But my takeaway is that it's a fine piece of hardware that does its job well; its job just seems too niche to be useful in many cases. Its API isn't very clear (suffering, if anything, from over-documentation and over-specification), and it's less a silver bullet and more a footgun.
> I never really understood the appeal of these things outside of one very well-defined threat model: namely, they're excellent if you're specifically trying to prevent someone from physically taking your hard drive, and only your hard drive, and walking out of a data centre, office, or home with it.
So basically the same thing you'd get by having an internal USB port on the system board where you could plug a thumb drive to keep the FDE key on it?
> It also provides measured boot, and I won't downplay it, it's useful in many situations to have boot-time integrity attestation.
That's the nefarious part. You get adversarial corporations trying to insist that you run their malware in order to use their service, and it's giving them a means to attempt to verify it.
Which doesn't actually work against sophisticated attackers, so the security value against real attacks is nil. But it works against normies, which in turn subjects the normies to the malware instead of letting someone give them an alternative that doesn't screw them.
If I knew absolutely nothing about TPM other than the circumstances in which it was made (who, what, why, when) I would have predicted from that alone that it wouldn't benefit consumers, wouldn't be secure, and that it was motivated by business, not technology.
This is incorrect. Not all CPUs supported by Windows 10 supported the VBS feature.
Microsoft is making VBS mandatory for OEMs, hence the CPU needs to support it, hence the roughly seven-year-old minimum CPU requirement for what Microsoft supports in Windows.
Yes, you can disable it during setup as a workaround, but it's exactly that: a workaround. And as for why you'd want to make your system less secure, well, I'll leave that as an exercise for the reader, who will turn around two weeks from now and complain about Windows security.
Most of the requirements for that feature are UEFI features or a TPM, and have nothing to do with the CPU.
The actual CPU requirements are VMX, SLAT, an IOMMU, and being 64-bit, all of which have been available, on the Intel side at least, since 2008 or so, with some arriving even before that.
The CPU requirement was just an attempt to force people to buy new hardware they didn't need. Nothing more.
A perfect example of this is the Ryzen 5 1600. It's not officially supported but meets every single one of the requirements, and I had no trouble enabling the feature on it in the run-up to the release of Win11 (before it was blocked for no reason). I know this because I did it.
Also, they marked all but one 7th-gen Intel Core CPU as unsupported, and the one they did add just so happens to be the one they were shipping in one of their Surface products. No way you can tell me this list was based on fact and not the whims of some random PM when they do stuff like that.
> and why you'd want to make your system less secure,
I'd offer that the likely goal here is the most usable system possible, working with what one has. If folks are here, there's usually a lot of necessity factors in play.
They might sell more Windows 11 if it ran on more hardware. How does this make them money?
It's worth asking, but I think there's an answer: they want the OS to be transformed into an interface to their cloud where recurring revenue is easier. To do that, they need to make it more like a mobile OS and more locked down. TPM helps this.
Dropping Windows 10 support is a pretty big lever to apply pressure to get people to upgrade to 11. Oh, and it turns out you also “need” to buy new hardware to run it.
Dropping Windows 10 support is a really reasonable decision. The focus is on 11, which has been out for almost 5 years. I'm guessing they are close to releasing 12 at this point, maybe in a year or two. Supporting three fully fledged OSes is quite a lot of work. I also understand supporting newer hardware; they dropped 32-bit on 11 and moved the instruction-set baseline up a bit. You gotta do a cutoff somewhere, and I'm happy that they are at least allowing us to use the improved performance our modern CPUs have. I'm not happy with a lot of stuff, but I get this at least.
I'd argue it's probably time to drop 32-bit x86 support, but the rest of this stuff is arbitrary and doesn't have any tangible benefit except conveniently providing hardware manufacturers with an excuse to unload new hardware onto people when there's nothing wrong with what they have. (not to mention, pardon the conspiracy theory, they're probably trying to use the TPM to turn the PC into a smartphone-like platform)
It's surprising, considering that back when we had Win7 they ran that brief "XP Mode" experiment with a virtualized penalty box.
Why didn't that go further? Presumably virtually any x86-64 box currently in circulation would be fast enough to run a VM running a full copy of 32-bit XP/Win7/Win10, or even a full carousel (or download store) of DOS and early-windows releases. It could be the most compatible Windows ever, solving the weird "64-bit systems can't run some 16-bit apps" gotcha and perhaps allowing some way to bridge in support for devices that can only be driven by old 32-bit XP drivers.
> They might sell more Windows 11 if it ran on more hardware. How does this make them money?
Given the free Win 7/8->10->11 upgrade path, almost every end user who'd want a Windows license probably already has one. This leaves enterprise licensing and computer manufacturers (laptops, mini-PCs, desktops), who wouldn't care about this because they'll have newer hardware anyway.
No, they will make the same money either way, because they are selling the OS, not the hardware. They are requiring newer hardware to limit their exploitable surface and reduce their compatibility list.
They also sell a license with the new hardware. The vast majority of the public never buys hardware without an OS. So yes, they are making more money with each new hardware sale. Plus, the increase in forced advertising means they make more per user, effectively double dipping.
Why do you feel the need to defend a convicted monopolist for engaging in user hostile behavior?
ZFS has been stable in FreeBSD for something like 17 years, and FreeBSD jails have been around for something like 25 years.
By the time Docker hit 1.0 (about 11 years ago), the use of snapshots and jails had already been normal parts of life in the FreeBSD space for over half of a decade.
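For readers who never touched that stack, a rough sketch of what "snapshots and jails" looked like in practice; the pool, dataset, and jail names here are invented for the example:

```sh
# Carve out a ZFS dataset per jail, snapshot it, and clone cheap copies:
zfs create zroot/jails/web
zfs snapshot zroot/jails/web@clean
zfs clone zroot/jails/web@clean zroot/jails/web-test
```

A few lines in /etc/jail.conf (path = "/zroot/jails/web"; host.hostname; ip4.addr; exec.start = "/bin/sh /etc/rc";) and `service jail start web` would then bring the "container" up, with snapshots giving you instant rollback and cloning.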
I never find arguments like this compelling (though I agree with your sentiment). I don’t much care to fire up the time machine and go back 12 or more years to develop software today. If your argument is that ZFS and jails provide the same functionality as Docker but are more stable, then make that argument; as is, it comes off as “get off my lawn, you young whippersnappers”.
But at the same time, the reason Docker won was not that it was groundbreaking tech or amazingly well tested. Just as one example, it has a years-old bug, still collecting new comments every week, about Docker grossly mishandling quotes in env files.
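To illustrate the kind of mishandling being described (the file name and variable here are made up for the example):

```
# app.env -- hypothetical env file
GREETING="hello"
```

A POSIX shell sourcing this file sets `GREETING` to `hello`, but `docker run --env-file app.env …` has reportedly long passed the quotes through literally, leaving the variable set to `"hello"`, quotes included.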
No, the reason it won is that the development experience and the deploy experience are easy, especially when you are on Linux AND on macOS. I can’t run FreeBSD jails or ZFS on macOS, can I? Definitely not with one file and one command.
Jails and ZFS are amazing tech but they are not accessible. Docker made simple things very simple while being cross-platform enough. Do I feel gross using it? Yeah. It’s a kludgy solution to the problem. But it gets the job done and is supported by every provider out there. I am excited that it is being ported to FreeBSD though I know it will be a very long process.
> especially when you are on Linux AND on macOS. I can’t run FreeBSD jails or ZFS on macOS, can I? Definitely not with one file and one command.
On macOS, docker actually launches a Linux VM to run containers. If this counts, then yes, you can run FreeBSD jails or zfs on macOS, by running a FreeBSD VM.
But it works with one command and one (Docker)file. That’s what I mean by Docker being a kludgy solution: this is way less than ideal. But for developer experience this is very nice. And that same Dockerfile runs on everything from AWS, to GCP, to k8s, to Dokku, etc.
I dislike the implementation but I cannot deny that the UX is good enough to be very popular.
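As a concrete sketch of the "one file and one command" point (the base image, file names, and app are illustrative, not from the thread):

```dockerfile
# Dockerfile -- a minimal, hypothetical Node.js app
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["node", "server.js"]
```

`docker build -t myapp . && docker run myapp` builds and runs it, and the same file works on macOS (via the hidden Linux VM), in CI, and on AWS, GCP, or k8s.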
Yeah I would love to use FreeBSD jails with ZFS and everything, it’s just that the whole cloud and containerization thing happened based on Linux and FreeBSD just never made it into that ecosystem.
You’ll be sacrificing a lot and have to hand-roll a lot if you want your organization to switch from Linux+docker to FreeBSD+jails
It's all just history now for all I know, but there was work in the past to make Linux containers work on a Solaris fork (SmartOS, specifically) by emulating the Linux syscall table and presenting that to the containers. Joyent did work on this, and there's an excellent and entertaining talk from Bryan Cantrill[1] that goes over it.
I imagine FreeBSD could do something similar if they aren't already. IIRC FreeBSD has a Linux emulation layer (though I don't know how much attention it still gets), and it's had containerization primitives longer than Linux, so some amount of filling in the gaps in containerization features and syscall tables (if needed) could possibly yield an OCI compatibility layer (for all I know, all of this already exists).
The problem, and the reason people probably weren't as interested in doing this work (if it doesn't exist), is that it would always be only "mostly" compatible: there would be no guarantee that the underlying software wouldn't exhibit bugs or weird behavior from small differences in the platform when emulating something else. Why open yourself up to the headache when you can just run Linux with containers, or build what you want on FreeBSD with jails and its own native containerization primitives?
Some kernels are more similar to others, some are less. Turns out NT is less similar to Linux than required for good performance. I wouldn’t be surprised if Solaris was similar enough given that Linux tries to be Unix-like and Solaris is actually Unix.
In my opinion this is the path forward. I can already imagine Hashicorp Nomad orchestrator, with the podman driver, running fleets of FreeBSD containers.
4M requests/day is ~46 requests/second, for content that could be cached heavily. Even if you have spikes 100x bigger than the average, that's ~4600 r/s, which does not seem like much in 2025.
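The back-of-the-envelope math checks out (Python):

```python
seconds_per_day = 24 * 60 * 60            # 86,400
requests_per_day = 4_000_000

avg_rps = requests_per_day / seconds_per_day
print(round(avg_rps, 1))                  # 46.3 requests/second on average

spike_rps = avg_rps * 100                 # a hypothetical 100x spike
print(round(spike_rps))                   # 4630 r/s, roughly the ~4600 quoted
```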
I think there are FreeBSD images for all the clouds now.
You would need to do more work yourself to fetch and run jails probably, and I don't know if there's a hosted repository of 'jail images', but in return, you'd probably have a nicer system (at least, I'd like such a system more than running containers on google container optimized linux)
You can always upload your own, it's pretty simple doing so in a reproducible manner using something like Packer, but even without it you can just boot a VM into a rescue system, write your OS of choice to the VM disk and reboot.
Docker was the first viable containerization technology on Linux. Despite the 15-year late start vs FreeBSD jails, it's certainly winning by the numbers.
But that has nothing to do with their respective UXs. It's a Linux vs FreeBSD signal.
> Docker was the first viable containerization technology on Linux.
No it wasn’t. Docker was late to the party even for Linux (and Linux was late compared to every other “UNIX”).
OpenVZ was around for years before docker. Its main issue was that it required out-of-tree kernel code. But there were some distributions that did still ship OpenVZ support. In fact it’s what Proxmox originally used. And it worked very well.
Then LXC came along. Fun fact: Docker originally used LXC itself. But obviously that was a long time ago too.
I’ve used both OpenVZ and LXC in production systems before Docker came along. But I’ve always preferred FreeBSD Jails + ZFS to anything Linux has offered.
Not really, OpenVZ was/is really quite good, it was just hampered by the fact it requires out of tree modules. LXC has also always been very usable (Docker even used it for several years) but it was IMHO too focused on the VM-like management scheme that Zones and Jails had.
Docker's killer selling point was that it solved a very common and specific developer problem, not that it provided operational improvements over the state of the art on Linux. From an operational perspective, Docker has generally been a downgrade compared to LXC. (I say this as a maintainer of runc, the container runtime that underpins Docker, and as someone who worked a lot on Docker back in the early days and somewhat less today.)
One of Docker's big advantages is that its client/server architecture made it easier to run on all those macOS and Windows boxes by just spinning up a VM for the server side, then making that VM fully managed to where a large percentage of users don't even know it's there.
I think it also helps that Docker started with an enormous amount of VC funding because their promise was to bring the “App Store” to Linux and enterprise servers.
Who couldn’t become famous with something like a $200M budget?
Feel like they spent it on marketing instead.
Podman is arguably technically superior yet people stay with Docker out of habit…
Docker engine is one thing, but access to docker hub without rate limits is what people actually pay for if they’re too cheap to host their own proxy registry (which everyone except the smallest companies should regardless).
You can't use Docker on Mac or FreeBSD. Now, I am not calling you a liar, you _can_ use Docker on Mac and FreeBSD. But you would probably only want to do so for development, as Docker on Mac and FreeBSD requires running a Linux VM which is the thing which _actually_ runs the containers.
There is work ongoing to try to make this more native on FreeBSD (by using Linux jails) but that work is not complete yet.
So, if you want to get the same kind of experience as Docker on FreeBSD, you are forced to use jails.
The only reason Docker seems accessible is because it's native to the platform people seem to like for running all their services, but if you're dealing with FreeBSD, you most certainly would not just "use Docker" to deploy your stuff. Because you would get worse performance than if you had just used Linux.
So the answer to "Isn't this just Docker with extra steps?" is truly and absolutely "No". Not because of some kind of old man shouting at cloud argument, but because if you are on FreeBSD (for whatever reason that might be) you can't just use Docker as an easier replacement for Jails (at least right now).
I have to imagine systemd’s nspawn with btrfs integration took much inspiration. Combined with systemd’s service configuration it really makes a wonderful way of running distroless, immutable containers.
Because it is relatively expensive, totally unnecessary and decadent and probably doesn't do a particularly good job (as people have admitted in their replies to me).
Additionally, it's much like people Ubering a McDonald's order when the drive-through is less than a two-minute drive away: it actually causes additional headaches (the food is more likely to come cold and/or incorrect) and complications that don't exist if you simply spend a few minutes not being lazy.
It's not the same as a full vacuum run. But it's good at what they are designed to do: clean a bit every single day.
All the crumbs that fall on the kitchen floor over a day don't get a chance to get stamped into it. Noticeably less dust buildup on top of counters. I come home and it's done. Mental load removed.
It's neat. And you can get them from 80 EUR. Even if one only lasts 5 years, that's 16 EUR per year, and it saves you maybe 8 hours per year. Maybe it's because I live in a relatively rich country, but here that is not decadent. People buy cars for 50 000 EUR :3
If getting a small vacuum out quickly is a big mental load, I dunno what to say to that. It all seems like it isn't necessary.
It is like having a smart fridge, or something that produces ice cubes for me, or loads of other stupid kitchen gadgets. I didn't feel the need to have a robot vacuum cleaner in the past and I don't feel the need to have one now. Especially with all the iffy spying stuff that it might be doing.
Also, any of these things that costs less than 100 euros is likely to be crap. I just got rid of a lot of old electronics tat.
The cheaper ones are great, because they don't connect to an app or wifi. Mine just has a remote with a timer. Like I wrote you, mine has been going for 6 or 7 years.
I'm not trying to convince you to buy one; I'm trying to explain why I have one, because YOU said that you don't understand it. I'm trying to explain my needs. No need to shame me.
Of all the household items I have, the robot vacuum is the one I would certainly buy again.
Which one is that? I want one without cloud, and Valetudo seems like a pain. Buying an $800 vacuum only to risk bricking it right away is scary. I'd buy a simple one for $80 right away, though.