Mashing Enter to bypass full disk encryption with TPM, Clevis dracut and systemd (pulsesecurity.co.nz)
188 points by Bender on Sept 1, 2023 | hide | past | favorite | 127 comments


Meh, it's an interesting exploit for sure (the USB key simulating the Enter key being pressed every 15 ms is cute).

But IMO unless you run a very hardened setup, protecting against evil maid attacks (wherein the attacker has physical access to your machine in its entirety) is really hard, and possibly always will be. In a hardened setup you lock down the emergency initrd shell - either it's not allowed at all, or it uses a password which hopefully is a little more secure against an attacker pressing the Enter key a lot ;)

The real eye opener for me is what Ventoy can do. You can plug it into a computer with Secure Boot enabled, and it will give you a nice user-friendly way to just ... completely and trivially bypass Secure Boot protection. Yes, really: https://www.ventoy.net/en/doc_secure.html

It won't work for every mobo/firmware combo. It worked first try for me on some used Lenovo ThinkCentre M710q I bought on eBay, though. Even with the latest June 2023 BIOS installed.

Ventoy does this by using a known exploit in a GRUB shim signed with the official Microsoft certs (the ones embedded in virtually every mobo sold with Secure Boot support) to pop the KeyEnroll UEFI application and then enroll its own keys in there. Or something like that.

Sure, M$ put out a Windows update ages ago that adds this particular vulnerable signed shim to the DBX revocation list: https://support.microsoft.com/en-us/topic/kb5012170-security... ... but who knows how many more are out there in the wild?

Once you've pwned Secure Boot (and I'm making the case here that any script kiddie can do this on the vast majority of commonly available/used mobos, let alone a professional cybercrime gang, local LE division or, god forbid, three-letter agency), then you can simply pop in your favourite (Arch, obviously) Linux live USB and `cryptsetup open` your favourite encrypted drive in seconds.

For unattended machines, especially if you've already bothered to stick a Clevis in there, you really ought to blend your TPM pin with a Tang pin in an SSS (threshold 2) one-two combo-punch.
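For reference, such a binding is a one-liner with Clevis; a sketch, assuming clevis-luks, a TPM2 and a reachable Tang server (device path and URL are placeholders):

```shell
# Shamir threshold t=2 over two pins: both the local TPM and the remote
# Tang server must cooperate before the LUKS key is recoverable.
clevis luks bind -d /dev/nvme0n1p2 sss \
  '{"t": 2, "pins": {"tpm2": {"pcr_ids": "7"}, "tang": [{"url": "http://tang.example.com"}]}}'
```

With the Tang server on your trusted LAN, a stolen machine can no longer unlock itself unattended from somewhere else.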

But again, if the attacker is literally sitting at your desk typing in your computer connected to your trusted networks and such, you're always gonna be pretty screwed.


You raise a valid point, but this attack is particularly alarming because it circumvents a TPM, which is a hardware module specifically designed to protect against the evil-maid class of attacks. For example, every iPhone after the 5S uses its Secure Enclave (a TPM) to enforce time delays between passcode attempts [0], so that a physical attacker can't brute-force the passcode. (Incidentally, the attack described in the OP is reminiscent of some early attacks against the iPhone that used an HID device to circumvent passcode rate limiting.)

IMO, the continuing difficulty that the FBI has breaking into seized iPhones is proof that it's possible to ship a device with hardware that is sufficiently secure to mitigate against any physical exfiltration of data, but this is probably only possible because of the fully vertical integration of Apple hardware, firmware, and software (and even then - it's still a cat and mouse game, bugs will eventually be exposed, etc.). For open source software that runs on any arbitrary assemblage of hardware components, I suspect it's nigh impossible to achieve this same level of physical security.

[0] https://support.apple.com/en-gb/guide/security/sec20230a10d/...


No, this exploit has nothing to do with the TPM. The problem is tricking the initrd into giving you a shell, at which point you are able to run arbitrary "unprivileged" code (i.e. your shell commands) and tell the TPM to do whatever you want.

Gaining a shell using the technique in this article is functionally equivalent to the bypass-Secure-Boot-then-boot-live-Linux scenario I described in the parent comment.

The article notes that Windows "avoids" this issue by measuring into PCR 11 as early as possible, but it stands to reason that any exploit found in the Windows boot flow before that measurement would pose the same issue as the described Clevis+initrd one.


The exploit has everything to do with the TPM. It sounds like you don't understand why that is.

The TPM measures everything that you feed it. The firmware feeds it whatever binaries it boots, and those binaries may in turn feed it other data (configuration, chainloaded kernels, initrd, etc). Just booting a live disk on a TPM-encrypted computer won't be sufficient to access the data, as the loaded binaries/kernel arguments/etc will be different, so the TPM measurements will be different. As such, it won't unlock.

The problem here is that the TPM doesn't measure things running in Linux (at least, not by default), so it doesn't know the difference between "expected initrd booting correctly" and "expected initrd being exploited".
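A toy sketch of that measurement chain (simplified: it hashes hex text rather than raw digest bytes, but the extend rule is the real one):

```shell
# Toy model of a SHA-256 PCR bank. Real rule: PCR_new = SHA256(PCR_old || data)
pcr_init() { pcr="0000000000000000000000000000000000000000000000000000000000000000"; }
pcr_extend() { pcr=$(printf '%s%s' "$pcr" "$1" | sha256sum | cut -d' ' -f1); }

pcr_init
pcr_extend "firmware"
pcr_extend "shim+grub"
pcr_extend "expected-initrd"
good="$pcr"

pcr_init
pcr_extend "firmware"
pcr_extend "attacker-live-usb"
bad="$pcr"

# The chains diverge, so a key sealed to "$good" never unseals under "$bad".
[ "$good" != "$bad" ] && echo "measurements diverge"
```

Which is exactly why the live-disk route fails here but the recovery-shell route works: the shell is reached without any "wrong" measurement ever happening.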


I was ignorant about the argument here, so I went back and read the article. I’m going to have to side with the other guy. The TPM is not involved nor does it work quite as you’re describing (at least in the configuration outlined).

What the article is describing is a normally headless setup (eg server farm) where the TPM owns responsibility for providing the password for unlocking the encrypted root volume. This has nothing to do with software integrity which I think is what you’re describing.

The specific bug here is that the TPM agent and the interactive agent are running at the same time under systemd, and the exploit is to make the interactive agent rack up enough failures, before the TPM agent finishes, that systemd aborts the normal boot flow. At that point systemd drops you into a recovery shell, you ask the TPM to unlock, and you can now mount the disk manually without ever having entered a password.
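Concretely, the post-exploit steps from that recovery shell amount to something like this (device and mapping names hypothetical):

```shell
# From the emergency shell the boot chain still 'looks right' to the TPM,
# so the sealed key is handed over without any password ever being typed.
clevis luks unlock -d /dev/nvme0n1p2 -n root
mount /dev/mapper/root /mnt
```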

The only reason Windows doesn’t have a problem is they ask the TPM to unlock long before they ever allow for any interactivity. Measuring “initrd is behaving correctly” is not in the purview of a TPM except for “is the executable I’m loading legit” as part of a chain of trust (which again, while valuable and good and important to discuss, is completely separate from this article).


It doesn't really bypass the TPM though. These machines configure the TPM without a pin so they can boot + decrypt the drive in an unattended mode.

I'd trust this setup to prevent drives that were removed from servers from being decrypted (like for recycling), but it's basically useless against any other class of attack.

It's good to see publicized exploits of it though; that way people can link to this blog post if a pointy-haired-boss tries to misapply this class of disk encryption.


> this attack is particularly alarming because it circumvents a TPM,

FWIW, I think this phrasing is a little confusing. The TPM wasn't compromised at all. What happened is that the attack trips systemd into a fatal error path where it gives up and just drops the user at a root shell (!!!). From there you can simply command the TPM to do its thing and cough up the key, and it will trust you because you're the presumably-authenticated boot firmware.


It seems to me that the FBI can break into iPhones now, but they want to pretend that they can’t. The FBI strategy appears to be to fake the lack of capability to break in now, in order to be able to claim that there is a crisis and get law or legal precedent in place to allow much broader access, against potentially much better future technological challenges to break-in.

I believe that, from a technical perspective, my argument is supported by this article [0] that I found posted on the ACLU website. This was a reaction to the FBI’s highly-publicized case against Apple in 2015 where the FBI, I believe, pretended that they could not break into the iPhone that had belonged to one of the shooters in the December 2015 San Bernardino attack.

The article I have linked describes a specific technical workaround for avoiding permanent deletion of data, namely, avoiding deletion in “Effaceable Storage” of a key that encrypts some filesystem data. This may or may not continue to be relevant today. A pop-up on an Apple page explaining iPhone privacy [1] specifically states that Effaceable Storage “doesn’t provide protection if an attacker has physical possession of a device”, but I don’t claim to actually understand the security implications of the statements made on that page.

0. https://www.aclu.org/news/privacy-technology/one-fbis-major-...

1. https://support.apple.com/guide/security/data-protection-sec...


In addition to the inaccuracies others have mentioned, the iPhone secure enclave is not a TPM. Saying it is isn't a simplification, it's muddying the waters.


Can you explain the difference a bit more, and why it's more than just splitting hairs? Is it because TPM is a specific standard for implementation of a secure co-processor, whereas Secure Enclave is part of a larger SoC (T2), which while it could be considered an implementation of a secure co-processor, is distinct from the TPM standard?

In researching this comment I learned that the T2 chip actually runs its own operating system (bridgeOS), so I can see why you'd call it an oversimplification (or muddying the waters) to equate it to a TPM.


The TPM is a TCG standard (also ISO/IEC 11889) and specifies commands that support a lot of use cases. Apple's Secure Enclave is something simpler and more restrictive, tailored only to Apple's use cases.


simpler? hardly.

high level overview: https://support.apple.com/guide/security/secure-enclave-sec5...

low level overview: https://www.blackhat.com/docs/us-16/materials/us-16-Mandt-De...

AFAIK (it's hard to find the info since I don't have it handy, and I don't want to devote the searching time), but to bring it back on-topic: all disk I/O has to go through the Secure Enclave for encryption and decryption. I believe this is better documented on Mac than iPhone. The data storage is always encrypted since T2. If FileVault is enabled, then the user's password gets mixed in with the T2 hardware keys. Because those keys can never leave the T2, all disk I/O necessarily goes through it.

This is vastly different from how a TPM operates.



>But IMO unless you run a very hardened setup, protecting against evil maid attacks (wherein the attacker has physical access to your machine in its entirety) is really hard, and possibly always will be.

Can you describe an evil maid attack on an encrypted disk that gets unlocked by the user's entering a passphrase?


If you've got a user who can enter a long passphrase, you don't need a TPM, so this TPM bypass is moot.

The evil maid attack on an encrypted disk with a passphrase is that the attacker installs a hardware keylogger, then comes back the next day and snatches your laptop, which the keylogger tells them the password for.

If you've got highly sensitive chassis intrusion detection that wipes your secrets at the drop of a hat, the evil maid triggers it in a way that looks like a false alarm, and uses a hidden camera in the ceiling to watch your recovery procedure.

And if you would never be so naive as to have a recovery procedure, the attack is to laugh their ass off because you needed that laptop to give the conference talk / client demo you came to this city for, and it also had your boarding pass on it, and now you're fired and you're stranded.


> If you've got a user who can enter a long passphrase, you don't need a TPM, so this TPM bypass is moot.

That's a weird dismissal.

This attack is a big deal because it provides a workaround to full disk encryption.

I have no idea why so many people in this thread are trying to pretend it's not a big deal. This is a big deal. Full disk encryption is supposed to make sure your data is safe if the laptop is stolen. This attack makes your data vulnerable if your laptop is stolen.


Linux's full disk encryption (FDE) is probably good at discouraging evil-maid attacks if used with a passphrase that the user enters every boot.

But you know how Linux is: anyone can add any feature, no matter how bad of an idea it is. What is probably going on is that Linux's secure-boot support is not good enough to allow secure TPM-based FDE, but someone implemented it anyway. (Or it is good enough if you use a unified kernel image, but most Linux installs do not do that: https://wiki.archlinux.org/title/Unified_kernel_image)

Fedora's installer offers me the chance to turn on full disk encryption, but it does not offer me the option of TPM-based FDE.

>This attack is a big deal because it provides a workaround to full disk encryption.

Security is complicated, and this statement simplifies the situation too much.


It's only a workaround for passwordless/unattended FDE, right?

Regular password-protected FDE wouldn't be vulnerable this way.


In my view, a setup where you are forced to enter a decent-quality disk-unlock passphrase on every cold boot is a rather hardened setup. The problem is, this is awful UX. If something causes a lot of friction, folks tend to just avoid that thing. And that's why many people just bind the disk encryption key to their TPM and call it a day. Thus leading to the exploit detailed in the parent article.

Once you have physical access to a computer, the sky really is the limit on the kinds of exploits (both hardware and software) you can execute on an unsuspecting victim.

Disk password and no TPM binding? `dd` the entire contents of the victim's disk to an external disk. Then infect the bootloader so that the early initrd (which is responsible for that disk password prompt) will send the key to you as soon as network connectivity is established. Game over.

Secure boot? Pwn it with something like the technique I explained already.

Actually good secure boot (custom PK+MOK and a locked down BIOS config)? Pop the laptop lid, solder some wires (or in some cases, just a SOIC clip) to the BIOS flash chip (example: https://forum.phala.network/t/topic/2584), dump the BIOS, flash an insecure one, then do the same bootloader trick already described.

Of course, depending on how hardened the computer being attacked is, the attacks get more sophisticated. But if someone is at the point where they're invading your physical boundaries and messing with your hardware in person, they're probably willing and able to deploy fairly sophisticated attacks?


Because of things like those described by you, I have never trusted bootable encrypted SSDs/HDDs.

In my computers, I have only non-bootable and non-partitioned SSDs/HDDs, which are completely encrypted with a random 256-bit key, so as long as an attacker has only access to the computer, for instance to a stolen laptop, there is absolutely nothing that can be done to gain access to the stored data.

To boot the computers, I use a small and inconspicuous USB memory containing the boot loader and the kernel, from which the encrypted SSD is mounted and then pivot_root is executed to replace the USB memory with the encrypted SSD as the root device, and then the USB memory is removed and it is not used during normal operation.

The USB key contains an encrypted form of the SSD encryption key, requiring the entering of a passphrase after the kernel has booted, but before mounting the encrypted SSD.
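For the curious, a rough sketch of such a flow using stock dm-crypt plain mode rather than the poster's exact setup (all names hypothetical):

```shell
# Inside the USB stick's initramfs, after the passphrase has been used to
# decrypt the keyfile (keyfile decryption elided):
cryptsetup open --type plain --cipher aes-xts-plain64 --key-size 512 \
    --key-file /tmp/ssd.key /dev/nvme0n1 cryptroot  # 512-bit XTS key = AES-256
mount /dev/mapper/cryptroot /new_root
cd /new_root
mkdir -p mnt/old_root
pivot_root . mnt/old_root   # decrypted SSD becomes the new /
umount -l /mnt/old_root     # the USB stick can now be removed
```

Plain mode has no on-disk header at all, which matches the "no non-encrypted sector" property described above.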


While cool/interesting, I'm failing to understand the attack you're protecting against here, in the context of just prompting for an unlock PIN.

Particularly if that unlock PIN is for a FIDO key or whatever, which can also be removed.

I mean, if someone steals your backpack, or stops you at the border, breaks into your house, whatever, they are getting the boot image too, right?

Furthermore, are all your machines AMD Pro or equivalent (with encrypted RAM enabled)? Because otherwise the disk encryption key is probably just sitting unencrypted in system RAM, susceptible to someone freezing the RAM and moving it to another machine to extract the keys.


The "evil maid" attacks described by a poster above are impossible.

Even with physical access, the computer does not have any boot loader or kernel or any other executable that could be altered.

The attack described in the thread title also does not work, because the computer cannot boot. Even after booting from their own device, attackers cannot do anything useful. Reading the encrypted SSD will not provide any information and writing it will be detected later. The SSD does not have any non-encrypted sector.

Obviously I do not keep the USB key with the computer, especially when the computer is not with me. It is never put in the computer backpack.

Of course, if someone would watch me to discover how I start the computer and then they would capture me with all my belongings and then they would do a thorough search they might find the key and they might torture me to get the passphrase.

Nevertheless, against this kind of threat, there are no computer solutions. The only thing that would work would be armed guards.

On the other hand, my method is not vulnerable to trivial attacks that could be done without my knowledge, like the one described in the thread title.


Sounds like a great option. Would be nice if there was an install wizard that could configure that.

Ps I wonder what inconspicuous usb drive you use?


I use various Corsair or Kingston USB flash drives, with sturdy fully metallic cases, and which are just a little larger than a USB connector, i.e. they just have an ear that remains outside the connector when inserted, to allow for their extraction.

An example:

https://media.kingston.com/kingston/key-features/ktc-keyfeat...

I have also used MicroSD cards together with a very small USB adapter, which is also only a little greater than the USB connector.


Are you still using LUKS for these? How are you invoking decryption with the pass phrase protected key?


I am not using LUKS, I am using a custom kernel module that implements a block device that presents to the kernel the decrypted SSD. The kernel module receives the key when it is loaded, then it creates the block device that is eventually mounted as the new root device.

I do not know if LUKS could be used for this, I have not examined it. IIRC, LUKS stores the actual decryption key in the encrypted disk (protected by a passphrase) or in the TPM, like most commercial products for disk encryption, which are methods that I do not approve.


LUKS supports detached headers, maybe this would be useful for your setup? https://wiki.archlinux.org/title/Dm-crypt/Specialties#Encryp...
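A minimal sketch of the detached-header flow, assuming the header lives on a mounted USB stick (paths hypothetical):

```shell
# The LUKS header (key slots + metadata) lives only on the USB stick;
# the SSD itself carries nothing but uniformly random-looking ciphertext.
truncate -s 32M /mnt/usb/header.img
cryptsetup luksFormat --header /mnt/usb/header.img /dev/nvme0n1
cryptsetup open --header /mnt/usb/header.img /dev/nvme0n1 cryptroot
```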

Your approach sounds pretty cool, by the way. I've thought about such an approach in the past, and I've used it for some auxiliary computers under my control, but not for my daily driver. I have a Framework laptop and I could indeed use this approach in quite a stylish way, though: https://frame.work/de/en/products/storage-expansion-card?v=F...


Thanks for pointing to this LUKS feature.

The last time I looked at LUKS was some years ago, when this feature did not exist yet.

In my opinion, this is the only right way to do SSD/HDD encryption. The detached header allows plausible deniability and it avoids downgrading the strength of the encryption key to the strength of the passphrase.

By using the detached header option, LUKS could be used exactly like in my custom setup.


There's also voltage glitching which has been used with great success as well[1][2].

[1]: https://web.archive.org/web/20190801014726/https://www.cl.ca...

[2]: https://arxiv.org/abs/2108.04575


Indeed! Voltage glitching was used to jailbreak Tesla recently! I believe it was also used to jailbreak the Nintendo Switch.

If a discrete TPM (separate chip on the mobo, rather than an fTPM which runs in the CPU/SoC) is in use, one can also use bus sniffing to pwn TPM protection: https://blog.scrt.ch/2021/11/15/tpm-sniffing/


For the Nintendo Switch, it was used to dump some firmware, which was then analysed and found to have a buffer exploit (can't remember what sort), which led to the famous Fusee Gelee exploit on early Switches. IIRC it was a security chip, which is both ironic and incredibly useful, because it couldn't be patched without a hardware revision and it ran before anything loaded from onboard storage.


Is there a way to do 2FA yubikey style disk encryption? Ie. requires a passphrase + user's yubikey to login?


Yes, see https://askubuntu.com/questions/599825/yubikey-two-factor-au...

Note that this is via a second slot, the original passphrase-only key still works unless you take steps to remove it.
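On systemd-based setups there is also systemd-cryptenroll, which does proper FIDO2 enrollment; a sketch (device path hypothetical):

```shell
# Enroll a FIDO2 token (touch + client PIN by default), then drop the
# passphrase slot so the token is actually required.
systemd-cryptenroll --fido2-device=auto /dev/nvme0n1p2
systemd-cryptenroll --wipe-slot=password /dev/nvme0n1p2
```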


The thing that prompts for your password is necessarily unencrypted, the evil maid just needs to modify that to e.g. log the decryption key. On linux, this is usually just a shell script invoking cryptsetup somewhere in the initramfs image.


It may be possible to use both a firmware unlock against an OPAL encrypted drive and validation of the initrd/UKI signature as part of Secure Boot. Either or both protect against this to a certain extent, depending on configuration.

As does measuring all of the above into PCRs, unlocked by a utility that prompts for a PIN used alongside the PCRs to unlock the key.


Depends on the constraints of the scenario.

With "cold boot attacks", after you cut power to a machine the system's memory will still be readable for a brief moment. If you chill the RAM with compressed air you can stave off the electrical self-discharge even longer. Boot to a specialized OS or move the memory to alternate hardware and you can dump its contents.

These attacks specifically target disk encryption and other keys that are kept in memory.


The code that displays the password prompt and unlocks the disk could be replaced by a modified copy, as it isn't encrypted. A TPM would protect against that, though.


That just defeats Secure Boot; it doesn't defeat TPM-backed drive encryption, which only releases the key if the OS is unchanged. Otherwise you could defeat it just by booting a different signed OS.


That depends on which PCRs you bind your TPM-backed encryption key to. See this list: https://uapi-group.org/specifications/specs/linux_tpm_pcr_re...

As an example, Arch Wiki encourages you to bind to PCR 0+7: https://wiki.archlinux.org/title/Trusted_Platform_Module#sys... ... both of those are firmware-level PCRs, not OS ones.
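For the systemd route, that binding looks something like this (device path hypothetical):

```shell
# Seal a new LUKS key slot to the TPM, bound to firmware PCRs 0 and 7.
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+7 /dev/nvme0n1p2
```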


PCRs 0+7 might not be sufficient if you don't have an up-to-date dbx (revocation list): older versions of shim won't (I think) extend enough information into PCR 7 about what they chainload. This means a malicious bootloader could fake the correct PCR extensions.

Updating dbx to a version which revokes versions of shim with this issue would fix this I think, as would including PCR 4 in the list bound to, to make doubly sure that the version of shim is new enough not to present a risk.


I don't agree. What Ventoy does is simply enable the user to do, in a more user-friendly way, what they already have the rights to do.

If the user doesn't have BIOS admin access this won't work. If they do they can just enroll the new key through the menu.


A simple fix might be to bind the encrypted value to a PCR (hopefully one that isn't too fragile, but preferably one that measures the initrd) and then to invalidate that PCR when you drop to the recovery shell (by extending some junk bytes to it).
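With tpm2-tools, that poisoning step is a one-liner; a sketch (PCR index and junk digest are arbitrary):

```shell
# Extend junk into the PCR before handing over control: any key sealed
# against the PCR's prior value can no longer be unsealed from here on.
tpm2_pcrextend 11:sha256=0000000000000000000000000000000000000000000000000000000000000000
```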

But if you can't find a PCR that's both not too fragile and measures the initrd, then you'll have to settle for sealing the encryption key to a fairly static PCR, in which case the attacker could just boot into another OS and do the right PCR-extend dance to get the disk unlock key.

It's the combo of Secure Boot + disk unlock sealed to a PCR that is meant to get you most of the way there. I agree with other comments that evil-maid style hardware-mod attacks are basically impossible to defend against; in practice, most people model this as whether the attacker can pull the disk key in X minutes, rather than at all.


On my laptop I have custom Secure Boot keys. I know I did not sign any other OS (well, I did - but with a different key), so this attack won't work, unless the attacker can somehow trick the unmodified previous version of the kernel + initrd combo into asking my passphrase and saving it somewhere, which is highly unrealistic.


If they have physical access as per this scenario, they can just install a hardware keylogger and get access with your login credentials.


Or even just hit OP repeatedly until they tell you the password, which is likely how many of these cases end.


There's a WIP PR in the systemd repo that does exactly that. It's called pcrlock. It will come to your local distro soon :)


I have long advocated disabling the TPM in the BIOS and UEFI-booting raw dm-crypt before you even get GRUB, much less init. This is also how I have done encrypted disks in the cloud, using dropbear SSH as an initramfs shim for key/passphrase entry. A BIOS boot password is annoying but required. Watch your access/auth logs. Run a HIDS. Isolate your procs and especially their network comms. Security is an onion, not that most C-suites have any idea these days, blinded by fast talkers.


if security is an onion, why do you advocate for throwing the baby out with the bathwater?


Could you be more specific please?


What's your way of providing laptops to your employees? For simplicity, let's assume everyone is located in the same country.


Set up in house via imaging, then control once a VPN is established via CAC tooling. I've run all-Linux laptop fleets this way before, so it does work, but I have some ideas for improvement. PXE is a weak protocol in the stack, for example.


The article title is very misleading. This isn't bypassing FDE in any way. It's just getting a root shell on a machine you have physical access to with a particular boot configuration.

Clever? Yes. But no encryption is bypassed.

Most systems will only be listening to PCR 7 anyway, so a similar attack could be done by loading your own custom bootloader, or possibly reading messages on the SPI bus when booting. This is just a nice trick that's easier/faster.

There is a balance of convenience versus security. This could be prevented easily by disabling the recovery shell or registering more PCRs (with a correct boot setup), but that would be much more annoying to administer remotely, since you could hit failure states where the TPM won't release the keys in a variety of situations.

Ultimately TPM-only unlock is a significant increase in security vs unsophisticated attackers and probably fine for 99% of people, but isn't something to rely on if you are concerned about sophisticated attackers.

Even with perfect PCR setup and enrolling only custom keys in UEFI, a running machine is still vulnerable. Cold boot or DMA attacks (Thunderbolt or PCI) are just a few that come to mind. These sound extremely sophisticated but are easily done even with hobbyist equipment. Any running machine with currently unlocked disks should be assumed to be possible to compromise with physical access.

If interested in Linux boot chain Poettering has a good read: https://0pointer.net/blog/brave-new-trusted-boot-world.html

There are a lot of interesting talks around Linux boot security in the upcoming All Systems Go! conference: https://all-systems-go.io/

Microsoft has info regarding boot security in BitLocker Countermeasures: https://learn.microsoft.com/en-us/windows/security/operating...


The best hacker I know is my 12 month old.

Give him any kind of device for 2 minutes and you will discover a plethora of UI issues, hidden menus, and security bypasses.


I came home one day and my cat was lying on my keyboard. Above her was a screen full of random control characters in a terminal, with a kernel panic screen. So there's some series of buttons you can press that will crash a Linux desktop from a lock screen.


My favorite bug report in this vein: https://bugs.launchpad.net/ubuntu/+source/unity-greeter/+bug...

> Steps to reproduce:

> 1. Leave laptop unattended in a cold room with a warm cat.

> 2. Cat will sit on laptop keyboard.

> 3. Wait 1 hour.

> 4. Lightdm will become unresponsive.

Also included: “Alternative steps to reproduce if cat is unavailable or uncooperative about being placed on keyboard (see attached pictures)”


Possibly sysrq if you have the key + c or m [1]. Good kitty.

[1] - https://www.kernel.org/doc/Documentation/admin-guide/sysrq.r...


At least for my cat and sad Surface Book Windows machine, I figured the device was literally cooking its RAM into a bitflip, which caused it to BSOD. The cat loves how toasty it gets!


> I figured the device was literally cooking its RAM into a bitflip, which caused it to BSOD.

Funny, that's exactly how I push "control" in emacs.

Ref: https://xkcd.com/1172/


Real programmers induce brainwaves into their cats telepathically so they sit just so to flip the required bit


Mirror [1]

[1] - https://archive.ph/zr3Zf


infinite captcha loop


Are you using Cloudflare's DNS?

Archive.is purposefully gives bad results to Cloudflare's DNS resolvers, as Cloudflare does not respect and pass along the EDNS client subnet information that Archive wants in order to run their own CDN.

https://jarv.is/notes/cloudflare-dns-archive-is-blocked/


Cloudflare also gives captcha loops if you have a number of different VPN extensions installed.


Trying to access Archive on Mozilla with my only extension disabled (uBO) also results in captcha loop...


The problem is probably Mozilla (Firefox?), it's a third-class citizen these days.


I am on Chrome.


I'm using Chromium, uBO, and 8.8.8.8 with no issues regarding archive.is, so... ┐( ̄ー ̄)┌


I tried it in a pretty vanilla Edge and it worked. I guess that's two reasons to keep Edge around: downloading your browser of choice, and using Archive.


Today it works for me, same setup, same DNS, same browser. :???:


Also seeing infinite Captcha loop, not on Cloudflare DNS.


.ph was working for me yesterday and is working today but most of the time I'm stuck in an infinite captcha loop. I'm using Google DNS, Safari on iOS, and no extensions. I dunno. I figure Cloudflare is just broken sometimes.


Interesting. I've never seen a captcha on that domain. Maybe uBlock or NoScript is preventing it for me. The admin is on HN so maybe they will see this.


They suggest modifying the kernel command line to disable the root fallback. Does Secure Boot include a hash of the GRUB boot parameters? Couldn't the attacker just change the command line back (or to an even easier-to-exploit config)?


In a "properly" hardened setup, you would protect the GRUB command line with ... yet another password! :) https://help.ubuntu.com/community/Grub2/Passwords

But see my sibling comment about the possibilities of trivially bypassing Secure Boot.


Grub passwords don't protect against the attack I was thinking of, since you can just pop the drive + edit the grub config:

"Errors in creating a password-protected GRUB 2 menu may result in an unbootable system. To restore a system with broken passwords, access and edit the GRUB 2 configuration files using the LiveCD or another OS."

However, it didn't occur to me that you can just press "e" at the GRUB prompt and modify the command line without even popping the drive. "Trivial" indeed.


> since you can just pop the drive + edit the grub config

My current Secure Boot configuration only allows booting a signed GRUB EFI image which contains the configuration. Modifying it on disk would invalidate the signature, causing Secure Boot would fail. My `/boot` isn't encrypted, but each file that GRUB accesses (eg. the initrd image, vmlinux, background.png...) also has a `.sig` file and GRUB refuses to load any unsigned (or invalid) resources. This means that GRUB doesn't need a password to get into the initrd, and I can just enter one password from in there.

Next, I'm considering tying user data decryption to login and allowing the root system to be unlocked by the TPM. It seems like a good compromise to me, as I don't keep persistent data on `/` anyways. The host SSH key will be there, but still protected by the TPM and the above chain.

Edit: This would be better with aggressive measured-boot parameters. I don't care about losing `/` to a temperamental TPM, and that SSH host key is otherwise somewhat vulnerable. I'll have to learn more about measured boot and PCRs.
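A sketch of what that signature enforcement looks like in an embedded grub.cfg (the key path is hypothetical; building with `grub-mkstandalone --pubkey` sets this up implicitly, the explicit form is shown for clarity):

```
# Embedded grub.cfg fragment: refuse any file lacking a valid detached .sig
trust (memdisk)/boot.pub
set check_signatures=enforce
export check_signatures
```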


This is what I do for my laptop - I build a custom GRUB image which enforces GPG signatures (including on grub.cfg) using grub-mkstandalone. This also has a built-in configuration which enforces passwords for editing boot commands. That GRUB efi image is signed by a custom secure boot key which I enroll. Kernel and initrd are signed by the gpg key (and the kernel also has to be signed by the secure boot key otherwise it won't load in this scenario).

The root FS is then encrypted using clevis to lock to the TPM PCRs (only). I use PCRs 0,2,4,7 for this. So the laptop will boot to a login screen without needing a password.

My home directory is separately encrypted and gets unlocked with the login password using pam_zfs_key. It works pretty well and I'm happy with the security for my threat model (casual theft is really my main concern).

I am very aware that my home directory stays unlocked unless I actually power down the machine though.
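The setup described above might look roughly like this (device paths, key names, and file names are all placeholders):

```shell
# Build a standalone GRUB EFI image with an embedded config and GPG pubkey
grub-mkstandalone -O x86_64-efi --pubkey=/root/boot-gpg.pub \
    -o grubx64.efi "boot/grub/grub.cfg=embedded-grub.cfg"

# Sign the image with the enrolled Secure Boot key
sbsign --key db.key --cert db.crt --output grubx64.efi.signed grubx64.efi

# Detached GPG signatures for everything GRUB will load
gpg --detach-sign /boot/vmlinuz-linux
gpg --detach-sign /boot/initramfs-linux.img

# Bind the LUKS root to TPM PCRs 0,2,4,7 via Clevis
clevis luks bind -d /dev/nvme0n1p2 tpm2 '{"pcr_bank":"sha256","pcr_ids":"0,2,4,7"}'
```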


It's not the first bug of this type: https://www.engadget.com/2015-12-18-log-into-most-any-linux-...

I recall GNOME had a similar issue, though I can't find a link right now.



Sounds like it would also have worked to replace the hard drive with one of your own, whose OS drops straight into a root shell to read the TPM secret.

The solution is to bind the TPM secret to a measured boot state, but that requires a way to precalculate the expected measurement values. Read more about signatures for these precalculated measurement values here: https://0pointer.net/blog/brave-new-trusted-boot-world.html (TL;DR: publishing prebuilt OS vendor kernel+initrds together with signed PCR policies allows the secret to be bound to the signature being valid, so it still works when auto-updates change the measurements, as long as the vendor signature is also there.)


The point of this article is that this attack doesn't change any of the TPM measurements.


There was this awesome passkey I saw recently that you could only use by uploading programs to it (vs. using the TPM API), and then that program's hash would occupy a PCR slot. This would allow minimal protocols, specialized to their use cases, with updatable APIs. But I'm blanking on the name!


Full disk encryption is often more work than it's worth.

Unless you really need it, prioritize convenience and data recoverability.

Better to have a separate encrypted volume for just your sensitive data.


I'm imagining a scene in a Bond movie where 007 casually bypasses encryption by mashing the enter key. Actually I don't think even movie studios would think that looks realistic.


I don't see any encryption bypass there. Encrypted partition stays encrypted.


Did you read the article? ;)

> From here it’s easy to manually use the TPM to unlock the disk with the Clevis tooling and mount the root volume for hacking (it takes a few tries sometimes, but it gets there in the end):

They use the exploit to get dropped into a root shell, then ask the TPM to unlock the disk for them, which it promptly does.


Technically, but really the fault lies with the OS for providing an exploitable prompt that doesn't break the boot chain of trust on failure.


Ubuntu developer here. I don't think that's fair. The OS, as shipped today, isn't designed with this threat model in mind. It's correct to say that for TPM-based LUKS unlock to be safe, the initramfs must not allow the user to take control of it. But that's a new requirement introduced when the user modified their system to configure TPM-based LUKS unlock. This isn't the responsibility of an OS that doesn't ship with that support, and in fact, prior to TPMs, it was perfectly reasonable to allow the user to take control of the initramfs prior to LUKS unlock!

That's not to say that an improvement can't be made as we move towards a future where TPM+LUKS might be the norm - just that you can't retrospectively claim fault on an OS that doesn't claim to support that.


> This isn't the responsibility of an OS that doesn't ship with that support, and in fact, prior to TPMs, it was perfectly reasonable to allow the user to take control of the initramfs prior to LUKS unlock!

And it still is perfectly reasonable behavior. Any distribution that decides (perhaps as a consequence of this) that on any boot process abnormality the default response is to enter an immediate reboot loop would be called crazy.


Sorry but Clevis (similarly to bitlocker without PIN) is security theatre.


> it was perfectly reasonable to allow the user to take control of the initramfs prior to LUKS unlock!

It still looks perfectly reasonable; a PCR should just be extended with some value before that, maybe a hash of the initramfs file? That seems right, since at that moment the state of the operating system differs from a properly booted one, which fits the idea of secure boot.

This way it would still be possible to unlock LUKS with a passphrase, but not to access TPM keys sealed against a configuration where the initramfs shell is not running.
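The suggestion above is just the PCR extend operation: a PCR can only ever change via extend(old, m) = SHA-256(old || m), so extending with anything before handing out a shell makes the sealed value unreachable. A toy model in shell (hex-string concatenation rather than real TPM byte semantics):

```shell
# Toy PCR: starts at all zeroes, changes only via extend()
pcr="0000000000000000000000000000000000000000000000000000000000000000"
extend() { pcr=$(printf '%s%s' "$pcr" "$1" | sha256sum | awk '{print $1}'); }

# Normal boot path: measure the initramfs; the TPM seals the key to this value
extend "$(printf 'initramfs-contents' | sha256sum | awk '{print $1}')"
sealed_pcr="$pcr"

# Before dropping the user into a rescue shell, extend again: the PCR diverges
extend "$(printf 'rescue-shell-entered' | sha256sum | awk '{print $1}')"
[ "$pcr" != "$sealed_pcr" ] && echo "PCR diverged: unsealing would now fail"
```

There is no way to "un-extend" a PCR short of rebooting, which is exactly the property that makes this fix work.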


I think it is fair to say that OS configuration to fix this should be required.

I agree with the logic that by default the OS is doing the right thing without any changes to support this model.


This is one of those designs where you skim a description and already know it’s going to break in a million different ways.


I still don't understand why people believe that you can have any expectation in these ludicrous scenarios.

Like, you are dropping a general-purpose computer into the middle of Russia, expecting to be able to command it remotely to do anything, even remotely update it and reboot it, while never bothering to check on it in person or even having minimal chassis intrusion detection. Do people really expect this to work just by adding some TPM craziness?

The article is just connecting a keyboard sniffer/simulator but it would have been as easy to do anything to the network traffic, SSD, motherboard/CPU JTAGs, RTC, etc.


Well, it's sorta possible. Just not with the TPM, and not in general purpose computing applications.

Games consoles are super locked down to prevent piracy, and some modern consoles have gone years without a successful hack, despite being in the physical possession of the attacker. The iPhone's Activation Lock is extremely hard to bypass, and even cops and border guards struggle to extract users' data.

Simply buy your PC from your operating system vendor directly, and forego options like being able to replace components and being able to install your own software, allowing the memory and PCI bus to be encrypted. Add some cloud backup features so the device can wipe itself at the drop of a hat without losing your data. After that, just have your OS vendor produce perfect code with no exploits, and you're secure!

Simple! /s


100% this is sketchy. I was thinking that if this were a supported security model, you shouldn't be able to take control without disabling the TPM.

But it isn't, so it's not the OS's problem.


I did. It says 'From here it’s easy to manually use the TPM to unlock the disk with the Clevis tooling and mount the root volume for hacking (it takes a few tries sometimes, but it gets there in the end)'

However, the screenshot says 'Unsealing failing', yet ubuntu-lv is mounted. I don't follow how the LUKS password was guessed for it.


No password was necessary; the TPM was the unlocking mechanism.

Secure Boot with TPM-backed disk encryption works off a series of numbered hash registers (PCRs). The idea of TPM-based FDE is that the machine will use Secure Boot to boot only a software chain that the end user trusts not to contain authentication bypasses. During boot, the EFI firmware measures each stage of the boot chain into the TPM, and the TPM only releases the full-disk encryption key (really the key-encryption key, since the TPM isn't fast enough to actually decrypt the disk) from its slot if each stage and configuration is valid.

This issue breaks that chain. In some sense it's an illustration of this system being silly conceptually, but it is a real issue IMO.
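For comparison, the systemd flavour of this binding (not what the article's target used — that was Clevis — but the same sealing model; the device path is a placeholder):

```shell
# Enroll a TPM2-bound keyslot sealed against the Secure Boot state (PCR 7)
systemd-cryptenroll /dev/nvme0n1p2 --tpm2-device=auto --tpm2-pcrs=7
# At boot, systemd-cryptsetup asks the TPM to unseal; if PCR 7 has changed,
# unsealing fails and it falls back to prompting for the passphrase.
```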


No LUKS password was guessed; the clevis-disk-unlock command in the last screenshot used the TPM to supply the secret for a LUKS keyslot, which in turn yields the actual key to decrypt the disk. The TPM should have had information about the boot state that let it refuse to hand over that secret, but didn't.


The system that should only unlock the drive after the appropriate remote command has been provided, unlocks the drive without the remote command being provided. That's the problem.

I'm not sure why you would rely on just the TPM in this case, though. TPM only disk encryption is rather risky, you'd expect a TPM+PIN setup at the very least.

You'd still be at risk from this flaw, because the root shell would allow sniffing the key from a later legitimate session. Ideally, brute-force attempts against the user account should be met by rebooting or by refusing further interactive input.


> I'm not sure why you would rely on just the TPM in this case, though. TPM only disk encryption is rather risky, you'd expect a TPM+PIN setup at the very least.

I think the target market is "I have a server in a data centre, I need unattended boot, I don't really need a high grade of security I just need to tick a checkbox saying the hard disk is encrypted"

If your organisation is large enough to start losing track of entire servers, and yet small enough you can't adopt effective organisational controls to prevent such losses, even mediocre encryption might give you some peace of mind - and it lets you avoid reporting data breaches, as the lost data was 'encrypted'.


Looks like it. From https://rogueai.github.io/posts/arch-luks-tpm/#unlock-the-lu... 'From a security point of view, passwordless LUKS unclocking might look like we’re giving up some security, as booting will go straight to login without asking any password whatsoever. We’re indeed trading a bit of security in favour of convenience, it’s important to note though that binding the LUKS to the TPM ensures the volume will only unlock in our machine, with Secure Boot enabled and our signed boot image.'

So there we are, somewhat breaking the 'Secure Boot' process in general.


Another potential use is an encrypted root with home directories that subsequently require the login password to decrypt (using pam_ecryptfs, pam_mount, etc). Less secure than the root FS needing a PIN/password, but it can still defend against some threat models.
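With pam_mount, for instance, that looks like a volume entry keyed to the login (a sketch; the username and UUID are placeholders, and it assumes the login password doubles as the LUKS passphrase):

```
<!-- /etc/security/pam_mount.conf.xml fragment -->
<volume user="alice" fstype="crypt"
        path="/dev/disk/by-uuid/0000-HYPOTHETICAL"
        mountpoint="/home/alice" />
```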


It's not a totally meaningless check box. If the key for decrypting the disk is in the TPM, this fixes the case where the drive gets pulled and thrown in a recycle bin, then someone recovers data from it later.


> this fixes the case where the drive gets pulled and thrown in a recycle bin

Also allows you to do that when you retire a server or cluster. I've been in situations where we had to wipe hundreds of multi-TB hard drives. If you've never done it, that takes a good deal of time. You can get appliances that do it, or try to build your own DBAN rig, but it still takes days. Or you can just shred them, but hard drive shredders are not cheap either, and that's rather wasteful and may not be environmentally conscious.


Yes, when I say "small enough you can't adopt effective organisational controls" I mean organisations that are large enough that they're discarding so many disks they might accidentally forget to wipe some, and yet small enough they don't have procedures and record-keeping that prevent such accidents.

A large organisation will usually have tedious checks and record-keeping for wiping and discarding hardware, probably instituted after they wiped and discarded the wrong hardware.


It is not about forgetting to wipe a disk. A realistic scenario is that an SSD fails in a way that it ceases to be recognized by the system. At this point, you have no way to wipe it. Still, you may be required to return it to the vendor (by the contract that gave you the discounted price in the first place) - and they can read it using their tools not available to mere mortals.


In addition to sibling replies, I'll add that a common important use for encryption (and reason to have it be completely standard 100% of the time, even in fully transparent mode) is storage EOL procedures. It's much easier/cheaper/safer to get rid of an HDD or SSD and feel confident everything is gone if it was all fully encrypted from the beginning and you just need to trash the key.
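Concretely, retiring a fully encrypted LUKS drive reduces to destroying the keyslots (the device name is a placeholder; both commands are destructive):

```shell
# Crypto-erase: wipe every LUKS keyslot, making the ciphertext unrecoverable
cryptsetup luksErase /dev/sdX
# Optionally clear the on-disk header signature as well
wipefs -a /dev/sdX
```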


The threat vector mitigated by Clevis[1] is someone with physical access (e.g. an insider) removing the server from the data center and being able to access its data.

[1] https://github.com/latchset/clevis


I don't see any discussion of threat vectors on that page, but Clevis clearly fails to mitigate the threat vector you describe.


You can't read encrypted data without the key


So to double check my understanding of this article and the linked one that led to it[1], the issue is:

1. the TPM does not require a password to get the decryption keys

2. bootup decryption passwords are checked by code, before asking (using?) the TPM's data (passwordless, because the bootup code is trusted)

3. exhausting multiple layers of retries causes ^ part 2's code to accidentally get/use the decryption data from the TPM

And assuming that's correct....

WTF

Isn't that stupidly insecure design, because even with trusted-boot checks it's probably trivially bypassed with hardware access (edit: like cutting traces)? Why even offer a bootup password if that's how it's implemented?

Like... at the absolute minimum, a sane design would not completely trust the TPM. Combine the TPM's key with the password to get the real key. No additional steps for users, and it eliminates all TPM-only attacks (subject to your password strength... but currently that is 0 so anything is an improvement).

[1]: https://hmarco.org/bugs/CVE-2016-4484/CVE-2016-4484_cryptset...
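The "combine the TPM's key with the password" idea is just key derivation. A toy sketch in shell (a real implementation would use a proper KDF such as HKDF or Argon2; the secret value here is made up):

```shell
tpm_secret="deadbeefcafef00d"   # hypothetical value unsealed from the TPM
passphrase="hunter2"

# Toy KDF: hash the concatenation; neither input alone yields the disk key
disk_key=$(printf '%s|%s' "$tpm_secret" "$passphrase" | sha256sum | awk '{print $1}')
tpm_only=$(printf '%s|%s' "$tpm_secret" "" | sha256sum | awk '{print $1}')
[ "$disk_key" != "$tpm_only" ] && echo "password contributes to the key"
```

With this construction, dumping the TPM secret alone (as in the article's attack) is no longer sufficient to derive the disk key.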


TPMs typically won't give out a secret unless all of the software which has been loaded and executed was "measured" and found to be unmodified from when the secret was stored. So you couldn't simply stick your own hard disk or USB disk into the computer and then ask the TPM for the secret: the running software wouldn't match, and the TPM would refuse.

This is a problem because the "approved" software has this strange vulnerability to get into the rescue shell, right at the point where the TPM would be happy to give you the secret because the software is unmodified.


Which wouldn't really be an issue if all the trust wasn't unnecessarily put on a single system, yes.

This is the equivalent of a rubber hose attack, where your OS vendor is being threatened and can give away your information retroactively. That's ridiculous to even consider allowing if you're selling something as "securely encrypted", just make it E2EE (derive the key rather than directly using the TPM's data).


On further reading, and wondering what some of these terms/tools are: since this is describing a system with unattended decryption, it seems like it literally requires passwordless access to the disk encryption keys.

If that re-reading is correct: yeah, there's not much choice but to put all hope in your code + the TPM, since you essentially can't wait for a password. You get what you pay for in that case - no password means no password, so you're inherently vulnerable to exploits like this.

The linked article, https://hmarco.org/bugs/CVE-2016-4484/CVE-2016-4484_cryptset... , seems to describe this kind of setup: the exploit gets you root in the boot partition only, but not any encrypted volumes, so encrypted data is still encrypted. An attacker could get access to anything not protected otherwise though, e.g. unencrypted volumes and hardware, which is a potentially big problem for cloud environments but not so much personal device theft (they already have the hardware).


Nah, I'm doubting myself more here as time passes.

If it's meant for unattended booting, why/how is there a password?

This smells more and more like an extremely poorly thought out system that allowed a bug to compromise the system.


There's no viable reason to reboot a Linux box. Even for kernel upgrades you have kexec, and there are ways (hacks) to hand over the LUKS keys to the new kernel.

If you need to reboot an encrypted box remotely AND have it automatically decrypt without you knowing the key, you've already lost the game.
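For reference, the kexec path looks roughly like this (kernel and initrd paths are placeholders):

```shell
# Stage the new kernel, reusing the current command line
kexec -l /boot/vmlinuz-new --initrd=/boot/initramfs-new.img --reuse-cmdline
# Jump into it without going back through firmware
systemctl kexec    # or, after syncing filesystems: kexec -e
```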


My experience with kexec has been poor. Every time I kexec, I have to load and unload a bunch of drivers to restore the system to a functional state. Critically, i915 sometimes needs to be reloaded after boot to get the display to actually work, and on a VM I had issues with the `cirrus` display driver failing to unload correctly, leaving the system unable to boot. It's random and totally unreliable for desktop use.

Aside, this is a nice script that prompts for the LUKS password _before_ kexec: https://gist.github.com/webstrand/381307348e24c28d5c4c9a5981... It does assume you're using openSUSE's convention of naming the root partition cr_root. I used this because my bootloader was also encrypted, so rebooting into an initramfs with SSH was impossible.


I used to have a custom NAS with full disk encryption whose bootloader would spin up a tiny SSH server with very few features: mainly just an unlock function that let me transmit the encryption passphrase over the SSH tunnel. The server would then close the SSH connection and start up with the decryption key.

That way, it was impossible to access the data physically, but I was still able to reboot the box whenever I wanted without any problem.
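A common off-the-shelf version of this on Debian-family systems is dropbear in the initramfs (package name and config paths vary by release; treat this as a sketch):

```shell
apt install dropbear-initramfs
# Restrict the initramfs sshd: key-only auth, no port forwarding, non-default port
echo 'DROPBEAR_OPTIONS="-s -j -k -p 2222"' >> /etc/dropbear/initramfs/dropbear.conf
cp ~/.ssh/id_ed25519.pub /etc/dropbear/initramfs/authorized_keys
update-initramfs -u
# At boot: ssh -p 2222 root@host, then run `cryptroot-unlock`
```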


That's sensible, unlike trusting that TPM+GRUB+dracut+systemd+getty+logind+whatever all work fine and won't leak your key.

Rebooting a box is often the easiest way to get something done if you can't be bothered with exotic configs. I just don't believe that remotely rebooting an encrypted box for which you don't have the key is ever a reasonable thing to do, and that's the assumption this article is based on.


Might I suggest Mandos as a slightly easier solution to that problem?

(Disclosure: I am a co-author of Mandos.)


> There's no viable reason to reboot a Linux box.

Even cosmic rays flipping bits in your RAM in a way that can't be recovered from?


If that's a big issue for you consider using ECC memory, or even better - colocating your servers in an abandoned mine.


What about libc updates? If you need to restart effectively every process on the system anyways, why not reboot?


Only rolling distros should be shipping mid-release cycle updates that break libc compatibility.

It's not a normal thing for release-based distributions where userspace<>userspace compatibility is nearly assured during a release.


Compatibility does not need to break to require a restart of a process. Linux does not have a mechanism that tells processes to reload their shared libraries when they are modified on disk. When a .so file is updated, for example in a minor update for a security vulnerability, all processes using that library need to be restarted to pick up the fix; if you're not restarting processes on these upgrades, you are not getting their benefits. This is why tools like dnf-tracer-plugin exist, but it's often just as easy to reboot, especially if you built your systems to be fault tolerant.


static link musl



