This brings back memories. I was an international student from Greece at the University of San Francisco that fall, and I attended a CS presentation around that time (I'm not sure if it was Phil or not, but definitely one of the original authors). At the end of the presentation there was a pile of floppies from which you could get a copy of the software. I got one, along with my classmates.
Unfortunately, I was approached a couple of days later and asked to return the floppy. My status as an international student did not allow me to have a copy (arms and munitions export rules, whatever). I felt really upset about it, since I was the only one asked to give my floppy back. I might still have a copy of the floppy in a box in the attic somewhere.
Oh man... I got to work w/ prz for several years (which is why the "two n's" thing stands out in my memory, since it's a common typo). And yet somehow, I still managed to screw that up...
It was on this day in 1991 that Pretty Good Privacy was uploaded to the Internet [... then...] a number of volunteer engineers came forward and we made many improvements. In September 1992 we released PGP 2.0 in ten foreign languages
Fun fact: "We made many improvements" is doing an interesting lift in this paragraph, since PGP 1.0 shipped with a cipher of Zimmermann's own design, Bass-O-Matic, which lived up to its name in part by being demolished by Eli Biham over a lunch. PGP 2.0 introduced IDEA and, I believe, the first (for the era) cryptographically credible version of PGP; a more fitting anniversary to celebrate, perhaps.
(We've learned quite a bit about how to engineer cryptography in the ensuing 29 years, and PGP hasn't kept up --- can't, really; so on the 30th anniversary of Bass-Free PGP, we might fittingly celebrate by finally giving PGP a well-earned retirement.)
I just downloaded the source code for PGP 1.0 and found BASSLIB.C. 1988!
It's interesting to note that the Bass-O-Matic design apparently came from a Navy contractor (Charlie Merritt). It seems possible that Merritt and the Navy both thought this design was secure; maybe they didn't have clearances to learn others' opinions about it, and only used it for unclassified communications.
What do you recommend as a replacement for PGP? (I'm looking for stand-alone software I can use to encrypt files on storage media, not an encrypted e-mail service.)
The high bit of the right answer to this question is that you don't want to replace PGP; one of the things we've learned in 29 years is that you don't want a single tool to do lots of different cryptographic things, because different applications have different cryptographic needs.
For package signing: use something in the signify/minisign family.
To encrypt a network transport, use WireGuard.
To protect a web transaction on the wire, TLS 1.3.
For transferring files: use Magic Wormhole.
For backups: use something like Tarsnap or restic.
For messaging: use something that does Signal Protocol.
To protect files at rest, use encrypted DMGs (or your OS's equivalent, like encrypted loop mounts).
To encrypt individual files --- a niche ask --- use Filippo's ungooglable "age".
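On that last item, a rough sketch of what file encryption with age looks like (the recipient string and file names here are placeholders):

```sh
# Generate a keypair; the file contains the private key, with the
# public "age1..." recipient string in a comment.
age-keygen -o key.txt

# Encrypt to a public key, decrypt with the identity file.
age -r age1exampleRecipientString -o secrets.tar.age secrets.tar
age -d -i key.txt -o secrets.tar secrets.tar.age
```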
It would be nice if there were a reasonable baseline of key format/management that all these new tools shared, though. And I don't mean pgp/gpg keyrings, to be clear, which are cumbersome, error prone, and over-complicated for a lot of these needs (I'm still annoyed that `pass` relies on gpg, forcing me to deal with a keyring).
I see that `age`, which I hadn't actually heard of before, supports ssh keys and identities, which always seemed like a fairly natural baseline to me. It would be nice if more of the command line variety did as well (even if only rsa and/or ed25519 keys).
Why? Why is that good? One of the basic cardinal sins of cryptography is using the same key in more than one context. I don't understand the value of the SSH thing, either.
I don't want to use the same key in multiple contexts, but I think some norms around how keys are stored would help build best practices around them.
It would also potentially make things easier with hardware tokens, if you could just expect to be able to use something like gpg-agent or ssh-agent, or something in between, to work with various tools.
Last I looked, "age" did not have any sort of recovery utility. It isn't even clear that such a utility is possible (the protocol is poorly documented). A single bit error at the start of the file causes the loss of the entire file. So be careful using it for any sort of thing that might ever require the sort of recovery that bzip2 or lzip provides (gzip has a third-party recovery utility).
OpenPGP has excellent recovery properties out of the box BTW...
I want to be sure I'm reading this correctly. Failing to decrypt on a single bit error is the literal job definition of authenticated encryption. Any error you accept is malleability conceded to an attacker. Are you complaining that age isn't malleable enough?
If I run a file through age, and then run that through a Reed-Solomon encoder, I now have a file that can be decoded even with single bit errors. But I think I also still have authenticated encryption. The cost is that my file takes a bit of extra space.
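A concrete sketch of that layering, with par2 as one off-the-shelf FEC tool (file names and the 10% redundancy level are arbitrary choices, not anything age prescribes):

```sh
# Encrypt first, then add parity data over the ciphertext.
age -r age1exampleRecipientString -o backup.tar.age backup.tar
par2 create -r10 backup.tar.age

# Later, if the ciphertext picks up bit errors on disk: repair the
# ciphertext first, then decrypt (which also authenticates it).
par2 repair backup.tar.age.par2
age -d -i key.txt -o backup.tar backup.tar.age
```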
You are not. Of course, the problem is that PGP doesn't get its informal resilience to single-bit errors through error-correcting codes on the ciphertext.
To be clear, OpenPGP does not correct errors. You still end up with corrupted data. It is just that you get back all your good data, no matter where it is in the file.
So you're saying that not only does OpenPGP release unauthenticated plaintext to callers, but it doesn't even implement the feature that takes advantage of it. Good note.
What's crazy about this is that you can get error correction without using insecure 1990s cryptography, simply by forward error correcting your ciphertext. I'm really having a hard time even getting my head around the argument you've managed to devise here.
>Failing to decrypt on a single bit error is the literal job definition of authenticated encryption.
Yes, this is an excellent example of where this behaviour is suboptimal. We should not cargo cult authenticated encryption. It has its place but this isn't it.
> Any error you accept is malleability conceded to an attacker.
Sure, but malleability that has a close to zero chance of being a problem. We are talking about static encryption here. You only get one chance at malleability and the user immediately knows something has gone wrong:
gpg: WARNING: encrypted message has been manipulated!
>Are you complaining that age isn't malleable enough?
Merely pointing out that for the most common use case age is objectively worse than GPG.
I think you are doing a fine job of advocating the position that PGP is the cryptosystem of choice for people who believe authenticated encryption is a cargo cult.
The logic you're applying here about how GPG can warn you if it has employed "recovery properties" to correct "single bit errors" was embraced enthusiastically by the Ruhr team to perform data recovery on other people's PGP-encrypted email messages.
Efail is an excellent example of how the static encryption of OpenPGP used for encrypted email makes malleability irrelevant. Making the Efail attack work against OpenPGP requires knowledge of the first 11 bytes/characters of the unencrypted message. The attacker would only get one guess. The recipient, by necessity, would see the attack message and would immediately know something was going on. As a result, the Efail attack against the malleability of OpenPGP is completely impractical.
Contrast the Efail situation with that of TLS. TLS allows almost unlimited secret trials against its cryptography by an attacker. There have been multiple practical attacks based on such oracles in TLS.
>This isn't even a controversy among cryptographers or cryptography engineers.
I would like to think that there are such people out there who understand that different techniques are applicable to different problems.
You can just read the Efail paper and see how faulty this argument is. Because PGP treats authenticated ciphers as a "cargo cult", the Efail team could plausibly have broken almost 40% of Facebook's PGP-encrypted password reset mails. Further: they were able to abuse MIME encoding to get multiple guesses per email.
At any rate, none of this is something that anyone would accept from any modern cryptosystem, and the fact that PGP has you so backfooted that you'd feel the need to defend PGP's behavior here is a telling indication. "PGP: it's fine, as long as you don't use it to encrypt password reset emails. But for other emails it's fine, as long as the first 11 bytes of the email aren't guessable." Ok. Good note!
Maybe we should just put 128-bit nonces at the tops of all our emails. That just seems like common sense good engineering practice.
>Further: they were able to abuse MIME encoding to get multiple guesses per email.
The paper did not provide any example of an email client where this would work, and I have not yet been able to reproduce it. Since there would be no practical reason for such behaviour, the assertion requires some sort of proof.
I am not suggesting that email clients should entirely depend on OpenPGP email's inherent resistance to oracle attacks, only pointing out that it exists. Presumably the Enigma operators were not putting their decrypted messages in envelopes before sending them off to their enemies. Efail was primarily a straight-up plaintext leak.
Plaintext attacks come in distinct categories. The block ciphers used in OpenPGP are generally considered immune to the sort of known-plaintext attacks used against Enigma.
I think when you're at the point in your argument where the bar you're setting is "immunity to the attacks that broke Enigma", you might as well just concede.
OpenPGP specifies cipher feedback (CFB) mode[1] for block ciphers (normally AES). CFB has the inherent property that it is self-healing in the face of corruption: a damaged ciphertext block garbles only the corresponding plaintext block and the one after it, after which decryption resynchronizes. It does not help in every possible instance of corruption, but it works for the normal sort of corruption seen on mass storage devices (multiples of 512 bytes). A practical example here[2].
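A sketch of the kind of experiment behind that claim (file names are placeholders, and exact gpg behavior on an integrity failure varies by version):

```sh
# Symmetric-encrypt a large file, then flip one byte in the middle
# of the ciphertext to simulate storage corruption.
gpg --symmetric --cipher-algo AES256 -o big.gpg big.txt
printf '\x42' | dd of=big.gpg bs=1 seek=100000 count=1 conv=notrunc

# CFB resynchronizes a couple of cipher blocks after the damage, so
# most of the later plaintext comes back. Recent GnuPG refuses to
# emit plaintext on an MDC failure unless explicitly overridden.
gpg --decrypt --ignore-mdc-error -o recovered.txt big.gpg
```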
I find "age" a bit awkward when it comes to handling private keys. I want to store my private key files encrypted. If it would just accept private key files that have been encrypted with a passphrase, it would be almost perfect.
There are ways to work around this [1], but I'm not happy with any of them.
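For what it's worth, one of those workarounds looks roughly like this (a sketch: file names are placeholders, bash process substitution is assumed, and newer age releases can reportedly read a passphrase-protected identity file directly):

```sh
# Encrypt the identity file itself with a passphrase.
age-keygen | age -p -o key.age

# Decrypt the identity on the fly when decrypting a file, keeping
# the plaintext key off disk.
age -d -i <(age -d key.age) -o file.txt file.txt.age
```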
Which of these do I use to send encrypted messages to darknet vendors based on well-known public keys shared through Reddit etc? This is the number one use case of PGP for me and it doesn't seem to be solved by any of these other tools.
It is maddeningly difficult to simply encrypt a file with interoperable tools. I'm optimistic about age, but it's early days. I understand why people want to use 7z for this. The problem is, if anyone else tries to send you an encrypted zip, and they use something other than 7z, they will likely end up using something even worse than Bass-O-Matic; the zip encryption ecosystem is unbelievably horrible, and it's intractably difficult for ordinary people to know what they're getting. So: long story short, I would tell a client "never, ever use encrypted zips".
I don't understand why people want to replace PGP. Of course we can improve the technology, but the fact is that security is hard and requires interplay between humans, processes, and technology to work. It's not enough to rely on just one of those pillars.
There is an illusion in the world of IT that we can solve everything with technology.
Maybe the reason why people don’t want or like PGP is because it needs strong human processes to work properly and keep its integrity, and that breaks the illusion that you can easily solve everything with tech.
> Maybe the reason why people don’t want or like PGP is because it needs strong human processes to work properly and keep its integrity, and that breaks the illusion that you can easily solve everything with tech.
What you've got here is pretty much the mirror of the argument you've dismissed a paragraph earlier. Now you're desperate to rely on humans instead.
This makes me think about Snowpiercer, for two reasons. One is that Snowpiercer has this ludicrous conceit about replacing unavailable engine components with humans but the other is that we've really been here with the actual railway trains in the nineteenth century.
There was a pattern. One of these new-fangled railway trains crashes, often killing many people; the company directs public blame toward the driver, who is portrayed as incompetent, drunk, or worse, and so fully responsible for the accident. Nothing changes; rinse, repeat. How was this cycle broken?
We did not find some species of super-human train driver; instead we invented technology such as the Absolute Block system, interlocking railway signals, and the Dead Man's Handle. Even apparently trivial technologies like the Driver's Reminder Appliance (it's just a switch!) are still technology.
PGP isn't very good technology. Like one of those early mechanical signals that might seem to indicate "clear" but is actually just weighed down by snow and frozen in place so that it can't indicate "danger", the way forward isn't "we need to rely on super-humans to compensate for the shortcomings of the technology" but "we need a technology that sucks less, so the humans don't need to be super-human to succeed".
What part of PGP precisely? I've read a lot of criticism of PGP, but it was either focused on a specific (catastrophic) implementation such as GnuPG, or really skeptical of usage by non-technical humans.
I know quite a few people doing PGP email with Thunderbird and they're pretty happy with it. It's also very convenient that their GNOME-based Tails operating system has PGP sig verification enabled as context menu entry in the file manager, same for encryption/decryption.
Basically, once you know what public/private keys are, you've got all you need for secure communications. Is that a bad thing? My only HUGE criticism of PGP is with the key servers. It's getting better now with WKD, OpenPGPCA, etc.. I'm really excited about the Sequoia project. From their blog/docs it appears all my criticisms of PGP are being addressed.
When PGP was invented, a bunch of things we are now quite sure how to do either were experimental or hadn't been discovered. In some of those cases you can retro-fit to PGP, so e.g. you can use a nice elliptic curve signing primitive instead of RSA, you can do AES instead of IDEA. So far not too bad.
But then in some cases what we became quite sure about is that PGP's principles/ assumptions are themselves wrong. For example, PGP is pretty sure a message ought to have a digital signature from the sender so you know who it's from. But that's wrong, now you're helping the recipient prove to everybody else what you sent them. That doesn't sound like "pretty good privacy" at all. If instead we do message integrity correctly we can assure the recipient that you wrote it, but since they could have forged that assurance they don't have proof you wrote it which they could show to anybody else. They could tell others what it says, but they could just as well make up any rumours they want.
The worst of these problems is the Web of Trust. The Web of Trust can't work. It might work if everybody you know is a cryptographer and everybody they know is a cryptographer and so on. But it can't work in real life, and often in describing it people make revealing mistakes.
Let me quote somebody else making such a mistake (not on HN) and then I'll reproduce part of my response to their mistake in answer:
"How much do you trust guy #53 of 120 you met at FOSDEM? Do you remember how well you checked his ID?"
I am 100% certain my mother is my mother, but I wouldn't trust her as far as I can throw her. And this is where the WoT breaks down. Your trust metric must reflect your confidence that these people will do their part correctly in the WoT, but even conscientious users often don't understand how to do their part correctly, so realistically almost everybody's "trust" indication for almost everybody should be zero. At which point it's not a "web" it's just a bunch of unconnected points.
It's funny that you could make literally the same argument about C, especially since so much of the PGP ecosystem relies directly and entirely on software built in C.
I think PGP is fine for encrypting some files. But you should take into consideration that when you encrypt a file it's very likely that some (or all) of the plain text is left behind on the block device. Using full disk encryption (LUKS for example) will give additional protection (so one can't read plain text fragments from the device when it's not "open").
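For instance, a file-backed LUKS container can be set up roughly like this (a sketch; sizes and paths are placeholders, and you'll need root):

```sh
# Create a 512 MiB container file and format it as LUKS.
dd if=/dev/zero of=vault.img bs=1M count=512
cryptsetup luksFormat vault.img

# Open it, create a filesystem, and mount it.
cryptsetup open vault.img vault
mkfs.ext4 /dev/mapper/vault
mkdir -p /mnt/vault
mount /dev/mapper/vault /mnt/vault
```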
The other points in this thread recommending modern, context-specific cryptography tools are very valid; however, if you're set on PGP, Sequoia[1] is a new(ish) OpenPGP library written from the ground up in Rust.
Sequoia is fine work, and running less C code is a good thing, so if you have to use PGP (sorry) then use Sequoia. But note that the PGP ecosystem is hopelessly mired in GPG, and the problems go beyond C code --- Sequoia, for instance, is (AFAIK) the first popular implementation of AEAD crypto for GPG, but AEAD doesn't interoperate with GPG.
It's very unfortunate PGP/GPG hasn't kept up with the times. Today it has two significant issues:
1. The needed trust model has changed significantly.
2. It's not reusable enough.
1. The original PGP mostly dealt with direct person-to-person relationships. Alice and Bob needed to communicate safely. Perhaps Bob could vouch for Carol. But that was the intended model: closely related groups, with maybe a person trusted to act as an introducer. Today our needs are different, and we need to communicate securely with people we've never met, or to verify their signatures. Any Linux system contains thousands of packages, worked on by many thousands of developers, any of whom one may need to communicate with securely at some point. E.g., I want to verify the GPG signature on the Tor browser, but I never met anyone on the team, and how do I know who knows the team?
My personal network actually extends very far. I did the FOSDEM key signing party several times, so my theoretical reach is enormous. But it can only be achieved by hacking around GPG's trust model. I need to figure out by hand a path between me and Tor, download the keys, and manually tell GPG I trust each key's signature. This isn't convenient or user friendly, and it's not as safe as it could be.
2. GPG is unfortunately stuck in the "Unix Philosophy" era, where you're supposed to just invoke the binary and parse its text output. I believe this crippled GPG's adoption, because it's slow. GPG has to do the whole startup, reading its key databases and so on every single time. Back when GPG support was introduced into KMail many years ago this added a very noticeable delay to viewing any signed message.
And it's sadly still the case. The world badly needs a GPG library that allows one to skip those startup costs, avoids the whole intended model of ~/.gnupg, and just lets a program do things like interpret in-memory data for any conceivable purpose. This is still badly lacking.
I think you're wrong on point 1. PGP was always very much about communicating with people you don't know. Otherwise you'd just use symmetric encryption. The web of trust is still the best way we have to do that. Newer applications like Signal etc. are what require each pair of correspondents to exchange keys and "complete the graph".
The difference with more "modern" technologies is just that they let you implicitly trust keys more easily. Web browsers just have you trust hundreds of CAs without even making you aware of them. Verifying keys on messaging apps is unknown and not understood by the vast majority of users.
GPG's model, roughly: 1. You trust keys whose holders you've met directly and signed. 2. GPG can also trust keys signed by people you trust.
And that's where it ends, normally.
So say you want to verify the signature on the Tor browser. So have you met the key holder directly and signed their key? #1 failed. Do you know anybody who signed the Tor key? #2 failed. You can't properly verify it.
Actually, GPG allows the chain to extend for longer. You can verify a You -> Alice -> Bob -> Carol chain. The problem is that standard GPG doesn't have any easy way for you to find out about Bob. You know Alice, you can find the content signed by Carol, but you may lack Bob's key.
You can find it out by hand, by getting Carol's key, finding who signed it, and downloading all those keys hoping somebody you know signed one of those.
That's the easy scenario, if you extend this to another step, that is, You -> Alice -> Bob -> Carol -> Dave, it gets even more annoying.
And besides not being user friendly, it turns out that GPG sucks at dealing with large key databases, so mass-downloading keys in hopes of finding a connection tends to noticeably degrade performance.
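Spelled out, the by-hand search looks something like this (keyserver and key IDs are placeholders):

```sh
# Fetch Carol's key and see who signed it.
gpg --keyserver keyserver.ubuntu.com --recv-keys CAROL_KEYID
gpg --list-sigs CAROL_KEYID

# For each unknown signer, fetch their key and repeat, hoping to
# eventually reach a key you have already verified.
gpg --recv-keys BOB_KEYID
gpg --check-sigs BOB_KEYID
```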
>Actually, GPG allows the chain to extend for longer. You can verify a You -> Alice -> Bob -> Carol chain.
But it certainly doesn't mandate it. It's called the web of trust, not the chain of trust. You would never have such a chain of trust in real life so there is no reason to use PGP to create a model that doesn't make sense. Trust is normally quite shallow.
This is a generic rhetorical technique often used against flexible systems. The flexibility is used to create absurd straw men.
Those connections exist on keyservers already; GPG just doesn't have a comfortable way of finding them. For instance, if you want to verify M's key, you have:
A -> B -> G -> M
A -> C -> H -> M
A -> D -> H -> M
A -> E -> I -> M
Using multiple paths should allow building some confidence in M's key.
The issue is that in the current model, A has the keys for B, C, D and E, and also M's key since they're trying to check their signature. But the rest of the web that is present on keyservers isn't easily reachable to the user.
By explicitly supporting this model. Recognize that in the modern usage model you often need to find a trust path to some completely random person you've never met, such as the maintainer of some random software program.
And that you need to support a "good enough" trust model for those cases. What do I mean by that? GPG has two separate things: signatures, and trust levels. When you sign somebody's key, GPG wants to know how confident you are that this person will only sign the keys of people who deserve it. Eg, you can trust that Philip Zimmermann will check fingerprints, but your grandma maybe won't, so her signature isn't worth much if anything.
For the first part, GPG needs a path-finding service. That is, when dealing with a "You -> Alice -> Bob -> Carol" path, you give this service your key ID and Carol's key ID, and the service tells you whether any path at all exists between those keys. This should be an automatic API that GPG itself can use, like it uses a keyserver. This used to exist, in the form of some random website run by a university professor that now seems to be gone. It needs to exist as a proper, official web service.
For the second, in GPG's normal security model, GPG is going to ask you how confident you are in Bob's cryptographic knowledge and signing discipline. And how the heck could you know? You don't know him, he's a friend of Alice's. She should know that. Some way needs to be invented to deal with this, either by having Alice's signature on Bob's key contain a trust level, or by having a mode in which a signature is at least better than nothing, and you can get some confidence rating based on how many paths you can take to reach the destination.
One thing I've noticed is that package signing has a bit of an issue. It lacks metadata about the package.
I'll use Arch Linux as an example, but the same applies to PPAs in Ubuntu:
Say you add a public key of a third party because you want to install a certain package or add a certain repo. The keyring then trusts that key. But it doesn't trust the key for a specific package. It just generally trusts the key.
From a security standpoint your system is not immediately compromised, but the security of your base packages now depends on whether that random third-party key is compromised. I.e., if that key is compromised, you can end up installing backdoored software, not just the one package you wanted to add.
It's not really limited to Linux repos though: imagine you install a driver on Windows and it asks you to trust the signing key. That signing key can now be used to put random malicious software on your Windows system.
Is there any reason why this metadata couldn't be added to something like pgp other than it requiring a lot of changes in the tooling?
My view is that it's because GPG sucks, and has been used for things it doesn't want to be used for.
GPG's usage model is a commandline tool, used by a person to verify a signature on a file or such. It has a keyring in ~/.gnupg, and importing keys imports them into this keyring. And it really insists on that.
This whole idea of having a key that only applies to a specific package or repository was never intended in its design. When something like a package manager uses GPG it calls the commandline tool and gives it a home directory and keyring somewhere.
What the world really needs is a GPG library. But not GPGME, which just calls the GPG binary and parses the output.
What is needed is one that dispenses with any ideas about how the end-user is supposed to work and just provides primitives to parse keys and messages, verify signatures, etc, and lets the user make decisions about whether to have a global list of keys, or per-repo keys, use a proper database like postgres, or even not store anything at all if all you need is to parse something. But alas, such things are still scarce, and the prevailing model is just hacking GPG into sort of doing a job it doesn't want to do.
There's also some weirdness on the package handling side. Apparently the modern way of doing things in Debian is that you don't sign a package, you sign the repository metadata instead. This may be motivated by the fact that GPG sucks and is slow (because you're calling a binary that loads libraries and parses config files and databases every time, etc.), and calling GPG a thousand times during installation would be a significant slowdown.
As anyone who has had to make code interact with GnuPG will attest, its interfaces are not ideal, to put it mildly, and I very much agree. I'm pretty excited, though, about things like the Stateless OpenPGP CLI (SOP), or Sequoia's CLI f.ex., and several of the other tools referenced up-thread to handle package signatures are also CLI, so I don't think that's an inherent problem.
Regarding packages, apt supports pinning specific keys or keyrings to specific repositories (via the signed-by attribute), as does debsig-verify (which can pin keys or keyrings to specific policies).

On Debian, packages get signed by the maintainers (both the source packages, inside the .dsc file, and the entire upload, inside the .changes file), which get uploaded; then the repository software takes over and signs both source and binary packages in the metaindices. This was pretty much designed on purpose, and independently of the GnuPG CLI's speed or design shortcomings. The repository needs to handle key rotation, due to expiration, algorithm renewal, security compromises, maintainers leaving the project (and as such their keys not being trusted anymore), etc.

Embedding the signatures into the source or binary packages would mean that they would change content, which implies massive mirroring costs, simple digest verification oddities, and similar. Adding detached signatures for each individual source and binary package would make the inode count explode. The metadata would still need to be signed no matter what, and doing either of those per-package signings would also make signature updates and repository metadata generation and mirroring extremely painful, as you need to be able to do that atomically. In addition, the repository needs to be signed as a whole, because it's really a snapshot of a known state, and while it should be fine to mix and match various repositories (at the user's request), that should not be the default (at least within a specific repository state).
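For reference, the signed-by pinning mentioned above looks like this in a sources entry (repository URL and keyring path are hypothetical):

```
# /etc/apt/sources.list.d/example.list
deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://repo.example.com/debian stable main
```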
I wonder how much of the rest of the article will be drowned out by the conclusion:
> It's not only personal freedom at stake. It's national security. The reckless deployment of Huawei 5G infrastructure across Europe has created easy opportunities for Chinese SIGINT. End-to-end encryption products are essential for European national security, to counter a hostile SIGINT environment controlled by China. We must push back hard in policy space to preserve the right to end-to-end encryption.
It's true, though, that much work is needed to bring PGP up to the levels expected of modern crypto tools. Hopefully some of that will happen as a result of the work happening in the IETF:
Some rebuttals to that critique argue that the author doesn't fully understand the arguments Thomas Ptacek laid out, and may have a simplified understanding of PGP:
I'm a huge fan of Phil and his work, and plan to send him a note of thanks, but I think your second sentiment is out-of-date now. Even if we avoid other controversies like Thomas Ptacek's views about the inappropriateness of the e-mail encryption threat model, PGP doesn't support forward secrecy and so it's at least not suitable for instant messaging or TLS (as well as not being integrated into their protocols!).
A serious security product could support encryption that's more relevant to whatever it is that that product does.
I've never seen a realistic threat model where Signal-style forward secrecy actually helps. Suppose a repressive regime captures one dissident and can see a bunch of messages between them and other people. Yes, theoretically some crypto nerd might have been able to forge those messages, if the dissident had been carefully publishing the material they're supposed to publish, and the cryptographer decided to run the forgery toolkit on them and then somehow inject the forged messages onto the arrested person's device. Do you think that's going to stop the regime from pulling in all the apparent recipients of those messages?
I'm pretty sure you're thinking of deniability (from OTR) rather than forward secrecy.
The forward secrecy in protocols like Signal allows for things like disappearing messages, which are then actually technically credible (the sender's and recipient's devices literally don't contain any information which would help to reconstruct the contents of their old messages). I think the former isn't really helpful in your scenario, but the latter is.
You're absolutely right. Still, I'd ask the same question about forward secrecy: is there a realistic model where it actually helps? My impression is that people generally keep their chat logs for as long as they're relevant, and so a message can only ever become unrecoverable by an attacker once it's irrelevant, in which case it's likely to also be irrelevant to the attacker. I guess maybe there are cases where someone is picked up and it's found they were also involved in some otherwise successful action years before?
Not all use cases require perfect forward secrecy, so I think PGP can still be useful for some applications (and I tend to enjoy using PGP). I agree however with your point that serious security products do not need to support PGP specifically.
Perfect forward secrecy isn't possible with PGP, since with email as the underlying mechanism there is no direct communication between peers. Email is more of a "fire and forget" mechanism. You can publish your PGP key and get an email from someone you never communicated with before. That can't be done with PFS.
Forward secrecy is not of any real value in most instances of instant messaging as people usually keep their old messages around thus negating it.
Using OpenPGP in the way that TLS is used would negate the advantage of static encryption and would cause the result to be as insecure as TLS. Probably worse, as OpenPGP has not acquired all the band-aids that TLS has ended up with.
> Forward secrecy is not of any real value in most instances of instant messaging as people usually keep their old messages around thus negating it.
Both Signal and WhatsApp have disappearing messages as a feature. Signal now allows users to enable this feature by default for new conversations.
Most people keep their old messages because they're only aware of some remote dragnet surveillance threat in democratic countries. I'm sure the situation is different in countries where the surveillance is more offensive, and perceived so by the population.
Encryption is not of any real value if the threat you're describing is 'someone simply has access to all your plaintext messages'. This isn't a meaningful argument against forward secrecy.
The argument is that an attacker who gets your secret key material also gets your saved messages. If you had a more secure way to protect the saved messages, then you could have used it to protect the secret key material. It is more or less the same problem.