Two things we're particularly proud of at Keybase are (1) that there's no server trust of these proofs, and (2) we pin the entire state of the directory to the bitcoin blockchain, to prevent forking. [1]
In other words, if you ask for Twitter user X's public key, your client can check that proof on twitter itself (rather than trusting that a key server did it for you, like it would with email proofs), and it can trust a hacked/coerced server isn't hiding something specifically from you, such as a revocation. The latter is particularly hard to protect against. It also gets timestamping: you know the world has been seeing the same public key for Twitter user X for many months or years.
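The anti-forking idea can be sketched with a toy merkle tree (this is not Keybase's actual format, just the shape of the technique): hash every directory entry up to a single root, and publish that root somewhere append-only like the bitcoin blockchain. The server then can't show different users different directory states without the published roots diverging publicly.

```python
# Toy sketch of pinning a directory of username -> key-fingerprint
# mappings to a single hash. Entry format and tree details are invented
# for illustration.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold a list of directory entries up to a single root hash."""
    level = [h(leaf) for leaf in leaves]
    if not level:
        return h(b"")
    while len(level) > 1:
        if len(level) % 2:            # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

directory = [b"alice:0xAAAA", b"bob:0xBBBB", b"carol:0xCCCC"]
root = merkle_root(directory)
# Any change to any entry changes the root:
assert merkle_root([b"alice:0xEVIL", b"bob:0xBBBB", b"carol:0xCCCC"]) != root
```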
[update] I gave mey a bunch more invitations. HN users wishing to jump our invitation queue (which is large) can hit me up on Twitter (https://keybase.io/chris).
Cool site! Max Krohn (https://keybase.io/max) and I are meeting with some people working on various PGP projects in Germany in April, and one of the things on our personal agenda is the ideal future of key distribution. We don't really want to be a sole place to look up these keybase-style social media proofs. We also don't think they belong inside the keys themselves.
One complication: looking up a key by email and trusting 3rd party verifications is philosophically pretty different from what Keybase is doing. So we have to figure out how to resolve this. For example, we don't even have an email-based lookup at all (!), because we have no way of letting a client verify it's true. I don't know if we can be convinced to change this. We're looking forward to Yahoo's and GMail's work on their E2E projects because it may help with verifying email addresses publicly.
And to be clear: we're not in the email business, so we want Keybase-style key proofs to be useful to mail clients like Whiteout. We'd like to work well with everyone.
Makes sense. If possible, I'd like to request at least gossiping with servers in the SKS pool. Right now, when someone signs my key and sends it to the pool via gpg --send-key, it doesn't get updated in keybase :(
Second, for public keys that are signed by other fingerprints in keybase, it would be nice to have those listed in my trackers list.
Finally, for people who upload a public key to keybase, it would be great if that would gossip to the pool so I could get it via gpg --recv-key.
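For context, those gossip requests all bottom out in plain HTTP. A rough sketch of the HKP endpoints that `gpg --recv-key` and `gpg --send-key` talk to (server name and fingerprint are placeholders):

```python
# Minimal sketch of the HKP (HTTP Keyserver Protocol) requests that
# gpg --recv-key / --send-key issue under the hood.
from urllib.parse import urlencode

HKP_PORT = 11371  # the conventional HKP port

def hkp_lookup_url(server: str, fingerprint: str) -> str:
    """URL to fetch a key, equivalent to what --recv-key requests."""
    query = urlencode({"op": "get", "options": "mr", "search": "0x" + fingerprint})
    return f"http://{server}:{HKP_PORT}/pks/lookup?{query}"

def hkp_submit_url(server: str) -> str:
    """Endpoint that --send-key POSTs an armored key to (form field 'keytext')."""
    return f"http://{server}:{HKP_PORT}/pks/add"

print(hkp_lookup_url("pool.sks-keyservers.net", "DEADBEEFDEADBEEF"))
```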
Felix and I will be at the event in April as well, so we can chat there.
We think keybase's concept is great and also look forward to what the E2E developers come up with for certificate transparency. Our only concern is that these concepts are not open and backwards compatible with current key server solutions. That would create an island... and we've been sitting on our own small island up until now with our closed key server solution.
Sure, if Google and Yahoo launch their concept, it might exceed any market share HKP ever had. But unless there is an open standard that small guys like us can latch onto, it's going to be hard to get vendors on board.
This is a fascinating concept, especially since I've always felt that my PGP key was sitting out there mostly useless: no one I know personally signs their emails, or even knows someone who does. However, wouldn't the weak part here be if my Twitter (or other public) account were compromised somehow? Or if someone just impersonated me and sent a "unique tweet" pretending to be me?
How does HN feel about that identity aspect? Is it supposed to replace key signing or just augment it?
I'm frankly not sure I've fully gotten my head around the concept, even after reading this page:
This is a good question: generally if all you know about someone is their twitter name, and that's it, then a compromise of their twitter account could lead to a compromise of their key announcement.
Even this case is mitigated because of the timestamping: a compromise would have to be a public compromise on an ongoing basis, and everyone in the world would have to see the same compromise, including the alleged twitter holder. Tracker statements add to the timestamping: when someone runs the Keybase client and tracks them, they sign a statement saying on date X twitter user Y had that public key and they checked it.
The second advantage is that a person is typically a sum of many identities. Consider a known developer who signs code using their Keybase-announced key. (Let's say Jeremy Ashkenas does (does he?) - anyway, he's on Keybase as https://keybase.io/jashkenas). His public key is announced on his Github and Twitter accounts, both of which are known. He signs those statements with the matching private key. If you ask Keybase for his key, you get several things: Keybase telling you what it is, Twitter agreeing, Github agreeing, and bitcoin telling you that everyone in the world has gotten the same answer for those accounts for the last few months. And CLI tracker statements timestamping it, too.
So it becomes its own "web of trust" in a sense? Every time there is activity, it's another point for "yeah, this person is who they say they are, look at all the stuff they've verified"? That's an interesting concept, at least for casual communication (the kind I might want to keep out of the ears of Google or Twitter, probably not the kind I'd want to keep out of the ears of the government). Is there any plan to incorporate traditional keyservers into the "web" as a component in the trust model?
Additionally, doesn't the user signing the statement with his or her private key mean that you need to have his key to really believe it? I notice the website states that encryption/decryption is done with client-encrypted keys. If it's client-encrypted, presumably the signing happens at the client and an announcement is sent to Keybase stating as much. How can you trust that the data you're getting back is trustworthy and not also compromised?
Forgive me if I'm being boneheaded here, I'm just trying to grasp this so I can say, "Yes, that makes enough sense."
In my experience, key distribution is the easiest thing about PGP / GPG. Enigmail and most other clients can already query key servers easily. Enigmail routinely asks me to "download missing keys", and if my recipient's key is on a keyserver, it downloads it. In fact, the PGP Global Directory (https://keyserver.pgp.com/vkd/GetWelcomeScreen.event) already seems to have all the features that are missing from the Whiteout server (SSL, scalability, e-mail validation, etc.).
The biggest adoption problem around key management that I see is getting people to generate them securely, then integrating the keys across multiple devices and multiple clients. Maybe this whiteout mail app thing is supposed to be the part of this that makes key management "invisible", but I don't see why they can't use the existing key distribution infrastructure.
The problem is if you look at the key server and find there are two keys - one legitimate, one posted by an adversary (who has read access to the recipient's e-mail) - both have a few signatures, but the signatories are several degrees away from you in the web of trust.
Someone made a fake key for each of the participants in a particular keysigning party in October 2013 (including me) and uploaded the fake keys to keyservers. Because the creation date of the fake key for me is newer than the creation date of my real key, apparently Enigmail is suggesting it to people and they're choosing to use it, despite the lack of signatures.
More than a dozen different people have now sent me encrypted mail that I couldn't read because they selected the fake key on the keyservers instead of my real key.
That makes me think that the web of trust model isn't working out very well under active attack -- at least, over a dozen people failed to actually use the web of trust to avoid falling victim to this attack.
There are also seven keys for Erinn Clark (who signs Tor Browser Bundle releases for the Tor Project) on the keyservers; if I remember correctly, three are real and four are fake.
This is important: PGP is also used for signing software, and in that world this kind of compromise can be even more dangerous. The Tor example is a good one. Also, check out how many keys are on MIT's PGP keyserver for Gavin Andresen of the Bitcoin Foundation.
Back in 2013 I downloaded bitcoind and spent like an hour just trying to figure out if it was legit or not.
I'm sorry my homepage is out of date; thanks for the reminder. It seems like it's been a decade or so since I updated it.
I normally check signatures when downloading a new key, particularly as a way of distinguishing between multiple keys available on a keyserver. But I don't have a way to force other people who are writing to me to do that, and apparently at least the Enigmail users often don't.
Edit: Erinn is a more cautious PGP user than I am (with an extraordinarily important key!), but I expect she also has no way of forcing people to check that they have the right key when e-mailing her.
Including your key signature everywhere you post your email address (homepage, business card, email signature, etc.) is a good practice. It's not perfect, but it's better than teaching users to go straight to a keyserver.
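For anyone printing fingerprints on business cards: the fingerprint is derived deterministically from the key material itself, so it pins the key exactly. A sketch of the V4 derivation from RFC 4880 (the packet body below is dummy bytes, just to show the mechanics):

```python
# A V4 OpenPGP fingerprint (RFC 4880 section 12.2) is the SHA-1 of the
# octet 0x99, a two-octet big-endian packet length, and the public-key
# packet body.
import hashlib

def v4_fingerprint(key_packet_body: bytes) -> str:
    prefix = b"\x99" + len(key_packet_body).to_bytes(2, "big")
    return hashlib.sha1(prefix + key_packet_body).hexdigest().upper()

def pretty(fp: str) -> str:
    """Group into the familiar blocks of four hex digits."""
    return " ".join(fp[i:i + 4] for i in range(0, len(fp), 4))

fp = v4_fingerprint(b"\x04" + b"\x00" * 50)   # dummy packet body
print(pretty(fp))   # 40 hex digits -> ten groups of four
```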
I check up on you from time to time. Virtunova's been offline for ages as well.
I'm kicking around ways of making email more reliable; one option that occurs to me is key negotiation at transmit time, or as part of the delivery process. That is, a user's home mailserver would be key-aware. Though that too is subject to skulduggery.
The point of Web of Trust is to only trust keys that other people you know have also signed. Everything else is garbage until proven otherwise.
Key servers are untrustworthy because anyone can upload random shit to them.
Trying to shift the WoT to a third party is trying to get something for free, and it sidesteps the real problem: getting everyone you know to sign keys only for other people they actually know.
Agree with both you and GP. According to this blog post, they're not solving the trust issue, they're just working around the issue of a webmail app not being able to query keyservers:
> We couldn’t make cross origin requests to HKP servers from the web version of our app.
It's not entirely clear whether the REST proxy for HKP they've created is open source -- if not, it's almost useless as far as I can see. If I were to trust my secret key to the browser or a browser "app", I'd have to download the app from a web server I trust and control (i.e., my own server). And I'd certainly not want to load anything from any other server. I might trust a sanitizing REST proxy under my control to go and fetch GPG keys.
Either way, while trust-on-first-use is certainly pragmatic -- if that's all you're going to do, then there's nothing wrong with most email clients that have GPG support; most will do that gladly. One might argue that in addition to downloading keys, they should be signed as "marginally trusted" on first use (or maybe there should be another trust level in GPG/PGP, "TOFU", to avoid anyone else mistakenly trusting the key).
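A TOFU pin store of this kind can be sketched in a few lines (the names are illustrative, not taken from any real client): the first key seen for an address gets pinned, and a later mismatch is surfaced loudly instead of being silently accepted.

```python
# Minimal trust-on-first-use sketch: pin the first fingerprint seen for
# an address; warn on any later mismatch (a possible fake key).
class TofuStore:
    def __init__(self):
        self._pinned = {}   # email -> fingerprint

    def check(self, email: str, fingerprint: str) -> str:
        seen = self._pinned.get(email)
        if seen is None:
            self._pinned[email] = fingerprint
            return "pinned"        # first use: remember, mark marginal
        if seen == fingerprint:
            return "ok"            # matches what we pinned earlier
        return "MISMATCH"          # do not encrypt silently; warn the user

store = TofuStore()
assert store.check("alice@example.com", "AAAA") == "pinned"
assert store.check("alice@example.com", "AAAA") == "ok"
assert store.check("alice@example.com", "EVIL") == "MISMATCH"
```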
I think semi-structured "CAs" for GPG would be a much better solution: allow banks/post offices/the DMV to sign and upload a GPG key -- demanding that such keys have an attached photo, and that the institution verified the ID/name at the time of signing (with the same level of scrutiny usually demanded by institutions that issue valid IDs, like passports).
If I could go to the bank, get a copy of their key, and sign it (and have them sign mine) -- and so have a good pathway to "everyone's" key -- that'd be great. And unlike with browser CAs, I could choose whom to trust to delegate IDs. As I've mentioned elsewhere, I think cacert.org should also sign GPG keys (just as a convenience -- you could of course sign your GPG key with your CA cert, but that would require manual intervention to verify the chain of trust, unlike a signature by a trusted GPG key).
As for the "Snowden" use-case -- TOFU is fine. Someone sends you signed and encrypted data, you can assume you're talking to whoever has the key. Maybe you can't be sure it's not the FBI/CIA/NSA/GCHQ setting you up -- but does it really matter? You've already started a dialogue, and that's probably enough to put you away for life...
They are using the existing key distribution infrastructure -- the whole point is that they'll go fetch keys for you from a selection of HKP servers if they don't have one on theirs with a verified e-mail address.
Full disclosure - this is my employer and a product I work on.
For enterprises, and folks outside the enterprise communicating with them, commercial products exist which effectively allow any sender to use any email address as if it were a public key, even if the recipient hasn't set up any sort of encryption yet.
While compatible with PGP, this is not PGP encryption, but I figured this may be of interest to readers in this thread.
White papers can be found at the bottom of this product info page:
I think the "99% of the time it's not an issue" argument is invalid. Encryption isn't really necessary for 99% of people at any given time anyway. It's just that you don't know when you're in the 99 percent and when you're in the 1 percent, and a mistake in that 1 percent is critical; that's why we try to do it 100% of the time.
Well, and there's the availability concern -- if it's a critical or time-sensitive message, sending it to the right address with the wrong encryption key is the same as the message getting dropped in transit, which is hazardous.
Heck, even with a verified key, I've seen it at least once where a critical e-mail came in to a shared key but nobody was around who could decrypt it. When PGP is used only in special circumstances, the e-mail verification almost needs to be refreshed periodically to make sure the owner continues to have control of the account and the key.
If you want to avoid being in a dragnet, just requiring TLS using policies on your SMTP server is enough. That's what many organizations do already.
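As a hedged example, in Postfix that policy looks roughly like this (the parameter names are real Postfix ones; the domains are placeholders):

```
# main.cf -- opportunistic TLS by default, mandatory for chosen peers
smtp_tls_security_level = may
smtp_tls_policy_maps = hash:/etc/postfix/tls_policy

# /etc/postfix/tls_policy -- refuse to deliver in the clear to these domains
partner.example        encrypt
otherpartner.example   encrypt
```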
If you just want "encryption" as a checkbox, then just use Gmail with everyone and your data isn't going anywhere outside Google anyway.
I'm wondering who the users are that have threat models that make them concerned about attackers able to compromise, say, Google, but not capable enough of getting fake keys out.
This article in particular says fake keys aren't important because the attacker needs access to the mailbox. Well if they don't have access, you don't need encryption in the first place!
Sure, there's a marginal improvement (ignoring all the downsides of encryption, like forgetting your passphrase means losing all data) but it doesn't seem to be useful for actual targeted users.
End to end encryption is safe, mail server to mail server leaves your mail unencrypted on an unknown number of servers. If those are in the US or a similarly crazy country, they are just one letter away from being read.
No, I'm suggesting that the tradeoffs of PGP are unacceptable for most users, and that using TLS between mail servers (even on private fiber) would be a good way to stop full-on collection.
The NSA story is a nice addition, but tales of the FBI's Carnivore system reading all email by connecting at big interexchanges are decades old, yet no one is taking even that seriously.
> Even with automatic key lookup, users can later always navigate to the contacts menu and verify a recipient’s key fingerprint if they need to.
Okay, how about making this (optional) step way easier too. Show the fingerprint somewhere on-screen every time you send a message (maybe flagged for extra notice the first time the key is imported?).
Maybe with a little (i) icon next to the fingerprint, clicking on it explains what it is, and how for top confidence the fingerprint should be checked with the owner directly.
A good UI makes it easy for the unsophisticated, but provides cues to give the user an easy and gradual path to being more sophisticated too (and makes being more sophisticated as easy as possible too).
Rendering the fingerprint as an image (à la GitHub's default portraits) would help; we can spot differences in images more easily than in strings of numbers.
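A sketch of that idea: hash the fingerprint and use the bits to fill a small mirrored grid, GitHub-identicon style. The exact layout here is made up; only the principle matters -- any change to the fingerprint reshuffles the pattern.

```python
# Derive a 5x5 identicon-style grid from a fingerprint. The grid is
# mirrored left-right so the result looks like a face-ish glyph, which
# humans compare far faster than 40 hex digits.
import hashlib

def identicon_grid(fingerprint: str, size: int = 5):
    digest = hashlib.sha256(fingerprint.encode()).digest()
    half = (size + 1) // 2
    grid = []
    for row in range(size):
        cells = [digest[(row * half + col) % len(digest)] % 2 == 0
                 for col in range(half)]
        grid.append(cells + cells[-2::-1])   # mirror for symmetry
    return grid

def render(grid) -> str:
    return "\n".join("".join("##" if c else "  " for c in row) for row in grid)

print(render(identicon_grid("ABCD EF01 2345 6789")))
```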
1. From reading their post, it seems like they made their own REST API for the key server. So it would be nice if they open sourced the service behind it, to alleviate fears of lock in.
TXT is much more available in DNS management interfaces than SRV, so there's a higher chance that you can install such a server on those shared hosting instances.
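For illustration, here's roughly what the two record styles could look like in a zone file. I believe GnuPG's keyserver SRV lookup uses `_hkp._tcp` (from the HKP draft); the TXT form is an invented fallback of the kind a hosted-DNS panel without SRV support could still carry:

```
; SRV: the form keyserver-aware clients can discover automatically
_hkp._tcp.example.com.  IN SRV 0 1 11371 keys.example.com.

; TXT: invented fallback for DNS panels that can't create SRV records
keys.example.com.       IN TXT "hkp=http://keys.example.com:11371"
```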
Some time ago I implemented "invisible" keys for http://privacyapp.io on iOS, with support for HKP and the keybase.io API. It works quite well. The only problem is that keyservers in the SKS pool run different software versions, and responses can be randomly broken for the very same request.
It also needs to be baked into contact management apps so that exchanging your credentials becomes no different than sending your contact card via Bluetooth, NFC etc. I suspect that this would require something a bit more robust than vCard to package everything up in a nice bundle.
vCard (to my knowledge) doesn't have any way of validating the integrity of the card data, so it can be altered without detection. You would at least need some kind of wrapper so that you could add a signed checksum.
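A sketch of such a wrapper (the format is invented for illustration; a real implementation would replace the bare digest with a signature made by the sender's private key, since a digest alone only catches accidental corruption, not a deliberate swap):

```python
# Wrap vCard bytes together with a detached digest, and refuse to
# unwrap if the card no longer matches it.
import base64
import hashlib
import json

def wrap_vcard(vcard: bytes) -> str:
    digest = hashlib.sha256(vcard).hexdigest()
    return json.dumps({
        "vcard": base64.b64encode(vcard).decode(),
        "sha256": digest,   # in practice: a signature over this digest
    })

def unwrap_vcard(bundle: str) -> bytes:
    obj = json.loads(bundle)
    vcard = base64.b64decode(obj["vcard"])
    if hashlib.sha256(vcard).hexdigest() != obj["sha256"]:
        raise ValueError("vCard was altered in transit")
    return vcard

card = b"BEGIN:VCARD\nVERSION:4.0\nFN:Alice Example\nEND:VCARD\n"
assert unwrap_vcard(wrap_vcard(card)) == card
```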
In order to verify the signature, you'd have to get their public key. Since that's where this all started, we're just in a loop of untrustworthy ways to get someone's public key.
The key discovery part seems unobjectionable, and vastly cleaner than what keybase.io does.
Looking at their design for key sync[0], though, maybe I'm just dense, but I swear I read the article, and I still can't tell -- what's the advantage of this complicated thing with a symmetric key over just protecting the private key with a strong passphrase and sticking it in Dropbox?
We're currently in the process of simplifying the key sync spec. The new version will store your private key, encrypted with a strong random passphrase, in IMAP. So it's similar to your Dropbox proposal, but with a UX that leads users along the way.
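A sketch of that scheme, under assumptions of my own (passphrase format, iteration count): generate a strong random passphrase for the user, derive an encryption key from it, and encrypt the private key blob before uploading it. Python's stdlib has no AES, so the final encryption step is left as a comment.

```python
# Generate a high-entropy passphrase, derive a symmetric key from it
# with PBKDF2, then (via a crypto library, not shown) encrypt the
# armored private key under that key before storing it in IMAP.
import hashlib
import secrets

def generate_passphrase() -> str:
    # ~104 bits of entropy, grouped for easier transcription across devices
    raw = secrets.token_hex(13)
    return "-".join(raw[i:i + 6] for i in range(0, len(raw), 6))

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # 100k iterations is an assumed, ballpark work factor
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

salt = secrets.token_bytes(16)
key = derive_key(generate_passphrase(), salt)
# ...then encrypt the private key with `key` (e.g. AES-256-GCM via a
# crypto library) and upload the ciphertext blob to the IMAP folder.
assert len(key) == 32
```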
The real goal here is not to provide state-level security; it is to increase the cost of mass surveillance by deploying the easy parts of PGP everywhere. The people who depend heavily on security will run through the whole procedure anyway; they won't just rely on Whiteout. "Johnny", on the other end of the spectrum, will see the privacy of his communication greatly improved against passive attacks.
Much like HTTPS Everywhere, it is not enough to guarantee that your messages will be secure against a sufficiently-funded and determined attacker. It puts some power back to the people to enjoy better privacy of communication.
When the attitude is to dismiss every "corner case" that arises, suddenly you find yourself not doing any security work. After all, every attack is a corner case. 99% of users will never be attacked. Why spend all this energy on things most users will never need?
The answer is to increase security by having widely adopted imperfect security rather than supposedly perfect but also imperfect[0] security with marginal adoption.
Having a secure initial message exchange is difficult and will always be difficult. Traditional GPG UIs are so obsessed with trying to solve that problem that they ignore the 99% case, in which something like trust-on-first-use is perfectly fine. The fact is, on average, users will be much safer with a TOFU scheme without any key servers or web of trust at all, as long as that scheme is sufficiently unobtrusive so that it actually manages to get adopted.
The sad truth is that I actually use a mail client with GPG support, but I basically never exchange encrypted email for two reasons: It's not automatic, and the vast majority of users don't have GPG support installed anyway - and I'm not going to push others towards using encrypted email as long as it's harder than "install this plugin and you're done". I have other causes to spend mental energy on.
[0] See other threads about problems with fake keys in the web of trust.
Yes, that is probably the most important part of making a crypto solution actually work in practice. You can't solve all of the problems at once. Trying to solve everything has been shown to have a predictable outcome: nobody uses it and we're stuck in plaintext.
By deploying a partial solution that doesn't attempt to fix all of the problems, we can at least defend against some attacks. More importantly, a partial solution is educational. It introduces the idea of managing keys and wrapping your communications in crypto, which is a new concept to a lot of people. You might consider it "training wheels" for crypto.
Later on - when more people are used to these ideas - it will be a lot easier to upgrade to other methods that defend against the rest of the attack classes. Also, there is a bonus benefit: it is probably a lot easier to deploy a proper PGP/GPG web of trust when you have an existing infrastructure of keys that could be used as another layer of verification.
They're looking at key security features that mitigate attacks and saying "That's a corner case, ignore it".
From a UX perspective, that's great. It lets you simplify and remove a lot of complexity. The catch is that every attack scenario is a corner case. As a result, users get exposed to many of the same vulnerabilities that the technology is supposed to be enabling them to guard against.
There's nothing novel - or, I submit, interesting - about the idea of trading off security to make things slicker for the user.
The biggest problem in widespread adoption of PGP/GPG is client support.
Case in point: I've been looking into the sizes of various online communities, and decided to look for hard statistics on the actual traffic levels of Usenet back in the day. So I looked up the old admins. One of them is Eugene "Spaf" Spafford, also known as something of a security expert -- he wrote the book on it: Practical Unix & Internet Security (PUIS). I've a copy on my shelf.
There's a limit to how many keys keyservers will return, but 10-20 seems a generally safe bet.
Wrote my brief email using mutt (which was designed as a reference case for MIME-encoded PGP email), and awaited a response.
I no longer use PGP. Please send me readable text.
Yes, from Spaf.
So I sent him the unencrypted text, he found my Usenet questions interesting. But I was curious about his lack of use of encryption.
He responded (and gave me permission to quote him):
I don’t have a convenient PGP implementation for my Mac. I used
to use the commercial version, but it requires Java and that’s a
huge risk. GPG implementations require lots of software I don’t
entirely trust, plus there is no nice interface to my email.
So yeah. The guy who wrote the book on Unix and Internet security, stymied by lack of a decent client.
Until mainstream vendors are integrating this, we're going to be stuck.
There are other issues.
If I've got multiple devices, I may well wish to use different keys on each, which means I've got a key-management issue of getting people to use the right key(s) on messages. There's the problem of key loss (you lose access to everything encrypted with it). There's the challenge of inputting long passphrases on mobile devices (I'd far prefer an OTP/keyfob-type solution or other form of semi-physical security, plus a shorter passphrase). There are device-based backdoors and other exploits. Etc., etc.
I've used PGP/GPG for nearly two decades. It remains something of a pain to deal with....
Does keys.whiteout.io gossip with other HKP keyservers? If so, is there any documentation on the gossip protocol? I can't seem to find any, and the SKS keyserver is the only implementation of the gossip protocol that I can find.
Or just use S/MIME, which is baked into most email clients? Of course, handing out keys is an issue, but in the world of social media, why not just attach one's key to one's public profile? Facebook, LinkedIn, etc. Not sure why S/MIME doesn't get any love. It works, it's the most common email encryption scheme, and typically you don't need a third-party application or command-line-fu to get it working. I've seen the dimmest of office workers deal with it every day.
A couple of the dimmest of office workers decided to use S/MIME - just because.
We're using Office 365. I prefer OWA to any fat client. OWA cannot (with various failure modes, but let's just stick with "cannot") show S/MIME mails.
A coworker sends mails that I cannot read in a web interface to the same mail store that he accessed to submit that amazing piece of art in the first place.
Edit: I also have invites available if you are interested in checking it out. Contact me via my HN profile