"All we can do is tell people that NIST are the ones in the room
making the decisions, but if you don't believe us, there's no way
you could verify that without being inside NIST," says Moody.
There's our problem - right there!
If a body as important as NIST is not so utterly transparent that any
random interested person can comb through every meeting, memo, and
coffee-break conversation, then it needs disbanding and replacing with
something that properly serves the public.
We have bastardised technology to create a world of panopticonic
surveillance, and then misused it by scrutinising the private lives of
simple citizens.
This is arse-backwards. If monitoring and auditing technology has any
legitimate use, the only people who should morally (and willingly)
give up some of their privacy are those who serve in public: in our
parliaments, councils, congresses, government agencies and standards
bodies.
> All we can do is tell people
No. You can prove it, and if you cannot, step aside for leadership
better suited to serving the public interest.
I'm now picturing a slightly different government to ours, where the oversight bodies have the additional function of making sure officials don't talk to one another outside of recorded meetings under the public's eye.
It seems like a huge burden. It is the kind of thing, though, that in a parallel universe would make total sense: our representatives should be beholden to us.
> I'm unconvinced that private, smoke-filled backrooms don't have an
essential place as the grease that keeps things running well.
I hear that, and there's a case for it. Diplomacy, maneuvering and negotiation
require secrets and enclaves.
So to allow for that you need a few things:
- Strict official records of affairs
- Strong penalties for fraud, malinfluence, intimidation
- Whistleblower protection
The last of these essential checks-and-balances has gone to shit in our
culture. Even if we pardoned Edward Snowden and made him a "hero of
democracy" tomorrow, it's still a mountain of work to restore the
essential sense of civic responsibility, patriotism and duty that
allows those people who discover or witness corruption to step-up and
challenge it safe in the knowledge that the law and common morality
are on their side.
Billions (the TV show), of all places, made a somewhat similar argument in favor of post-hoc investigations in the last episode.
Record and share everything immediately = no room for deals
Record nothing = too much room for corruption
So we land at... record everything + only review with just cause + strict whistleblower protections.
Which seems a nice splitting of the matter, but requires a strong, independent third party (e.g. judicial branch) to arbitrate access requests. With tremendous pressure and incentives to breach that limit.
Many governments do have laws like this, called "sunshine laws". Enforcing them can be difficult though, and often enough they fail to achieve the transparency that is their goal while also substantially hindering process.
Modern panopticon level surveillance is not deployed through weakening encryption. It's just built right into platforms and apps and people willfully install it because it gives them free services and addictive social media feeds.
You don't need to weaken encryption to spy on people. You just have to give them a dancing bunny and to see the dancing bunny they must say yes to "allow access to contacts" and "allow access to camera" and "allow access to microphone" and "allow access to documents" and ...
For the higher-brow version replace dancing bunny with free service.
In addition the more we adopt and make use of cloud command and control architectures the more surveilled we become, because it becomes trivial for anyone with access to the cloud provider's internals to tap everyone's behavior. This could be done with or without the knowledge of the provider itself. The more such services we use the more data points we are surrendering, and these can be aggregated to provide quite a lot of information about us in near-real-time.
> In addition the more we adopt and make use of cloud command and
control architectures the more surveilled we become, because it
becomes trivial for anyone with access to the cloud provider's
internals to tap everyone's behavior.
Every US government office - at both the state and federal levels - has record-keeping requirements. It's the reason we can submit FOIA requests that come back with data from the 30s.
Adding to this, FOIA doesn't guarantee useful answers. Having gone through the process whilst in the military, I found a few loopholes. They can give a bloated answer, with boxes and boxes of useless, mostly unrelated information. They did this to me. They can also decline to answer and just say it's classified. They can also just decline to answer through stalling tactics. It served my purpose, which was to stall them from collecting DNA.
I'd say the haphazard destruction of Blackberries and iPads with hammers is pretty explicit evidence of how bad State Dept IT policy and execution was.
Maybe I've worked in large corporations too much, but my first question when I see policy violations is not "How is this person conspiring?" but rather "What made following the official policy difficult? And how can we fix that?"
Also, the State Dept seems like exactly the sort of place policy violations become culturally routine: non-technical experts, doing work that is arguably "more important", with IT seen as a cost rather than profit center.
Or, she didn't even think about records retention or device security, in the same way that most politicians/executives don't, assuming it was handled by someone.
And it was in fact not, because there was no functioning policy enforcement at State.
How is that in any way like working in an open kitchen?
Anyway, interesting take that people working for the public should have no right to privacy in any way. Not sure you will find many people for such work, though.
You don't have privacy in what you do at work when it comes to your employer. That's why it's called privacy, it relates to private and personal things. Your work for someone else is, by definition, not private.
You don't get to push stuff into production and play coy about what you're pushing, do you? Why would that change when the employer is the public?
Putting aside finer points on privacy at work (depending on jurisdiction not so black and white), recording and making publicly available all work related communication goes way beyond usual surveillance at work places even in regulated communication settings. I have at least never worked in any place that would insist on recording any work related conversations with a co-worker during the morning commute (and to have that publicly available).
As it would be based on distrust, you'd have to record everything everywhere to prevent people from evading it.
Even innocently, if, for example, two employees meet at home for dinner with their respective families and there is talk about work - needs to be recorded in that view. Or people car sharing on the way to work - recorded. Any work related communication is very very broad.
If there's something wrong with the work they do in the kitchen, it doesn't matter how that came to be. If there is nothing wrong with it, it doesn't matter what else (other than something that would make them do something they shouldn't in the kitchen) they talk about outside of the kitchen.
The solution can't possibly be not even knowing the difference between someone putting salt or toenails into a pan, but instead ensuring cooks don't talk to each other, so that even though we have no clue what they're doing or should be doing, we magically know nothing bad is going on.
How do we know the cooks aren't holding out on a better recipe for Alfredo sauce that uses more salt? The recipe they're using seems adequate, but there's a better formula they could be using by adding salt, except they've all gotten together outside of work to conspire against us and say there's nothing wrong with the amount of salt they're using. Who knows who's paying them to say this? We need to follow them everywhere, see who they see, read what they write, audit their bank records. We need proof.
I think everyone here is pretty clear how they would ethically view such a thing, but view it from NIST's (/ NSA's) perspective for the sake of argument. Maybe there's a specific threat where NIST (or presumably the NSA) believes it has a mandate to insert a backdoor.
In order to successfully do this, NIST needs to maintain a very large bank of social capital and industry trust that it can spend on very narrow issues.
But over the years there have been enough strange things (Dual EC DRBG being the most notorious) that that trust, at least when it comes to crypto design, simply isn't there. My perception is that newer ECC standards promoted by NIST have been trusted substantially less than AES was when it was released, and I can think of a number of major issues over the years that would lead to this distrust.
The inevitable outcome is that NIST loses much of its influence on the industry, which certainly is not in its own interest.
Everyone also discounts the other reason NIST (with NSA behind the scenes) might be shifty -- they know of a mathematical or computational exploit class that no one else does.
And therefore want to do things-which-seem-pointless-to-everyone-else to an algorithm to guard against it.
Without disclosing what "it" is.
Everyone's quick to jump to the "NSA is weakening algorithms" explanation, but there's both historical and practical precedent for the strengthening alternative.
After all, if the US government and military use a NIST-standardized algorithm too... how is using one with known flaws good for the NSA? They have a dual mission.
>I think everyone here is pretty clear how they would ethically view such a thing, but view it from NIST's (/ NSA's) perspective for the sake of argument. Maybe there's a specific threat where NIST (or presumably the NSA) believes it has a mandate to insert a backdoor.
That's an incredibly charitable version of their point of view. How's this for their POV: They're angry that they can't see every single piece of communications, and they think they can get away with weakening encryption because nobody can stop them legally (because the proof is classified), and nobody's going to stop them by any other avenue either.
> view it from NIST's (/ NSA's) perspective for the sake of
argument. Maybe there's a specific threat where NIST (or presumably
the NSA) believes it has a mandate to insert a backdoor.
Without any /sarcasm tags I have to take that on face value, and
frankly there are few words to fully describe what a colossally stupid
idea (not your idea, I am sure) that is. Belief in containable
backdoors is the height of naivety and recklessly playing fast and
loose with everyone's personal security, our entire economy and
national security.
That is to say, even taking Hollywood Terror Plots into consideration
[0], I don't believe there is ever a "mandate to insert a backdoor".
> In order to successfully do this, NIST needs to maintain a very
large bank of social capital and industry trust that it can spend on
very narrow issues.
Having some "trust to burn" is great for lone operatives, undercover
mercs, double agents and crooks that John le Carre described as
fugitives living by the seat of expedient alliances and fast
goodbyes. Fine if you can disappear tomorrow, reinvent yourself and
pop up somewhere else anew.
But absolutely no use for institutions holding on to any hope for
permanence and the power that brings.
> The inevitable outcome is that NIST loses much of its influence on
the industry, which certainly is not in its own interest.
Exactly this. And corrosion of institutional trust is a massive
loss. Not for NIST or a bunch of corrupt academics who'd stop getting
brown envelopes to stuff their pockets, but for the entire world.
But since you obliquely raise an interesting question... what is
NIST's "interest" here?
Surely we're not saying that by spending trust "on very narrow issues"
its ultimate ploy is to deceive, defect and double-cross everything
the public believe it was created to protect? [1]
I'm all for the game, subterfuge and craft, but sometimes you just
bump up against the brute reality of principles and this is one of
those cases. Backdoors always cost you more than you ever thought
you'd save, and I've always assumed the people at a place like NIST
are smart enough to know that.
> Belief in containable backdoors is the height of naivety
What if it is acceptable for potential enemies to (eventually) also have access to that backdoor, and your goal in providing the backdoor is just to give the masses a false belief that they can communicate secretly?
Obviously those in the know would not use the flawed system, but instead would have a similar/better one without the intentional flaws.
The fact that NIST is not transparent is enough to assume that anything related to cryptography that NIST touches is compromised.
Frankly, I would assume any modern encryption is compromised by default - the gamble is just in who compromised it and how likely it would be that they want access to your data.
NIST standardized AES and SHA3, two designs nobody believes are compromised. The reason people trust AES and SHA3 is that they're the products of academic competitions that NIST refereed, rather than designs that NSA produced, as was the case with earlier standards. CRYSTALS-Kyber is, like AES and SHA3, the product of an academic competition that NIST simply refereed.
A competition is the perfect way to subvert a standard. A competition looks 'open', but in fact you can 'collaborate' with any team to make your weakened encryption and then persuade the judging panel to rate it highly.
But the competition process looks weird to me. Apparently it's not like a sports fixture, where the rules are set before the competition and the referee just enforces the rules; this referee adjusts the rules while the competition is underway.
NIST has form for juking the standards, or at least for letting the NSA juke them. If they're not completely transparent, then any standard they recommend is open to question, which isn't good for a standard.
The original argument is not "they created encryption that isn't broken before". The argument is "encryption created by competitions that are only refereed by NIST is trustworthy"
Slow? Fastest in HW, and comparable performance in SW. Moreover if you take into account security hardening, SHA3 is easier to protect than alternatives.
SHA-2 has more reliable HW acceleration from what I’ve seen.
SHA-1 in SW, according to smhasher, is 350 MiB/s, as is MD5, so never use MD5 since SHA-1 is always stronger (SHA-2 is supposedly 150 MiB/s). HW-accelerated SHA-1 and SHA-2 are both ~1.5 GiB/s, and on x86 HW acceleration is always available.
Blake3 is the most interesting because it’s competitive with SHA2 even without HW acceleration. I wonder how it would fare with HW acceleration.
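If you want to sanity-check those numbers on your own machine, here's a rough, minimal benchmark sketch (Python stdlib only; BLAKE3 isn't in hashlib, so it would need the third-party blake3 package, and absolute figures will vary by CPU and OpenSSL build):

    import hashlib, time

    data = bytes(64 * 1024 * 1024)  # 64 MiB of zeroes is enough for a rough throughput figure

    for name in ["md5", "sha1", "sha256", "sha3_256", "blake2b"]:
        h = hashlib.new(name)
        start = time.perf_counter()
        h.update(data)
        h.digest()
        elapsed = time.perf_counter() - start
        print(f"{name:>8}: {len(data) / elapsed / 2**20:8.1f} MiB/s")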
First, each Blake version was written by totally different authors AFAICT, so while each version is faster, making a faster construction was the work of a totally different team. I don't see why you're putting so much confidence in the idea that the original SHA-2 team could come up with a faster hash function.
FWIW, SHA-3 is slower than SHA-2 although of course SHA-3 is a totally different construction from SHA-2 by design.
As a non-cryptographer, this whole conversation chain has me confused. I thought it was desirable for a good hashing algorithm to be slow, to make brute force difficult.
Yeah this is a super common point of confusion. You need to worry about slowing down a brute force attacker when you're trying to protect a low-entropy secret, which basically always means user passwords. But when your secrets are big enough, like 128 bits, brute force becomes impossible just by virtue of the size of the search space. So for most cryptographic applications, it's the size of your key (and the fact that your hash or cipher doesn't _leak_ the key) that's protecting you from brute force, not the amount of time a single attempt takes.
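To put rough numbers on that (a back-of-the-envelope sketch; the attacker capacity here is invented and deliberately generous):

    # Exhausting a 128-bit keyspace, assuming a billion machines
    # each testing a trillion keys per second (far beyond anything real).
    keyspace = 2 ** 128
    keys_per_second = 10 ** 9 * 10 ** 12
    seconds_per_year = 60 * 60 * 24 * 365
    years = keyspace / (keys_per_second * seconds_per_year)
    print(f"{years:.1e} years")  # ~1e10 years, on the order of the age of the universe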
Yeah, SHA3 isn't directly for password hashing. You should use a memory-hard PBKDF (password-based key derivation function) like Argon2, bcrypt, or scrypt. These functions are all constructions that run the hash many times, so the underlying hash speed is irrelevant.
For high-entropy inputs, like an HMAC key, you want the hash to be fast, because it's practically impossible to brute-force the 256-bit input key, and you often apply the hash to large inputs.
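To make the distinction concrete, a minimal sketch (Python stdlib only; the scrypt parameters are purely illustrative, not a recommendation):

    import hashlib, hmac, os

    # Low-entropy secret (a password): use a deliberately expensive, salted KDF.
    salt = os.urandom(16)
    password_tag = hashlib.scrypt(b"correct horse battery staple",
                                  salt=salt, n=2**14, r=8, p=1)

    # High-entropy secret (a random 256-bit key): a fast hash inside HMAC is fine,
    # because 2^256 keys cannot be brute-forced regardless of hash speed.
    key = os.urandom(32)
    mac = hmac.new(key, b"large message to authenticate", hashlib.sha256).hexdigest()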
That's true for passwords (where you don't really use raw SHA or something, but rather something like bcrypt, which is intentionally slow), not necessarily for all uses of hashes. E.g. computing a SHA sum of some binary doesn't need to be slow, it just has to be practically impossible to create a collision (which is what's required of every good cryptographic hash function).
Blake2 was not created (December 2012) until after the SHA-3 competition, which ended in October 2012 (Keccak being the winner). It was Blake1 that was entered. Blake3 was released in 2020.
I'm sure a Keccak2/3 could have also been better than the original Keccak1, but that was not available either.
The American people- who are the only ones who matter- want to live in a superpower.
Everything America does is in service of maintaining its position as the hegemonic player. The US intelligence agencies have infiltrated every university and tech company since forever. It's their job.
Sounds like it's time that academia form a crypto equivalent of NIST amongst universities so they can put out transparent versions of new cryptographic algorithms that can be traced back to their birth so that other cryptographers can look for holes, if NIST is unwilling to be open about their processes.
Why aren't other participants in the competition --- most of them didn't win! --- saying the same thing? Why are the only two kinds of people making this argument a contest loser writing inscrutable 50,000 word manifestos and people on message boards who haven't followed any of the work in this field?
The article is a bit weird, so here's my summary of the situation, as someone in the security field:
- Bernstein, an extremely esteemed security researcher[0], published a long blog post last week[1] criticizing NIST's standardization process for new Post-Quantum-Crypto algorithms. He is focusing on the selection of Key Encapsulation Mechanisms (think TLS key exchange). Two big options are Kyber and NTRU (coauthored by Bernstein).
- His main complaint is that NIST is playing fast and loose with the selection process, and had disqualified a fast NTRU variant for barely missing a certain security threshold. The missing variant makes NTRU look slower and less flexible than it actually is.
- Meanwhile, NIST accepted a similar fast Kyber variant based on shaky assumptions. Bernstein argues at length that it doesn't meet the security threshold either and should be disqualified. Funnily, NIST used Bernstein's own research in a (seemingly) incorrect fashion to argue for Kyber's security.
- There's an air of impropriety, as if NIST were favoring one algorithm over the other, for unknown reasons. And at the beginning of the post, Bernstein shows the results of his recent lawsuit to reveal more information about the internal NIST process: it seems that NIST and the NSA met more often than previously thought.
My interpretation leans more towards NIST making an internal mistake in evaluating the algorithms, rather than the NSA pushing its agenda. One could argue that Bernstein is sour that his algorithm might not be picked, and is trying underhanded tactics. On the other hand, he does have an excellent reputation, and he convincingly argues that NIST made an important mistake and is not transparent enough.
Because Dual_EC_DRBG was very heavy handed. It was driven by NSA itself (and based on a paper named "Kleptography"!); the backdoor was obvious; and they had to ~bribe~ monetarily incentivize companies to actually implement and use it.
Meanwhile, both NTRU and Kyber are lattice-based, and their designs came from honest attempts. To be an NSA effort, there would need to exist an exploitable flaw in Kyber, but not NTRU, known only to the NSA. And it's not like NTRU as a whole got disqualified; only the fastest variant did.
That's the problem with spy agencies, you never know what they are capable of. But if it was an NSA effort, it would be, by far, the most subtle one uncovered so far.
There is definitely a selection bias if judging 'subtlety of NSA activities' by only examining 'NSA activities that were unsubtle enough to be discovered'.
There’s no reason to believe that the NSA doesn’t learn and evolve from past efforts.
Changing rules on the fly and improperly applying said rules could be a way to select a weak option you can break while having stronger plausible deniability than what happened with Dual_EC_DRBG (which btw wasn’t actually confirmed until the Snowden leak). So here’s someone claiming NIST is being suspicious in how the algorithm selection happened. The rules really need to be set in stone at the beginning of the competition or before the phases at least. And you can’t pick diametrically opposed rule sets between phases (as happened if you read Bernstein’s letter), only tweaks.
On the other hand, DES is an example of where people were sure that NSA persuaded IBM to weaken it but, to quote Bruce Schneier, "It took the academic community two decades to figure out that the NSA 'tweaks' actually improved the security of DES". <https://www.cnet.com/news/privacy/saluting-the-data-encrypti...>
NSA did persuade them to weaken DES by shortening the key size. The "magic S-boxes" were chosen to be resistant to differential cryptanalysis (which was successfully kept secret for decades to come) but that doesn't change the fact NSA had the means to break DES by brute force.
If “an academic is sour they didn’t receive credit” and “an academic wants to help the world” are both on the menu, you should always linger over the first possibility. Hell, I’m an academic and you should consider it in regards to this post.
I've been in rooms watching cryptographers trying to figure out what exactly it is Bernstein was saying with that blog post for the past week, and I do not believe that Matthew Sparkes at The New Scientist understands it any better than they do. Since Sparkes doesn't have any direct reporting from Bernstein, and nobody here cares about the NIST quotes, the right thing to do here is to treat this story as a dupe.
Is it? Can you summarize it? I'm asking seriously.
This is not his style, for what it's worth, at least not for standalone long-form writing. His most influential cryptography writing is concise and lucid.
In between a bunch of conspiratorial hinting, djb argues that KYBER-512 is weaker than NIST claims.
To make that argument, he points out a fairly egregious math mistake (the whole "2^40+2^40" bit) and then shows that NIST was inconsistent in applying the rules of the contest it refereed.
He also offers an explanation for why NIST would be so inconsistent about it, namely that they were influenced to pick KYBER, even if it wasn't the best candidate.
--
My personal takeaway was that he is being a sore loser, but also that KYBER-512 is weaker than it should be and weaker than it is claimed to be, and that for some reason NIST still wanted it to win.
Makes me skeptical about KYBER-512 (but not larger sizes) and reinforces my worry that NIST can be influenced to pick less-than-optimal algorithms.
But then, I'm not a cryptographer and in the lucky situation where for any application I encounter, I can just go for KYBER-768 or 1024 or NTRU and just be fine - I don't have to understand this situation perfectly.
Hope you get some value from this outside perspective.
I’ve not followed the PQC competition very closely, but I don’t think djb’s arguments significantly impact whether you should use KYBER-512. From my reading, as someone with a decent amount of crypto knowledge, all the evidence suggests that it is more than secure enough. The rest of the stuff is at the level of “submit an erratum”, not “omg cancel the whole thing”.
If anything, this reinforces my belief that KYBER is a good design. If this is the best he can come up with to try and discredit it, then it must be pretty solid.
The last part I agree with - clearly KYBER isn't trivially broken if this is the best he can come up with.
What doesn't seem clear to me, and I'd appreciate if you could tell me why you think differently, is that KYBER-512 isn't as strong as it was targeted to be. I find djb's argument on this narrow point fairly convincing: KYBER-512 isn't as secure as AES-128 (by the methods used to measure "secure" in this competition).
Given that I already generally use AES-256, why shouldn't I treat this the same way as AES-128?
That is, "it's probably fine-ish, but if you have the power, just go one bigger".
It’s possible that in the specific sense that NIST defined, KYBER-512 isn’t as strong as AES-128. However, that doesn’t mean that it’s less secure in general. E.g. DJB himself wrote a good article[1] on how even though 128-bit AES and 256-bit elliptic curve crypto are thought of as same “security level”, actually there are attacks against AES that just don’t apply to ECC when you consider multi-target security models (i.e., when you consider a population of users not just one). I wouldn’t be surprised if similar things applied to lattice-based crypto, but I don’t know enough about it. And even if we take the reduced security level given by DJB, it still seems big enough to be out of reach to any realistic attack.
But by all means feel free to go one bigger and pick KYBER-768, and I believe lots of people do recommend this. Obviously, there is a performance penalty (as there is when moving from AES-128 to 256), and for PQ schemes there is also, more importantly, a big increase in the size of bytes on the wire when public keys have to be exchanged (e.g. in TLS) - in this case a jump from 800 bytes to 1,184 bytes (a 48% increase). (Compare this to ECC public keys, which are typically around 32-65 bytes, depending on encoding.)
First off, thanks for the reply. It has since been pointed out to me elsewhere that there are now responses showing his central claim of a maths error to be false, which means all of this is now moot - KYBER is as secure as claimed.
It has also been pointed out to me that djb has been quietly ignoring another metric in which KYBER beats NTRU: implementation complexity.
Even accepting all other arguments about the tradeoffs between NTRU and KYBER (and I do take your point about size of keys being more important than CPU cycles), even then, KYBER is judged to have lower implementation complexity.
Having read about all the crypto libraries who produced broken output because they made a mistake in the implementation, that's something I immediately understand as a big benefit.
Again, thanks for the conversation and helping me understand!
There's a valid point to be made about selecting key exchange parameters to match bulk encryption parameters, but before you gear up to make a stink about it, bear in mind that it's generally the case in modern cryptosystems (that aren't specifically designed to do that matching) that key exchange security levels are lower than those of block ciphers. The step functions for key exchange security levels are pretty abrupt, and you pay a pretty high price to select the next one up, so aiming for "roughly the vicinity" of 128 bits is pretty normal.
The gist as I understand it is: NIST tilted the playing field repeatedly so that their favorite would win, and their favorite is not the best new candidate and not even better than existing systems.
Re: style, this seems longer and more rambling than usual, but other stuff on his blog has been long, and his style with lots of background, asides, references, and self-quotes seems pretty distinctive, doesn't it?
But I'm sure you paid more attention to this than me.
Not exactly sure if explicitly declared output of the Kagi Universal Summarizer is allowed (will delete again if not, but I did not see a guideline for it), but I think this could be a start sparking further curiosity. (I don't know how accurate the output is, as I am not a domain expert in PQC or cryptography in general, for that matter)
Kagi Universal Summarizer output for "Summary":
This web page discusses the selection of the Kyber and NTRU cryptosystems as the quantum-resistant digital signature algorithms by the National Institute of Standards and Technology (NIST). It analyzes NIST's claims about the security levels of Kyber-512 compared to AES-128. While NIST argued Kyber-512's security level is boosted enough by memory access costs to meet the AES-128 threshold, the text raises uncertainties around accurately modeling such costs and argues NTRU may have advantages in flexibility and performance. Overall, the page questions whether NIST fully justified selecting Kyber-512 over NTRU given the uncertainties in quantifying the security of lattice-based cryptosystems against future attacks.
Kagi Universal Summarizer output for "Key moments":
- There is debate around whether Kyber-512 provides adequate security compared to the AES-128 benchmark. NIST claims it meets this level factoring in memory access costs, but others argue the analysis is uncertain.
- NIST's analysis added 40 bits of estimated security to Kyber-512's post-quantum security level due to memory costs, bringing it above the AES-128 threshold. Critics question this calculation.
- NTRU provides greater flexibility than Kyber in supporting a wider range of security levels. At some levels it also has better performance and security than Kyber options.
- The security of lattice-based cryptosystems like Kyber and NTRU is not fully understood, and there is a risk of better attacks being discovered in the future.
- Standardizing a system like Kyber-512 that may have limited security margin could be reckless given lattice cryptanalysis uncertainties.
- Critics argue NIST has not clearly explained its security evaluations and claims about Kyber-512's margin above AES-128.
- Memory access costs are important to lattice security but are not fully quantified in their impact on Kyber versus classical attacks on AES.
- Removing Kyber-512 could make NTRU the strongest candidate given its flexibility at multiple security levels.
- One paper argued multi-ciphertext attacks on Kyber may be as difficult as single-ciphertext attacks.
- There are calls for NIST to be transparent about its analysis and decision making regarding Kyber-512.
I don’t think this contributes to the conversation. There is clearly social context to this situation and copy pasting a machine-generated summary is no more helpful than reading the article at face value.
My thinking was that someone with domain expertise could identify whether the summary and key takeaways make sense and, furthermore, whether the accusations have merit.
Anyways, it seems I cannot delete the comment, so it would be great if a moderator or something could do it, thanks.
Whenever the topic of DJB vs NIST comes up, there are always people saying "this may look petty, but he has a spotless track record, so we have to trust him".
I want to push back on this a little by linking this Twitter thread:
I did not dig through all the links in that twitter thread, but the first few tweets are pretty misleading.
The tweets say DJB implied that scientists who submitted algorithms were bribed by the NSA. That's a complete misunderstanding of what DJB wrote: he argued that the NSA wouldn't need to bribe those scientists, because they hired the top experts in the field years ago, so it might be the case that they're so far ahead of what's being submitted that all they have to do is push NIST to pick an algorithm they know how to break.
Now, I have no knowledge of any of this, so I have no idea if DJB's argument is insanely paranoid like the author of the thread implies (with the GIF in the 3rd tweet). All I can see is that the author's claim is a gross mischaracterization of what DJB wrote.
It isn’t paranoia when the NSA has been caught with their hand in the cookie jar before. There is no obligation to give known liars the benefit of the doubt.
Speaking as a cryptographer, a healthy amount of paranoia is good and necessary. DJB's level of paranoia re: Kyber and lattices is bordering on delusion.
Cryptography: <reference> is incorrect because <logic> and should therefore be <statement>. This allows a ciphertext produced by X to be breakable in 2²¹ operations when 2¹¹ messages are known to the attacker.
Paranoia: NSA bribes the independent reviewers and is backdooring the whole thing because everyone knows the military and intelligence services are two decades ahead of public research and we just don't know what the algorithm flaw is yet, but I'm telling you, they're keeping something behind!
I am not saying djb (or anyone) does the latter; the example is exaggerated for illustrative purposes only. The cryptographer's example is also exaggerated, as it does matter how algorithms are chosen and there's a measure of subjectivity involved. Still, I would not say that paranoia is the job of a cryptographer.
There is so much to unpack in that thread and its references. A lot of he said she said.
For example, one reference being used as evidence that djb is evil complains about being insulted that their employer (also djb's employer, presumed to be on djb's side) suggested seeing a company doctor after being on sick leave for a while. This is 100% standard practice in the Netherlands and the doctor is independent, not from the company themselves, and keeps things confidential. It's how we resolve the conflict where you can't just claim you're sick for unspecified reasons indefinitely and continue to expect money, but the employer isn't entitled to know your medical dossier either. This lets you have medical confidentiality and long-term sick leave where the employer can trust that appropriate action is being taken because they trust the impartial doctor to verify that. This is brought up as part of the conflict between djb, the author, and the university they work for. This isn't the only thing they allude to not knowing about while abuse was alleged to be allowed to happen by djb and others. I believe most of what is written, but at the same time, the problem is clearly being exacerbated by not using coworkers, friends, or even google/ddg to find out what legal system you've moved into. Djb even suggested they should take legal action, and HR offered arbitration, but the person declined both. So now the evidence amounts to their word on a blog and the alleged perpetrators faced zero consequences.
As much as such references serve to convince me of djb=evil, they also convince me there may be more to the story than one side.
Obviously I've just highlighted one thing here; there's a lot more he-said-she-said going on elsewhere in the threads that could give one pause in believing one side verbatim, even if they're likely right in spirit.
I do believe he has increasingly argued in bad faith and alienated his peers to the point that they're (we're) unwilling to engage with him, which from the outside can look like his points are irrefutable.
Hm. Yeah, I really need to adjust my view on this - I found NIST's responses dodgy precisely because they seemed so unwilling to engage, and I still thought of him as respected enough to warrant better responses.
If he's turned so crank-y that his peers simply no longer engage with him beyond the strictly necessary, then this all looks a bit different.
I'd still love to see some of his specific criticisms addressed, but that becomes a minor point...
It actually doesn't, because NIST needs to speak to and be trusted by more than just a tiny clique. Djb crypto is widely known and used which means they have to deal with him whether they like it or not.
It does for me, because his criticism (since shown to be wrong) never included the claim that KYBER was broken, just that it wasn't perfect and that he was unfairly treated.
Cranks and assholes occasionally are unfairly treated, but generally are fairly ignored - the effort of dealing with their claims isn't worth it.
As an outsider, "respected cryptographer makes a narrow technical claim and is brushed off by NIST" and "sore loser that no-one talks to complains that people aren't entertaining his latest complaint about the ref" are very different situations, from which I will take very different actions.
You're making decisions based on trivial social factors instead of the only salient point, which is that NIST had proven itself to be corrupted in the past when it comes to setting cryptographic standards.
I'm a bit worried that we'll adopt an algorithm widely, only to be told “gotcha! here's how to break it”. After all, it would be a great way to prove once and for all that an algorithm selection process can be subverted.
> which from the outside can look like his points are irrefutable.
Just to be sure, this is not how I see it at all. I'm relatively convinced that the thing isn't backdoored rather than buying into this particular conspiracy theory.
But yeah, in general that is a risk when one doesn't engage despite disagreeing.
Strong agree. I've heard Bernstein described before now as having "all the subtlety of The Incredible Hulk". Quite possibly there are some things he can get away with only because he's a brilliant cryptographer.
Designing curve25519 was, in terms of practical impact, an achievement I'd put in the same category as inventing RSA or Diffie-Hellman. Not because the ideas were new, but because they came together in a way that produces something that "just works" in practice, and you don't have to worry about invalid curve points and twist attacks and accidentally using the addition formula for a point doubling and many other things. The idea that instead of a framework where you can plug in your own parameter choices and some of them might be secure, you can just build a crypto library that does one thing well, was certainly new enough that no-one else seemed to be doing it at the time. The fact that when I need a key for real, most of the time I do `ssh-keygen -t ed25519` or the equivalent in other systems speaks for itself.
As does the fact that github has deprecated the ssh-dss key type and recommends ed25519 as the default: in the contest between Ed25519 and DSA/ECDSA for digital signatures, Bernstein wins hands down and NIST has egg on their face. Although I have no proof of malice, I haven't yet heard a rational explanation for just how badly ECDSA mangles the Schnorr protocol in exactly the way that means a lot of implementations end up with horrible security holes.
And then there's the Snowden leaks and DUAL_EC. "The NSA has interfered with crypto standards in the past, reliable leaks show it was part of their mission statement, and they could be doing so again." is to me a statement backed up by plausible evidence that's very far from the usual conspiracy theories. This is not faked-moon-landings territory.
And I should also say, there are a lot of ways of being evil that to my knowledge no-one has ever accused Bernstein of: as far as I know, he's never been accused of raping or sexually assaulting anyone, nor has he said anything particularly racist or pushed any far-right ideology. He has been accused of insulting and occasionally threatening people who disagree with him on technical matters, but he's not what we usually mean by "bad/evil person, avoid if possible".
I'd say he has a fairly spotless track record in cryptographic protocol design, and a fairly stained one in interacting with other humans. When he's pushing back against design decisions that actually are stupid/evil, that's an asset; in lots of other cases it's not.
> I haven't yet heard a rational explanation for just how badly ECDSA mangles the Schnorr protocol in exactly the way that means a lot of implementations end up with horrible security holes.
I thought schnorr was under patent for a bit, so an open alternative was needed? Also ECDSA does allow for recovery of the public key from the signature, which can be useful.
Regrettably, while we are discussing the author's motives, we may inadvertently overlook the miscalculation by NIST. The crux of NIST's primary error lies in improperly multiplying two costs when they should have been added.
If this assertion holds true, it would be prudent to at least revisit and revise their draft!
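For context, the gap between adding and multiplying costs is enormous on a log scale (the exponents below are purely illustrative, not the actual Kyber figures):

    $2^a \cdot 2^b = 2^{a+b}$, whereas $2^a + 2^b \le 2^{\max(a,b)+1}$;
    e.g. $2^{40} \cdot 2^{40} = 2^{80}$, but $2^{40} + 2^{40} = 2^{41}$.

So reading a "plus" as a "times" can inflate a claimed security level by tens of bits.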
Ironically, the style and substance of DJB's engagement with his peers and with NIST is likely to sour both against his claims[0], credible though they (might) be. DJB's impression of NIST "stonewalling" could very well be their reluctance to engage with an adversarial and increasingly deranged private citizen.
> “We disagree with his analysis,” says Dustin Moody at NIST. “It’s a question for which there isn’t scientific certainty and intelligent people can have different views. We respect Dan’s opinion, but don’t agree with what he says.”
That's great for a PopSci article, but I (and many others, I'm sure) would like to see the details of this analysis hashed out. DJB had his chance at making this happen, and blew it. However, that doesn't mean his questions[0] should go unanswered.
[0]: specifically talking about the calculation of the Kyber-512 security level here. Not his more conspiratorial claims.
> It’s a question for which there isn’t scientific certainty and intelligent people can have different views.
WTF is that? No, it's not a question where intelligent people can have different views. DJB is literally claiming the NSA is claiming something similar to "3 + 3 = 9". That claim is either correct or not.
There's a paywall in front of the article that I have no intention of dealing with after seeing what's in the comments. But a phrase like this (and I could find it before the paywall) is absolutely dishonest.
For a subject as mathematical as encryption, the debate is surprisingly... subjective. Are we seriously at the mercy of non-open-source geniuses for this? Was RSA this unprovable, objectively, in its security?
You don't need to weaken cryptography to work towards that end with passkeys.
For key pairs that can be transferred, it's no different than the threat of password managers stealing your keys to the castle. You just have to trust the cloud and your entire hardware and software stack your password and passkey manager run on, and/or that nothing in that stack gets swapped out without you noticing.
Similarly, I've yet to see a hardware security key that you can verify doesn't have backdoors compiled or soldered in. All the cryptography in the world doesn't matter if a government/whoever can just pop your Yubikey in a machine that does whatever magic incantation is needed to dump your keys. That magic incantation can even be secured with strong cryptography to ensure only the US government/whoever can use it.
And of course if you're truly paranoid you can get a FPGA and implement a hardware security key on that. The overall security posture would likely be weaker, but you could be confident, hopefully, that nobody has put some kind of backdoor into the hardware you designed yourself to run atop a generic array of logic gates.
You'd have to choose your FPGA judiciously. The truly paranoid see the attack surface area of 100 GB of black box libraries needed to synthesize designs and appreciate the possibility of gateware trojans.
While I think choosing an open source stack is important, can you imagine how difficult it would be to inject a remote exploit into ARBITRARY hardware your code didn't know in advance?
Nice, last time I looked the Solo Hacker Edition was completely out of stock.
Looks like the Solo HE lets you load your own firmware on to it, but it doesn't let you load your own signing or encryption key to ensure firmware updates are trusted. Apparently the Solo HE can be flashed once permanently by overwriting the bootloader, though.
The non-HE versions of the Solo 1 and 2 will load new firmware signed by Solokey.
Earlier this year I looked into this and remember finding out that it was either the Solo 1/2, Somu or one of the Nitrokey products that, while shipping with a secure element, didn't actually use the secure element.
Not even that, I just assume that in a few years, anyone who has a few thousand dollars to hand to Cellebrite will have access to devices that can crack security keys, just like they can give Cellebrite some money today for devices that can hack phones, tablets and computers nearly instantly.
The applet source code I linked above can be configured to use your PIN (not stored on the device) as the keying material to AES256-encrypt all the credentials stored on the trusted element. The PIN may be up to 63 bytes long, and the derivative used for keying is 128 bits.
If you think some company in the future will have the ability to somehow "steal" the contents of the device's flash, you'd still have to climb the mountain of explaining how they could then break the encryption the open-source software already - before they got hold of the key - applied to the flash contents.
Just to make sure this is clear, the security key at rest is not storing your credentials. It is storing AES256(key=<PIN>, value=<credential>). It is not storing the PIN.
You only need to trust the hardware to implement encryption correctly, which you can - of course - verify yourself. It's not realistic to say that the pre-encrypted values might be secretly stored somewhere else: there just isn't enough space on the device to do that.
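The general shape of that design, for anyone who wants it spelled out (a minimal Python sketch, not the applet's actual code; the KDF, iteration count and key length are illustrative assumptions, and it uses the pyca/cryptography package for AES-GCM):

    import hashlib, os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def seal_credential(pin: str, credential: bytes) -> dict:
        # Derive an AES key from the PIN; the PIN itself is never written anywhere.
        salt = os.urandom(16)
        key = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000, dklen=32)
        nonce = os.urandom(12)
        ciphertext = AESGCM(key).encrypt(nonce, credential, None)
        # Only salt, nonce and ciphertext end up in flash.
        return {"salt": salt, "nonce": nonce, "ct": ciphertext}

    def open_credential(pin: str, blob: dict) -> bytes:
        key = hashlib.pbkdf2_hmac("sha256", pin.encode(), blob["salt"], 100_000, dklen=32)
        return AESGCM(key).decrypt(blob["nonce"], blob["ct"], None)

Dumping the flash gets you only the sealed blobs; without the PIN there is nothing on the device to decrypt them with.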
No, passkeys are based on whatever the best crypto is -- for example, in a world where quantum computers are practical, they'll be using a quantum-safe algorithm.
Spy agencies, law enforcement, and criminals would all much prefer people use easily guessable and/or unsafely stored passwords. Those are easier to discover, easier to brute force, and easier to intercept (specifically, all you need to do is intercept a password once, as it's definitionally not based on a one-time exchange).
Or to grab your phone with a 4 digit pin and unlock every device you've secured with a passkey. At least previously your bad password was probably longer than a 4 digit pin.
The open-source community will continue adopting "next-gen" "encryption" even though it has back doors, just like they didn't question elliptic curve encryption even after the NSA got caught putting out a compromised algorithm.
> The open-source community will continue adopting "next-gen" "encryption" even though it has back doors, just like they didn't question elliptic curve encryption even after the NSA got caught putting out a compromised algorithm.
The NSA put out a known compromised elliptic curve encryption algorithm? Or are you referring to Dual_EC_DRBG, a probably compromised random number generator?
We should have 4 standards bodies in 4 non-allied nations each define an encryption standard and apply all 4 on top of each other in a bolstering fashion for all encrypted traffic
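A layered ("cascade") construction along those lines is easy enough to sketch; with independently generated keys, confidentiality is at least that of the strongest layer. A minimal illustration using two AEADs from the pyca/cryptography package rather than four national standards:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

    def cascade_encrypt(msg: bytes):
        k1, k2 = os.urandom(32), os.urandom(32)    # independent keys, one per layer
        n1, n2 = os.urandom(12), os.urandom(12)
        inner = ChaCha20Poly1305(k1).encrypt(n1, msg, None)   # layer 1
        outer = AESGCM(k2).encrypt(n2, inner, None)           # layer 2
        return (k1, k2, n1, n2, outer)

    def cascade_decrypt(k1, k2, n1, n2, outer) -> bytes:
        inner = AESGCM(k2).decrypt(n2, outer, None)
        return ChaCha20Poly1305(k1).decrypt(n1, inner, None)

The price is every layer's overhead on every byte, plus key management for four sets of keys.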
I’m not qualified enough to say who’s right or wrong, but I’ve noticed a fair amount of comments invalidating the claims because of the author’s motives, i.e. he’s a sore loser and has a big ego. I don’t see why that invalidates his claim that the algorithm is weaker than claimed. “Jerks” can be right too.
People are people and they come and go, but these NIST standards stay a long time.
Such a sad time for science. We really seem to be entering the new dark ages. There is no room for curiosity or intellect anymore. The institutions cannot be trusted and people rightly do not trust them.
The dark ages are called dark, partly because little history was recorded. That period saw more technological innovation than the well documented imperial period before it which was technologically stagnant in comparison. We live in the best documented period in history, but our technology could stagnate independently of that, or even due to it.
Because your options are Beijing or DC. We like to pretend the age of empires is past, but realistically we still live in a time where there are two major world powers and everyone gets to decide which side of the line they want to fall on: accept American hegemony or submit to Chinese control.
However, with cryptography, historically the best crypto has been invented in the US, and it made much more sense for allies to just use ready-made solutions than to roll their own. Do countries on the US crypto-export ban lists have their own incompatible crypto? I don't know, they might, but they for sure don't share it as freely available standards.
The Enigma machine was probably the weakest of its kind of machines. It had a critical security flaw in that it could never encipher a letter to itself, and most of its security came from the IV settings, which were communicated in such poor fashion that the Poles cracked it long before WW2 even started.
By contrast, the US and British rotor machines were never cracked by the Axis powers in WW2, and the other two German rotor ciphers were less thoroughly cracked by the Allies.
According to Wikipedia, rotor cyphers were probably invented in the Netherlands, not the US. It says an American tried to commercialize the idea, but went bust. The only patents they mention were issued to Europeans.
They don't seem to mention these spiffy US and UK rotor cyphers; any pointers?
There is plenty of non-backdoored cryptography to use. It’s just the US forces various entities to use their backdoored crypto in order to get grants and government contracts and regulatory approvals.
For example, Satoshi Nakamoto chose a very interesting, lightly used elliptic curve secp256k1 for Bitcoin because for various reasons he was very confident it wasn’t backdoored, and obviously it has stood the test of time https://en.bitcoin.it/wiki/Secp256k1
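For what it's worth, the curve is readily usable outside Bitcoin as well; a minimal sketch with the pyca/cryptography package (this is plain ECDSA over secp256k1, not Bitcoin's exact signing scheme):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    private_key = ec.generate_private_key(ec.SECP256K1())
    signature = private_key.sign(b"message", ec.ECDSA(hashes.SHA256()))

    # verify() raises InvalidSignature if the signature is bad; returns None otherwise.
    private_key.public_key().verify(signature, b"message", ec.ECDSA(hashes.SHA256()))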
True. But it's worth noting that, given the requirements of creating large, functioning systems of administration, it may well be that all governing systems are trying to solve the same types of problems.
And it's worth remembering that, even in the US there exist examples of people who pushed for real change to these systems, and ended up detained, dead or simply disappeared.
It might also be worthwhile to consider the following well known allegations: revolving door appointments between bureaucracy and companies that it interfaces with and regulates; compromise of politicians by various interest groups using bribery or blackmail; the effects of corporate lobbying and donations to distort the legislative and electoral processes; the powers of unelected career bureaucrats whose tenures often outlast elected officials.
It's important to recall that the US is not alone in allegedly suffering from these maladies, but that it might suffer from them ought to be a cause for concern.
Such considerations seem to raise the question of whether the president (and associated democratic and elected apparatuses) are in some way figureheads designed to provide a focus and an outlet, but may also function to protect the overall system against substantial reforms.
Such a description may help explain why processes in democracies often stagnate.
This is not to say that the Chinese system is "better" in any sense (our first consideration indicates that perhaps all large governing systems must essentially be the same), but it might be more honest about what it is.
This gives rise to the sense that democracy, at least in part, may be a deception that usefully provides people the illusion of a voice for change, while at the same time protecting the governing system by ensuring people do not seek more disruptive methods to alter it.
Of course, the flipside is, without such less disruptive avenues, people are then without peaceful recourse. This means that in the Chinese system, the government knows it faces the threat of revolution, which, while it could be thought to function as a kind of guidance of, or check-and-balance on, its power, is also responded to by the system itself becoming more draconian and authoritarian, in order to protect itself, if that makes sense?
So in the US, the system protects itself with what may be described as a generally more peaceful and free society, with what may be a somewhat deceptive democracy (of course more authoritarian exceptions are occasionally made to neutralize true rabble-rousers), whereas in China, the system protects itself more overtly, with a generally more authoritarian and restrictive society, which is more open about what it is.
The point of this comparison is to foster more mutual understanding and less false-differences that fuel unnecessary divisions and distort discussions with fallacies, in an attempt to try to guide discussions along more productive directions, based on realities.
>This gives rise to the sense that democracy, at least in part, may be a deception that usefully provides people the illusion of a voice for change, while at the same time protecting the governing system by ensuring people do not seek more disruptive methods to alter it.
As Churchill said, "democracy is the worst form of government, except all others". Are there elements in democracies that entrench the status quo? Of course, especially in countries like the UK and the US with single-member-district, first-past-the-post systems that favor huge parties that never change. Here we have a pretty complex party-list-based system that is often critiqued as "too difficult to understand for the average voter", but when people want to change who rules the country they have more than one viable choice of opposition. You may say, nah, it is not about the system of voting, duopoly is a feature of "mature" democracies that have been around for 400 years. To which I'll give the example of Poland, where the lower chamber of parliament (Sejm) is elected proportionally from party lists and always contains a mix of many parties, while the upper chamber (Senat) uses the classic first-past-the-post system, and in the Senate the two major parties (the biggest party and the current biggest opposition party) always get 95%+ of the seats.
I'm not an electoral mechanics geek but I appreciate the details, even if much of it flies over my head. Do you perchance have any resources where I could better understand such things?
Also, I think your points are very interesting but may benefit from some paragraphing for added clarity and coherence.
Another point of note is that, while interesting, it may be the case that no matter how much a government optimizes its electoral systems, it may still be subject to the other vulnerabilities listed above, which sadly might subsume any gains from such optimization.
Half seriously, what's the difference between your career ruined via social credit being wiped by government or your career ruined via social credit being wiped on Twitter?
> or your career ruined via social credit being wiped on Twitter?
Whose career got ruined actually? Johnny Depp is still making movies, Rammstein just announced a 2024 tour, JKR still is making millions upon millions every year with Harry Potter, Trump is likely to be the Republican candidate in 2024. "Cancel culture" isn't real.
Just because you are not on Twitter (or Facebook, or whatever) won't stop other people from tweeting lies about you. And the public image that you have on social media matters a lot in this society. See also: "Cancel culture".
What kind of mental contortions do you have to go through to think like this? It comes off like gung-ho ignorance at best, and reeks of some of the most obvious american military propaganda I've ever read.
NIST refereed a competition among the best-regarded academic cryptographers in the world. It didn't design any of these constructions, and practically all of the inputs into the competition, including the critiques of the submissions, came from academics (many of them not American).
One of the annoying things about how Bernstein is communicating about this is that he is counting on his audience not knowing this.
This is all nice and well, but it doesn't actually address the inherent untrustworthiness of a state-associated (and known to be penetrated) organization.
The entire point of the competition is for the state to select a state standard, so "state-associated" is not the smoking gun people suggest it is. If you don't care about state standards, hey, I'm right there with you. But: they obviously exist.
The EU, UK and Australia are all bad for this in various ways, having key-disclosure laws or trying to ban e2e or whatever else. I don’t know about you but I don’t consider China or Russia to be valid places to look for un-backdoored crypto either.
It seems (to this non-American) like one of the least-worst options. Maybe we could trust a Scandinavian country or Switzerland?
(Yes, I have missed out huge swathes of the world, which I mostly know little about…)
Yeah, as someone who grew up in one Scandinavian country and currently lives in another, I wouldn't trust a Scandinavian country not to roll over and be a US lapdog when push comes to shove.
I don't know, but I'd start with getting buy-in from any interested country, with an emphasis on not too much from any one. Where it's physically located is irrelevant; the issue is the clear commitment to the interests of the American state.
NIST promotes open competitions with many non-US based participants, e.g. AES was designed by Belgian researchers.
This is somewhat better than not having such a system at all.
If nothing else, you get to benefit from public analysis and pick another of the algorithms that get proposed in the competition, even if it's not the NIST-blessed one.
The issue is not AES of course; it's NIST being associated with a government that can't be trusted when it comes to developing reliable and trustworthy encryption schemes. They've clearly demonstrated this time and time again.
That's not fair. NIST has a documented history of working with the NSA, intentionally hiding that interaction, and the outcome was a major security problem. So it seems DJB believes that NIST has not provided sufficient documentation, and given their history it's reasonable to take a position that if they don't do that, the outcome cannot be trusted.
So the issue is that once NIST did that, they irrevocably destroyed any basis to say "you can trust us". Hence they cannot have anything that they cannot document the source for, e.g. any non-trivial numbers have to have a documented generation path, and any non-trivial numbers in those steps also have to have a documented generation path.
Let's assume for now NIST is being 100% honest, and there's no attempted subterfuge. How can we distinguish that from them intentionally hiding subterfuge? Even in this hypothetical where we're assuming no corruption, we have no way to know unless we have everything.
It's not reasonable for anyone at this point to have an algorithm that has any unexplained numbers, but for an organization that is again documented as having sabotaged standards it's absurd.
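As an aside on what a "documented generation path" could look like in practice, here is a minimal sketch (my own illustration, not NIST's actual procedure): derive every non-trivial constant deterministically from a published seed, so anyone can re-run the derivation and confirm nothing was chosen by hand.

```python
# A minimal sketch (illustrative only, not any standard's real procedure) of a
# "documented generation path": expand a public, pre-committed seed string with
# SHA-256 in counter mode, so the constants are reproducible by anyone.

import hashlib

SEED = b"example-standard-v1 round constants"  # hypothetical seed, published in the spec

def derive_constants(seed: bytes, count: int, bits: int = 32):
    """Derive `count` constants of `bits` bits each from the seed."""
    constants = []
    counter = 0
    while len(constants) < count:
        digest = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        constants.append(int.from_bytes(digest[: bits // 8], "big"))
        counter += 1
    return constants

# Anyone can reproduce these numbers from the published seed alone.
print([hex(c) for c in derive_constants(SEED, 4)])
```

This is the spirit of "nothing-up-my-sleeve" numbers: the constants themselves are unremarkable; what matters is that their origin is fully reproducible.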
This is a comment that only makes sense if you believe NIST designed CRYSTALS-Kyber, or had a significant hand in its design. But nothing of the sort happened. The CRYSTALS team is overwhelmingly academic and overwhelmingly European. It's frustrating that Bernstein has communicated about this without making that clear, because it's obvious that lots of people believe NIST went off in a room and came up with a scheme, and your first responsibility in discussing this honestly is to dispel that misconception.
I don't think the problem is that Kyber was designed to be weak. The fear is that the NSA/NIST saw an algorithm that was weaker than it should be and worked nice and hard to make sure it became the standard. The worry isn't a back door; it's unintentional mistakes that are being capitalized on.
Applying this logic, there is literally nothing NIST could have done here other than not run the competition in the first place; if it's not enough that almost every participant in the competition agrees that it was well conducted --- if the consensus of the whole academic field of post-quantum cryptography doesn't count for anything --- then all you're really saying is that there's no way to create a trustworthy standard.
And, to that, I say: hell yes. Cryptographic standards are, in my view, a force for evil. Nobody uses Blake2 because it's "standardized"; they use it because there's a rough consensus of credible, credentialed cryptographers that it's strong, and it's obviously fast. We can do Internet cryptography without entities like NIST and the IETF.
This is something Bernstein believes as well; he's said it before (I have my view on standards in part because of his own). But he's not being honest about that here: had his submission won, he'd have endorsed the competition, even though the exact same logic holds --- he should, by this logic, trust his own submission less if it wins.
But I also want to be especially candid here: this "NIST is acting as the catspaw of NSA by selecting the weakest standard" argument is also the standard rhetorical fallback for arguers who have just this instant learned that NIST didn't itself design CRYSTALS-Kyber, but don't want to surrender their credibility by admitting it.
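To make the Blake2 point a couple of paragraphs up concrete, here is a small sketch (assuming Python 3.6+, my choice of language): BLAKE2 ships in the standard library and gets used purely on merit, with no standards competition behind it, right alongside the NIST-standardized families.

```python
import hashlib

# BLAKE2b is in Python's standard library on the strength of community
# consensus and performance; no NIST competition blessed it.
print(hashlib.blake2b(b"hello world", digest_size=32).hexdigest())

# The NIST-standardized alternative sits right next to it; adoption, not
# standardization, decides which one people actually reach for.
print(hashlib.sha3_256(b"hello world").hexdigest())
```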
At least for myself, I disagree that NIST could only win by not running the competition.
The impression I have right now is that they made some mistakes and in response to having those pointed out went "well, that's just like, your opinion, man" instead of explaining why it's not a mistake, or fixing it, or something.
> if the consensus of the whole academic field of post-quantum cryptography doesn't count for anything
This is precisely not what I'm saying (and isn't what's happening here). What I'm saying is that given the evidence presented about NIST repeatedly changing evaluation methods, incorrectly calculating the strength of the Kyber, and refusing to clarify any of the above, it really looks like NIST had an outcome that they wanted to reach and took the actions necessary to reach that outcome. It's also really weird to say "it's not enough that almost every participant in the competition agrees that it was well conducted" in response to the lead designers of one of the two primary algorithms saying that the competition wasn't well conducted. With a competition like this, you do expect some people to think that the decision criteria weren't quite the right ones or something like that, but here there's a very clear accusation that NIST either just lied about the security of Kyber or messed it up and refused to correct it when it was pointed out to them.
It is very likely the NSA has a good understanding of the weaknesses of various algorithms in the face of their technology. After all, that's their job. The comment above also makes sense if the NSA advises NIST and influences its process so that algorithms to the NSA's liking get chosen; no design by NIST needed.
The "major security problem" the previous commenter is referring to is an NSA design, not a competition winner. I think what's happened here is that you've read the preceding comment too generously, in particular by skipping big chunks of it.
I did not skip any parts; the previous commenter is generalizing from this major security problem to a non-trustworthy NIST in general, and one power NIST has is to choose as standards algorithms which are known (to them) to be weak. While this is a discussion about probability and trustworthiness, where you can reasonably take a stand on either side, the argument itself is sound.
Yeah, you did. "any non-trivial numbers have to have a documented generation path, and any non-trivial numbers in those steps also have to have a documented generation path". This doesn't make any sense as a concern if you understand what Kyber is.
Here you're making a claim even Bernstein isn't making; of course, he couldn't, because he is openly advocating for NIST to endorse NTRUprime, which would leave open the question of how much NIST paid him to co-submit NTRUprime.
It doesn't actually matter. NIST should not be involved at all. Jesus, this is basic: if all they're doing is refereeing a competition, there's really no excuse for them to be involved at all.
I think there's probably some motivation in that, but I don't think it captures the whole issue.
As in: I'm sure he probably personally feels emotional about that aspect of it, but the fact that he may have a personal emotional motivation does not make untrue any of the points he may be raising.
But no disparagement to your point, I mean this sincerely: it is good work on that ad hominem if your goal is to dismiss the concerns. Because that "ulterior motivation" ad hominem is a fairly effective way to dismiss it, unless you want to look a little deeper to the real issues. Well done! I mean that, seriously.
From outside it could be hard to tell what's true, so it's worthwhile to consider all the options. He's essentially trying to make allegations about a "black box" process (black in the sense that the NSA is not publishing its calculus and activities on these matters), so who do we trust? It seems it could be unclear right now.
CRYSTALS-Kyber is anything but a black box. It's an academic research project, about which a metric fuckload of rationale, critique, and rebuttal has been published openly.
Sorry, I should have been more clear; I meant the NSA process around these systems is a black box, because they are not publishing their internal calculus and activities around the development of these cryptosystems.
For instance, "showing your work" around the development of SHA0, or showing your internal research and attacks for CK.
This is a question I've asked before, but I'm wondering if there's an updated perspective. Given the human brain is unencryptable, it seems like keeping secrets is going to be impossible.
I guess maybe with the advent of AI, you just have device-to-device communication with no man in the middle in the future?
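For what it's worth, "device-to-device communication with no man in the middle" is roughly what end-to-end encryption already gives us. A minimal sketch, assuming the PyNaCl library (my choice, not something the comment specifies), where only the two endpoints ever hold keys or plaintext:

```python
# A minimal sketch of device-to-device ("end-to-end") encryption using PyNaCl.
# The library choice is an assumption; the point is only that no intermediary
# server ever needs to see keys or plaintext.
from nacl.public import PrivateKey, Box

# Each device generates its own keypair and shares only the public half.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts directly to Bob's public key...
to_bob = Box(alice_sk, bob_sk.public_key)
ciphertext = to_bob.encrypt(b"meet at noon")

# ...and only Bob's device can decrypt it.
from_alice = Box(bob_sk, alice_sk.public_key)
assert from_alice.decrypt(ciphertext) == b"meet at noon"
```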