Linux 5.13 Reverts and Fixes the Problematic University of Minnesota Patches (phoronix.com)
262 points by varbhat on May 21, 2021 | 111 comments


What is really sad is that this could have been a good pen test for the kernel, especially the idea of introducing bugs that only become vulnerabilities when they all come in together.

If only they had contacted the Linux Foundation ahead of time to get permission and set up terms, like a real pen test. Then work could be done on detecting and preventing these sorts of attacks, maybe resulting in a system that could help everyone. At the very least, a database for registered security researchers who are unknown to maintainers, where they could store hashes of bad commits, followed by a check on whether any of those commits made it through. I know the kernel makes heavy use of rebasing, so raw commit hashes might not be the best approach technically, but something like that. To ease the pain of wasting developers' time, sponsors could put up money that the maintainer gets if they catch it, and maybe a smaller amount if they have to revert it.
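
Rough sketch of what that check might look like (purely hypothetical: the registered ID set and revision range are made up, and it compares "git patch-id --stable" output rather than raw commit hashes, since rebasing changes SHAs):

    # hypothetical check: did any pre-registered "bad" patch land in the tree?
    # patch IDs from "git patch-id --stable" survive rebases, unlike commit SHAs
    import subprocess

    def patch_id(commit, repo="."):
        diff = subprocess.run(["git", "-C", repo, "show", commit],
                              capture_output=True, text=True, check=True).stdout
        out = subprocess.run(["git", "-C", repo, "patch-id", "--stable"],
                             input=diff, capture_output=True, text=True).stdout
        return out.split()[0] if out else None

    def find_planted(registered_ids, rev_range, repo="."):
        # registered_ids: set of patch IDs the researchers deposited up front
        commits = subprocess.run(["git", "-C", repo, "rev-list", rev_range],
                                 capture_output=True, text=True, check=True).stdout.split()
        return [c for c in commits if patch_id(c, repo) in registered_ids]

    # e.g. find_planted(ids_from_escrow_db, "v5.11..v5.12")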

EDIT: if the Linux Foundation said no, they could have tried another large open source project with a governing body: Apache, Python, Postgres, Firefox, etc. It wouldn't have been as flashy and high profile, but it would still have been the same research, and odds are at least one project would have been willing to participate.


> that this could be a good pen-test for the kernel

Not really. Everyone knows this flaw exists, the interesting part is how to fix it. Did you read the "suggestions" the researchers made in their paper[1]? They're clueless.

It's like pointing out that buildings can be robbed by breaking their windows. No shit. What do you want to do about it?

[1] https://twitter.com/SarahJamieLewis/status/13848800341465743...


> Not really. Everyone knows this flaw exists, the interesting part is how to fix it. Did you read the "suggestions" the researchers made in their paper[1]? They're clueless.

Fair enough. The kernel maintainers are probably much more aware of this than the average open source project. Maybe for some projects it would change their mindset from knowing that this could be happening, to knowing that this will be happening.

Maybe it's a pipe dream, but I have a feeling it could lead to discussions of "what could we have done to catch this automatically," which in turn would lead to better static analysis tools.

Edit: It would be about as useful as pen testing that includes social engineering. That is to say, everyone knows there are dishonest people, but they may not be aware of some of the techniques they use.


> Maybe it's a pipe dream, but I have a feeling it could lead to discussions of "what could we have done to catch this automatically," which in turn would lead to better static analysis tools.

It did do that, starting at least twenty years ago. Static analysis tooling is a huge, active area of research, and the kernel is frequently a target of that research. Ditto for other areas like language development (see the recent work on getting Rust into the kernel). If these students had tried making real contributions to those areas, I'm sure they would have been welcome. But that kind of work is difficult and requires real research and development, which these students apparently weren't interested in or capable of doing. So we got this trash instead, and now hopefully you understand the harsh reaction to it.


To be clear, I 100% understand the harsh reaction. If anything I think they were lucky no criminal charges were pressed.


Yeah. I'm not trying to come down hard on you or anything, I just feel like a common reaction to this research is, "ethics aside, didn't they point out a real vulnerability?" And I want to make it crystal clear that, no, they didn't. Their research was entirely without value.


If you put ethics aside, then yes there was value. Failed research is most valuable, while successful research is usually quite worthless due to the bias toward finding whatever researchers already believe.

I do wonder if they would have published if they didn't expect disclosure by angry Linux maintainers, i.e. if they really believe they were weakly successful. Generally, I think this is the type of finding that normally gets lost if there's no pre-disclosure.

https://pubmed.ncbi.nlm.nih.gov/16060722/


> If you put ethics aside, then yes there was value. Failed research is most valuable, while successful research is usually quite worthless due to the bias toward finding whatever researchers already believe.

True as far as it goes, but the most valuable failing research is that which fails to produce an expected positive result.

Next most valuable is failing to corroborate a novel hypothesis (which is probably what you meant).

When you get an expected result, you haven't learned much, regardless of whether you were (or should have been) expecting success OR failure (with the exception being getting more, or more accurate, data that narrows error bars).


The ideas in the paper were perhaps novel (I didn't check), but there was far more to learn from looking at reviews where bugs did slip by than from doing their experiment. You could probably calculate the bug acceptance rate by category of bug; generating a few random data points does not help science here.


I think the prevailing belief, and the one they must have held to attempt their research, was that a good percentage of attempts would slip through initial review only to be caught at a later stage. I fail to see what your alternative research would do to that apparently false belief other than reinforce it.


Agreed. As I mentioned in an earlier thread on this scandal, [0] we already know that security bugs can make their way into the kernel.

[0] https://news.ycombinator.com/item?id=26888129


Check out my groundbreaking research. I smashed windows on 20 buildings and took cash out of their registers. In order to fix this vulnerability, I suggest you make everyone who passes by your building sign this piece of paper saying they won't smash your windows and take your money. I will happily receive your nearest Nobel prize now, thank you.

Signed, UMN Researchers.

Edit: Wait, the cops are here. We sincerely apologize for any harm our research group did to your business. Our goal was to identify issues with the windows on your buildings and we are very sorry that the method used in the “smashing windows to take cash” paper was inappropriate.


Related: there's a reason the Certified Ethical Hacker course places such emphasis on getting written permission before doing anything.

If you're messing with someone's systems, or (as in this case) with someone's processes, you don't get to claim to be the good guy unless they agreed to it before the fact. It's not rocket science.


> I smashed windows on 20 buildings and took cash out of their registers.

More like they posted pro-<insert your kink here> messages on your Facebook. There's some value in grounding the analogy in reality.


I think the cash theft is analogous to the dev time the researchers wasted.


What exactly is the value? How did you come to it? The pay rate of people who would be working on the software regardless of the UMN boondoggle is not wasted any more than any other day is wasted. This is part of development: dead ends, circling back, accounting, maintenance, et al.

> I think the cash theft is analogous to the dev time the researchers wasted.

That's a fantasy that developers would like to believe because they put an inappropriate valuation on the time spent on software. The inability to face this has been disturbing from the start.


This. And more so, they should be charged for the broken windows: dev hours times 80.


How else were they supposed to get that much exposure for their tier 2 CS program?


smash some windows in their own university


<s> Bother the freedesktop project? </s>


The stated motivations of the researchers make it seem like they are not interested in advancing the security of open source software, but rather undermining it.


Then "on behalf of whom" should be the next question.


Maybe it is more productive to just do the opposite: when vulnerabilities are found, investigate the committer(s) for possible links and the quality of their previous work.


Why would the Linux Foundation get to decide on if those researchers are allowed to experiment on and waste the time of volunteer developers?


I think it depends on how you frame it. If the Linux Foundation thinks this kind of research would generate useful information for the kernel project, then the developers' time wouldn't be wasted, just used in a different, yet productive, way. I concede that this is not an easy question, because the developers may have different opinions about the usefulness of this exercise, but at the end of the day, maintainers can run their projects how they see fit.


The Linux foundation is in a position to ensure that Linux users don’t accidentally get experimented on when the PRs are approved.


> Why would the Linux Foundation get to decide on if those researchers are allowed to experiment on and waste the time of volunteer developers?

Granted, in the case of Linux, it might make more sense if it were the combination of the Linux Foundation and Linus. It's their project; they can subject their volunteers to any tests they want. It may drive away some volunteers, but whether to take that risk is up to the project to decide. For something as big as the kernel, they may decide to get permission from the individual maintainers, maybe even limit the research to only the subsystems that agree to participate.

In any case, the point I'm trying to make is that this kind of testing may be beneficial if the project is aware of it and agrees to it. Who gets to decide on behalf of the project would depend on the hierarchy of each project.


There are relatively few volunteers working on Linux. They mostly do it for their day jobs. Though the same criticism applies with respect to wasting time at their various companies. But, yes, there may have been some way of conducting a useful test after receiving permission--probably on a limited subsystem or set of subsystems.


Presumably they would do so after consultation with kernel developers, like uhh... themselves.


Just to clarify: all of the proposals that were intentionally vulnerable, and really were vulnerabilities, were rejected in the first place. However, that event triggered a review of all University of Minnesota proposals, and that's what's being discussed here.


"ALL the proposals that were intentionally vulnerable and were really vulnerabilities were not accepted" The thing is there is no way you can actually know that. So this is not some kind of revenge. This is rather a valid precaution.


>The thing is there is no way you can actually know that.

We do know, since the researchers have told the Linux community, the university, and IEEE what those patches were. Please do not spread further misinformation about the case.

Please read the IEEE statement and the full Linux TAB review.

https://www.ieee-security.org/TC/SP2021/downloads/2021_PC_St...

https://lkml.org/lkml/2021/5/5/1244


Not trying to badmouth the university here, but having to trust the statement of those who broke your trust in the first place doesn't meet my definition of "knowing" something.

Maybe "believe" would be better used here? Knowing would mean to know precisely what each of these changes does and whether they open up new vulnerabilities and then having confidence that all is well. Gaining this confidence requires work. And unless you put that work in, you are left to trusting/believing.

(Edit: what I mean here is that the researchers could be totally nice and ethical people, but the Linux devs would still have to either take a risk by trusting them OR put in the work to check it all OR decide not to put in that work)


80 developers reviewing for a month isn't enough work for you? They just put more man-hours into reviewing these commits than the contributors put into writing them.


It'll be interesting to see their next paper, where they describe in great depth how to waste 80 developers' time for a month and still sneak in vulnerabilities.


If 80 devs all read the same part the same way, it won't make much of a difference.

maybe the issue is we aren't using static analysis enough / don't have enough static analysis tools?


> researchers could be totally nice and ethical people.

A thorough review of the IRB documents revealed potential problems in the description of the experiments, and concluded that insufficient details about the experimental study were provided to the IRB. [0]

[0] Read the above-linked IEEE response letter.


I just read that IEEE statement, I'm glad someone else has started noticing the paper was bullshit in addition to unethical:

> Investigation of these patches revealed that the description provided by the authors in the paper is ambiguous and in some cases misleading. The experiments do not provide convincing evidence to substantiate the main claims in the paper and the technical contributions of the work need to be revisited.

Interesting that they list ethical considerations added to the review process, but are not adding content-quality considerations to the review process. I think that's at least as embarrassing to the PC. You can say that they erred in assuming the university covered the ethics review, but what are they doing if they're accepting papers without checking that the papers support their own claims?


Yeah, the most hilarious part of the TAB report is that one of the hypocrite commits which was supposed to be incorrect was in fact accidentally correct because the authors failed to understand how the code worked. This was also the only commit which got accepted. This makes the conclusions and way data was presented in the paper extremely questionable.


> This makes the conclusions and way data was presented in the paper extremely questionable.

This makes them invalid. Their entire claim is based on malicious code entering the kernel, not anonymous/fake-name commits (which is a separate issue). If you take that away there is no actual paper, just a hypothesis which everyone already knew and which thus does not warrant this much fanfare. They added nothing to the research field and bothered people with it while contributing to no one but themselves.

This is what we around here now call "Diederik Stapelen" (since he is an infamous example here): faking your data, using people for your work, presenting it as true, and gaining from it. It is omnipresent in some fields and in my opinion should be met with severe repercussions, as it damages everyone else who was not part of it.


Err, the PC is already supposed to vet for quality. That they failed to do so adequately is not because the policies omitted that requirement, but probably more because Oakland receives many, many submissions.


This is awful:

> Based on the overall positive reviews and the recommendation of all reviewers to accept the work, the PC did not discuss this paper during the online PC meeting; [...] When, after acceptance, the authors tweeted the abstract of the work in November 2020, several people expressed concerns about human-subject research featured in this work. At that time, the PC chairs discussed these concerns [...]. As a result of these discussions, the PC chairs asked the authors to clarify the experiments with the University of Minnesota Institutional Review Board (IRB). We now acknowledge that this offer was a mistake

Basically: they did not review it and once it was published and people complained, they reacted. What's the point of the PC then?


What really happened (to the best of my understanding):

- Researchers submitted the paper to IEEE.

- Researchers tweeted about it.

- The tweet was deleted, because people pointed out it was bad human-subjects research (consent and deception).

- Other researchers, not from UMN, filed complaints with IEEE.

- The researchers misled (so it seems so far) the IRB; arguably the IRB failed to do its job and just rubber-stamped a human-subjects research exemption, after the research was conducted...

- The paper got accepted to IEEE.

- The researchers pushed more patches to the Linux kernel.

- Plonk email from Greg.

- UMN response letter indirectly blaming only the researchers but not the IRB.

- The paper got retracted from IEEE.

- IEEE response letter.

- We are here.


Well, apparently they needed the bad publicity and "overreaction" to actually do that.

The fact that they didn't communicate _clearly_ with the kernel about exactly which patches this was about at the time when they announced their paper is extremely icky, and makes my sympathy for later misunderstandings/misinformation very limited.


The entire incident has been disappointing.

The researchers conducting the research and thinking it was a good idea, the IRB review process at UMN, IEEE accepting the IRB exemption after ethical concerns were raised, and the overreaction from gregkh on the LKML.

Hopefully there is a silver lining and we see better research collaboration between the kernel devs and researchers going forward. IEEE has a job to do around all of this as well.


> the overreaction from gregkh on the LKML

I don't think it was an overreaction. I think it was a very valid reaction.


The sort of reaction an emperor has upon finding their shiny new clothes don't actually exist. It's bad the tailors sold fake clothes, but it's also bad the emperor and co. didn't notice sooner.

Here we saw the emperor go on a full warpath against the entire place of origin of a couple of misguided individuals. Not a proportionate response.


Institutional Review Board dropped the ball and failed at their job. Which reflects on the institution. It's not just about individuals.


Eh, maybe, but at the same time, I kind of got the feeling that the researchers were treating the IRB as one more layer of security to get around.

The IRB is a gatekeeper, yes, but it is also a resource. You should be working with the IRB to make sure everything you are doing is above board, because ultimately, if you do something unethical or harmful to individuals or society, that's still on you, even if you got your plan stamped by the IRB. You shouldn't have an antagonistic relationship where you use vague, technically accurate but misleading descriptions of your research in an attempt to get a "get out of consequences free" card.


> Eh, maybe, but at the same time, I kind of got the feeling that the researchers were treating the IRB as one more layer of security to get around.

Which suggests an entirely new line of inquiry: testing IRB procedures to see whether unethical proposals will be approved if described in misleading terms.

While we're at it, we should check to see whether proposals ostensibly from people who are not faculty or students, or who perhaps don't even exist, are ever approved.

Regardless of the results, we can conclude that having to sign a statement promising to never do unethical research before submitting a proposal, and insisting on identity verification before proposals are reviewed, will lower the chances of unethical research proposals slipping through.


The emperor doesn't lack for clothing; it's just only bulletproof up to a point, and a lot of security is actually down to trust. So when an apparently trusted individual walked up and put a rifle to the emperor's chest, the bullet got through. Nothing of value was learned. Everyone on earth knew a bad patch pushed by a trusted party could potentially get through.


There are a few issues here: blaming someone for sending known malicious patches when that wasn't the case just causes confusion and is outright wrong. Brad Spengler is a... character, but I do agree with him that Greg started out on this wrong. (He also goes a bit further with his criticism, which I don't agree with.)

However, yes, review of the patches was proper, but all this could have been done with fewer unfounded accusations from Greg's side.


From the outside, Greg appears to have behaved like a saint through this entire process. The university, professors, and students involved in this calamity have demonstrated precisely why software developers really need to be bound by ethics.


Can you provide specifics? What exactly do you mean by unfounded accusations?

Really curious who is making unfounded accusations...


https://lore.kernel.org/linux-nfs/YH5%2Fi7OvsjSmqADv@kroah.c...

And the resulting conversations between Aditya Pakki and Greg. Aditya was never part of the hypocrite-commit research; accusing them of this is just bad.


The "hypocrite commit" project was using the kernel developers to further Kangjie Lu's career goals. Aditya was using the kernel developers' time to further his research project, his static analyzer. There's no material difference.


That's the problem with bad-faith action by an institution... It casts a shadow on the actions of all the other agents of the institution.

Greg's concerns proved to be overblown, but at the time they were raised he had reason to believe them valid.


That's easy to say now.

At the time, UMN had not "debriefed" the kernel maintainers they lied to in the "hypocrite commits".

And then Pakki didn't mention that these new patches were generated by a tool, and submitted a bunch of "nonsense" patches with no explanation.

What reason did GKH have to trust anyone associated with the "hypocrite commits" at that point?

(ps. don't think you deserve all the downvotes here...)


I said the same thing then. The pre-print was public at that point and I had been following it since December; sharing a thesis adviser doesn't make you associated with research conducted months prior.

Yes, they are bad patches and the research is also questionable. But this is not bad faith nor malicious.


Are you Aditya? You write quite a bit like them.

Do you want to take a moment and explain how your broken static analyzer isn't involved?


Please google me. You'll quickly learn I'm a 26 year old Norwegian FOSS maintainer named "Morten Linderud".

Or, y'know. Click on the profile.


Passionate! I like it, Morten. It's rare.

I did click your profile, but this is a post about deception. Trust no one.


They lied in their abstract: https://twitter.com/SarahJamieLewis/status/13848760502079406...

They lied to their IRB.

What evidence do you have that they're telling the truth now?


> We do know since the researchers have told the linux community

Replace researchers with hackers, CIA, North Korea, China, etc. Do you still have the warm fuzzies?

The fact of the matter is they broke the trust and your expectation is that since they've been exposed they can be trusted again?

No, if anything the event has shown that additional vetting and layers of scrutiny might be needed to protect against bad actors in the future should they be malicious.

* I type good.


Sorry, why would you believe the words of bad actors who are known to lie? Can you be absolutely certain this isn't simply a continuation of their twisted "experiment"?


> We do know, since the researchers have told the Linux community, the university, and IEEE what those patches were. Please do not spread further misinformation about the case.

We should just trust the liars when they pinky swear to tell the truth? There's a certain saying about what you win when you play shitty games that comes to mind. Being deceptive causes people to lose trust in you.

Or, my personal favorite: fuck around, find out. UMN fucked around, now they're finding out.


"Please do not spread further misinformation about the case."

Let's assume that most people are giving their opinions to the best of their knowledge. We should be careful when telling someone to stop spreading misinformation, as this is how fascism starts: "I am right, you are wrong, stop talking!"


Even when people are giving their opinions to their best knowledge they can be spreading misinformation.


Calling everything that's at odds with one's beliefs misinformation is stupid in the first place. And I believe the word is used precisely in this sense most of the time. Definitely here.


Is it forbidden to make mistakes? In such cases, is it a good idea to respond to your colleagues with "Please stop making mistakes!" as the parent did?


No, but it may be reasonable to ask 'please do not repeat this particular mistake'


But in this particular case, the discussion centers around a matter of opinion: "should trust be extended to this particular set of statements from a known deceiver?"

Throwing around accusations of spreading misinformation to further your opinion in an argument makes it harder to call out real misinformation.


Did you ever say such a thing to someone who committed an error? How would you feel if someone (your partner, your colleague, your boss) said such a thing to you?


I can only suggest that you stop spreading your misbeliefs about the case. And also stop using the buzzwords the meaning of which you apparently do not understand.


Ah, so we're just supposed to trust the same people who tried introducing the vulnerabilities in the first place...


These are not independent malicious actors. These are researchers and students at a University. FUD does not serve any purpose here.

And for full disclosure, I'm one of the four authors of the original complaint to IEEE back in December about the research. I fully believe all the facts have been put forth and there is no reason to spread misinformation about the incident.


>These are not independent malicious actors. These are researchers and students at a University.

They wanted to prove that others are too trusting, and they got exactly what they wanted: heightened suspicion of things that are normally expected to be done in good faith.

I understand that inside the academic world the status imparted by "researchers and students at a University" is significant and important. For those of us outside the academic world, it's prudent and has no downside for us to drop our perception of them below the floor you'll continue granting them.


This situation comes on top of the longstanding disconnect between academic researchers and applied-science folks. I say "disconnect" as if that were a two-way street, but most of the spherical-cow thinkers are on the academic side. People doing real work bristle at their lack of exposure to reality. I worked while still in school, and the difference was literally and figuratively night and day. I quickly became an informed consumer, and it meant that sometimes in class I was sitting on my hands to keep from disagreeing with the teachers about why something is done a certain way. It was shocking how often they were wrong, and occasionally ass backward. Ultimately I couldn't get out of there fast enough.

But season this old recipe heavily with some “fuck around and find out” and things get pretty spicy.


The fact that this paper was in the best 5% of papers submitted to this IEEE conference[1] tells you all you need to know about academia.

[1] https://www.ieee-security.org/TC/SP2021/downloads/2021_PC_St...


That's exactly the opposite of what the initial comment of this thread suggests.


Christ the snark is infuriating.

And pointless?


Just from a code quality process standpoint, that’s an interesting result. Now I’m wondering what would happen if you picked a set of 150 random kernel patches and told 80 reviewers to re-review them assuming they could be malicious. I bet you’d find quite a few fixes.


Probably, but the fact that maintainers are stretched thin when it comes to time for code review is not really news.


That is the OpenBSD model. Or at least it was 10 years ago, when I last looked into it.


I was wondering the same thing, and it appears it would be worth the effort, but getting enough high quality reviewers would be a problem.

Maybe the NCAA could organize competitive code reviewing leagues? I bet you would get e.g. a highly motivated Caltech team reviewing USC contributed patches, and vice versa.


I'd suggest it sounds like a perfect project for some university students, but that may not go over very well right now.


This would be an interesting study.


Isn't the moral of the story here that it's probably trivial for organizations like the FSB/NSA/Chinese equivalent to get malicious patches accepted into Linux?


Not after Linux banned University of Minnesota from submitting patches. We're safe now.


They already have malicious backdoors at the BIOS level (ref: Intel ME).


As usual, The Onion anticipated this: https://youtu.be/FpN_RjIaVw8


Maybe the people reviewing the changes should be unaware of who made them. I am biased and for sure look more closely at some developers' pull requests than others'.


For the sake of spotting malicious actors, wouldn't it make more sense for reviewers to be aware, so they can focus their attention on patches from untrusted sources?


Honestly, fsck these "pen testers"; spend some time trying to make things better rather than breaking shit and finger-pointing.

Starting to feel like pen testing is a broken profession.


These were not pen testers by profession.


So what are we doing about all the maintainers that initially approved these commits that were later found to be incorrect?


The same thing we do to a carpenter whose door gets broken into in a robbery: nothing, unless they were inexcusably incompetent or complicit.


The moral of this story is this: whatever you do, don't be the dweeb that gets their code closely reviewed by the kernel maintainers. (You are actually better off having it reviewed tho)


Anyone else notice the site linked has the most crazy racist wigged out psychos on the comments/forums? Yikes.


The guilt by association here is now approaching Biblical scales. Like the guy in the comments who wants to close down the entire Computer Science and Electrical Engineering departments at UMN, which probably employ/educate the best part of a thousand people. Ha!

In general, the fury and seethe which this experiment inspired is amazing. IMO the real disgrace is not the experiment itself, but the response. The kernel developers need to stop being martyrs and playing blame games. They need to be rational and take responsibility for improving their own procedures. Because, if they didn't already have them, governments now have entire departments studying how to use deliberate vulnerabilities in open source projects, for military intelligence and other purposes. And they will not be deterred by the continued public flogging of the University of Minnesota.


They lied in their abstract [0], they lied to their IRB, and Kangjie Lu's aspirations have been wasting a lot of Linux maintainer time.

The real disgrace was the experiment. The response was fairly natural for humans with imperfect information being experimented on.

Please be more specific. How do you think the Linux kernel maintainers should improve their procedures to prevent clownshows like this in the future? No points for mentioning things that they already do or assuming infinite maintainer hours.

[0]: https://twitter.com/SarahJamieLewis/status/13848760502079406...


I just don't understand why open source software can continue with the assumption that every single contributor is honest. Imagine if this philosophy was applied to, say, cryptography or network security. What would be the state of encryption algorithms and key exchange protocols and so on, if their developers had a meltdown at the mere suggestion of there existing a liar?

Since the Linux kernel is installed on many millions of computers, it is obviously pretty important that it doesn't have bugs in it. Certainly not malicious bugs. And if all it takes is a couple grad students and an assistant prof to get them in...well, that reflects very poorly on the state of kernel maintenance, to me. Which seems far more important and deserving of attention, than endlessly arraigning three clueless guys at some university.

I'm not in a position to be more specific about what should be fixed. But, what would your answer be to your query? Apparently, do nothing, and assume that everyone in the world is honest, while writing self-indulgent "public letters" about it? How is that going to help when the CCP tries to insert surveillance into the kernel? Or when Russian hackers try to get exploits and ransomware in there?


> assumption that every single contributor is honest.

There is no such assumption and it has been well known for a long time that such an assumption would be harmful.

> if all it takes is a couple grad students and an assistant prof to get them in

There were 0 malicious commits that made it through the review process (since the paper was incompetent as well as unethical.)

You seem to be missing some basic facts here. Filling in those gaps would help you participate more productively in the conversation.


> I just don't understand why open source software can continue with the assumption that every single contributor is honest

It doesn't. They've been on the lookout since before 2003: https://lwn.net/Articles/57135/ (there are other examples, this is just the earliest I know of)

> Apparently, do nothing, and assume that everyone in the world is honest, while writing self-indulgent "public letters" about it?

This sounds overly dramatic which makes discussion difficult. My answer would be that the kernel maintainers have known about this threat vector for a very long time and seem to be doing a reasonable job of repelling it.


> I just don't understand why open source software can continue with the assumption that every single contributor is honest

It's not a bare assumption of honesty; it's an established relationship of trust. If you have zero trust then you cannot have any collaboration with other humans; it would by definition take as much or more effort than any creation to verify that the creation is fully safe in all current and potential future contexts.

That trust with the university was broken, and as a result their work has been reviewed and often rejected.


Most maintainers are volunteers, meaning that they spend their evenings, weekends, and holidays developing the kernel because they like it; it gives them a sense of worth and fulfillment that can be hard to come across.

I don't know if you've ever talked to one, but they take a real pride in their work, most make a pitiful salary in comparison to FAANG levels but they still do it because it is full of interesting challenges you can't find anywhere else.

UMN broke that trust. No one is asking the departments to close, least of all the maintainers, but you have to understand that kernel devs are not a faceless machine. They spend their limited time on something that is used to make a prodigious amount of money while never really getting any. In that context they are more than within their rights to be livid.


I don't dispute that the researchers were dishonest and broke the trust of the kernel maintainers. So they can be a bit peeved at UMN. But is this sort of thing not the reality of their operation? That liars exist in the world? I don't really see what the volunteer part has to do with it. If anything it means they have less to complain about. I mean, most soldiers are volunteers, which would make it even more absurd if someone signed up to be a soldier and then whined and complained when enemy soldiers shot at him. It seems equally obvious to me that an open network should be assumed to have malicious actors on it, as that a battlefield should be assumed to have enemies on it. Obviously you don't have to like the enemies.


I don't have internal info so this is just a hypothesis, but I think they absolutely do expect adversarial commits, and if you tried to get something accepted today with no affiliation or anything, you would need to talk to multiple maintainers and your commits would be under scrutiny.

Here the "contributors" had made multiple commits and were coming from a university that had previously upstreamed several commits. There was and should be an expectation of trust, because you can't scrutinize every commit for several hours (they just don't have enough maintainers for it).


Trust is not a binary.

You must trust contributors to your project to some extent. If you don't extend some trust, you can't have contributors. That level of trust is then adjusted off that base level based on experience.

It is perfectly reasonable to drop someone below your base level of trust if they lie to you. This doesn't necessarily mean that the base level of trust needs to be adjusted.

In this case, the review process caught all the known harmful commits (which were from anonymous emails and so received base-level trust), and thus the baseline level of trust seems to be working.


> In that context they are more than within their rights to be livid.

They do, and to a very large extent the same exact overreaction happens in private orgs; we just get to see it when it's the Linux kernel. But at the end of the day, there was an overreaction, and it was public. Both UMN and the kernel maintainers need to step up, make improvements now, and move forward with level heads.

Have any of the kernel maintainers acknowledged the primary concern behind the misguided research? That is, (quoting parent comment) "governments now have entire departments studying how to use deliberate vulnerabilities in open source projects"?


I don't think it's really fair to pin some random person's bad take on the kernel developers. Disproportionate calls for mob justice have always been and will always be popular, especially on the internet, where it's very easy to leave casual comments without thinking about it too hard (and you can never really tell someone's age).



