Greg Kroah-Hartman dropped a bombshell on the open source software community by banning users with University of Minnesota email addresses or known affiliations from contributing to the Linux kernel project.
I'll come out and say that I'm on the side of the Linux kernel community, but their reaction may have been a bit much.
Either way, I think it's worth walking through exactly what happened and why the reaction was so visceral.
What is the Linux Kernel?
The kernel is the core component of an operating system (the thing called "Windows" or "MacOS"). It is the part of the computer that translates user actions into hardware instructions, which basically means that it has complete control over the system: nothing touches memory or gets processing time unless the kernel passes that request along to the hardware.
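To make that concrete, here is a minimal sketch in C (an ordinary user program I made up for this post, not kernel code) of what "asking the kernel" looks like in practice. The program never drives the screen itself; it hands the request to the kernel through a system call and the kernel does the rest.

```c
#include <unistd.h>   /* write() wraps the "write" system call */

int main(void)
{
    const char msg[] = "hello from user space\n";

    /* The program cannot print to the screen on its own. write() traps
     * into the kernel, the kernel checks that this process is allowed to
     * use file descriptor 1 (standard output), and only then does the
     * kernel talk to the actual hardware. */
    write(1, msg, sizeof(msg) - 1);
    return 0;
}
```

Every keystroke, network packet, and file read goes through the same kind of gatekeeping, which is why the kernel being correct matters so much.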
Bugs in a system's kernel are often catastrophic. A person could use an exploit to elevate their privileges to administrator level for any operation, allowing them to do anything they wanted on the computer. They could mine bitcoin, read your passwords as you type them, or just delete files randomly. The possibilities are endless.
Linux is an operating system. Most people use Windows or MacOS machines in their personal lives, but Linux runs almost everything else in the world. Almost all of the servers that make up "the cloud" run some form of Linux, and thus the world's most critical infrastructure runs on Linux.
The Linux kernel is therefore the piece of software that lets the world's most important software actually run tasks on hardware. When bugs are found in the kernel, all of that infrastructure is at risk. It's not an overstatement to say that the Linux kernel is one of the most important pieces of software in the world.
How is the Linux Kernel Developed?
The Linux kernel is an open source project developed over email exchanges, most (if not all) of which are publicly visible. While most developers are familiar with the GitHub approach to reviewing code, few young developers know how to interact with email-based reviews, and I am not one of the few who do!
But the principles of development don't change:
- A contributor writes a fix or a new feature (a sketch of what a small fix might look like follows this list)
- They then send the set of code changes (a "patch") to the mailing list with a description of what the change aims to accomplish
- People review the patch. This and the previous step can repeat many times
- A maintainer, someone with deep knowledge of the area of code being changed, approves or rejects the patch. If approved, the patch is merged into the main codebase
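To give a feel for step one, here is a made-up example in C (not a real kernel patch): the change a contributor sends is usually small and focused, like adding a missing check.

```c
#include <stddef.h>

struct device { int power_state; };

/* Before the patch: passing a NULL device crashes. */
int get_power_state_old(struct device *dev)
{
    return dev->power_state;
}

/* After the patch: the contributor adds a check and returns an error code
 * instead. The "patch" that gets emailed out is just these few changed
 * lines plus a description of why the change is needed. */
int get_power_state(struct device *dev)
{
    if (dev == NULL)
        return -1;
    return dev->power_state;
}
```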
Some developers may disagree with my wording here, but I'm not writing this for you. I'm trying to explain this to my mom.
What Did UMN Researchers Do?
Researchers at the University of Minnesota decided to experiment on the Linux kernel community by introducing "hypocrite commits": code changes that claimed to do one thing but exposed a security vulnerability as a side effect. The changes were kept small so as not to take too much of the maintainers' time, each one did include a fix for a real bug, and the researchers intended to intercept the approval process before any of the code could be merged into the main codebase.
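To illustrate the idea, here is a hypothetical sketch in C (invented for this post, not one of the actual UMN patches). The submitted change looks like tidy error handling, but it quietly introduces a use-after-free, the kind of memory bug these patches aimed for.

```c
#include <stdlib.h>
#include <string.h>

struct session {
    char name[32];
};

/* The "fix" adds what looks like sensible error handling to a function
 * that initializes a session. */
int setup_session(struct session *s, const char *name)
{
    if (name == NULL) {
        /* New error path added by the patch. Freeing s here looks like a
         * tidy cleanup, but the caller still holds a pointer to s and will
         * keep using it, so this "fix" has just introduced a
         * use-after-free vulnerability as a side effect. */
        free(s);
        return -1;
    }

    strncpy(s->name, name, sizeof(s->name) - 1);
    s->name[sizeof(s->name) - 1] = '\0';
    return 0;
}
```

A reviewer skimming the patch sees the visible intent (handle a missing name), while the dangerous part only matters in code they are not looking at, which is exactly why changes like this are hard to catch.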
In their paper, they claim that this work does not constitute research on humans but only examines the patching process (this is on page 9), and the university's review board agreed with them.
You can imagine the Linux kernel community was pretty upset when this paper came out.
Maybe most insulting to the software community (outside of not being considered humans, of course), the paper's conclusion was that open source projects should add a clause to their contributing guidelines requiring that contributors "agree to not intend to introduce bugs". As if the "Are you a terrorist?" checkbox on the customs form has ever caught anyone.
What is White Hat Hacking?
Intentionally introducing bugs into a system or seeking exploits in that system is bad, right? Why is a university bragging about breaking into the Linux kernel?
Security researchers often fancy themselves white hat hackers: people who attack systems to find vulnerabilities and help the victims fix the holes they find. But they usually don't follow the standards of ethical hacking and end up solidly in the gray or even black hat category of hackers.
A white hat hacker asks for permission to try to exploit a system. If and when they find an issue, they notify the party and try to help them find a solution. Black hat hackers do not ask for permission, and when they do find an exploit, they do not notify the victim and instead use that information for some personal gain.
Gray hat hackers do not use the exploit for personal gain, but they usually don't ask for permission and often don't notify the victim that an exploit was found. A gray hat hacker's goal is to find an exploit, not to use it or fix it. And it turns out most computer science research falls into this category (or so I would assume, based on the papers I've read).
Today
The problem is that you can't know a hacker's hat color while you are being attacked, so you must assume the worst. And when a gray or white hat hacker comes along and finds an exploit tied to a specific source, you go looking for every place that source has touched in the past and become immediately suspicious of everything it has done.
This is what the University of Minnesota brought upon itself. By experimenting on the behavior of a group of humans without asking for permission and threatening the security of one of the most important pieces of software on the planet, they forced the Linux kernel maintainers to go back through all previous work submitted by the university.
But the straw that broke the camel's back was a patch submitted by a researcher with the same advisor as the one who published the initial paper. The patch itself did nothing useful, but because it came from the University of Minnesota, the kernel community had to assume it had not been made in good faith, and decided to set a precedent for any organization that might want to experiment on them again:
Because of this, I will now have to ban all future contributions from your University and rip out your previous contributions, as they were obviously submitted in bad-faith with the intent to cause problems.
Other Opinions
While I do think that the Linux kernel's reaction was probably correct (no open source project has had the nuts to stand up to contributors like this before), I think it's worth examining how much work they have created for themselves. Some pretty prominent people are coming out and saying that the ban may have been understandable, but that removing previous work is only going to cause more issues. Here is one choice part of a very good thread on Twitter:
Several other important things were brought up: reverting previous submissions risks re-exposing old vulnerabilities. If a version of the Linux kernel ships with one of those vulnerabilities reintroduced, attackers will not only know that it exists but likely still have the exploit tools lying around that worked before the fix landed! The maintainers of the kernel have given themselves hundreds of hours of work to fix something that isn't broken.
To top it off: the patches that the paper referenced never made it into the main codebase, as the researchers made sure to stop them before they received final approval. I've seen some people come out and say that this by itself should have spared UMN the ban: no harm, no foul. Many countered by noting that this group chose to prevent the patches from merging; what happens when another group does the "next step" of research and lets known vulnerabilities get merged?