
No, there wasn't. No malware was being spread en masse this way. It was an entirely fictitious threat, as it would have required convincing the user to enter a confusing part of the app (about:config) and correctly change a setting.
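(For reference, the setting in question is, if I remember correctly, xpinstall.signatures.required; the user would have had to open about:config, find that pref, and flip it to false, i.e. end up with something like

    xpinstall.signatures.required = false

in their prefs.)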

I can accept that there might have been a malware problem back when signatures weren't required by default, but that's not the change I'm talking about. The change I'm talking about is ignoring a user's explicit preference that they're okay with unsigned add-ons.

Remember, Chrome allows you to turn off this setting by just flipping a switch into dev mode. Where's the torrent of compromised Chrome browsers from this vector?
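(Concretely, that's the Developer mode toggle on chrome://extensions plus "Load unpacked"; if I remember right you can also do it per launch with a command-line flag along the lines of

    chrome --load-extension=/path/to/unpacked_extension

so the escape hatch is hardly hidden.)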

>I don't think asking power-users to use Nightly/"unbranded" versions of Firefox to load unsigned extensions is something you should count against them.

It is when that version doesn't get the same updates and has to be maintained separately.



Hi I work on a security team that hunts for malware. Malicious extensions are a huge threat - totally happens for Chrome in particular, and I've even seen malware package old versions of browsers to get around the modern defenses.
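To give a feel for what that hunting looks like, a first-pass triage of what's been dropped into a Chrome profile can be as rough as the sketch below (the profile path is the default Linux location and the manifest fields are assumptions; Windows/macOS installs keep things elsewhere):

    # Rough triage of extensions present in a Chrome profile.
    import json
    from pathlib import Path

    # Default Linux profile location (an assumption; adjust per OS).
    EXT_DIR = Path.home() / ".config/google-chrome/Default/Extensions"

    # On-disk layout is Extensions/<extension-id>/<version>/manifest.json
    for manifest in EXT_DIR.glob("*/*/manifest.json"):
        data = json.loads(manifest.read_text(encoding="utf-8"))
        ext_id = manifest.parts[-3]           # the extension ID directory
        name = data.get("name", "?")          # may be a __MSG_...__ placeholder
        perms = data.get("permissions", [])
        print(f"{ext_id}  {name}  perms={perms}")

Anything with broad host permissions that nobody remembers installing gets a closer look.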


Someone distributing a different product to get around your current product's defenses is outside this threat model.


No it isn't, there isn't any specified threat model anyway.

What we're talking about is whether malicious extensions are something attackers want to use. Forcing the attacker to package an entire browser is a win for defenders - it's super noisy and means there's a huge binary to lug around.


>No it isn't, there isn't any specified threat model anyway.

Well, yeah, in the sense that Mozilla people don't really think through what threat model they're protecting against here.

>What we're talking about is whether malicious extensions are something attackers want to use. Forcing the attacker to package an entire browser is a win for defenders - it's super noisy and means there's a huge binary to lug around.

The vast majority of that benefit comes from requiring signatures by default, not from also blocking the barely measurable fraction of users who knowingly disable that protection and then get pwned.


I think you're missing my point, so let's specify a bit more of a threat model.

Attacker has code execution on your system and wants to maintain persistence and exfiltrate sensitive browser data. Sounds reasonable for Mozilla - at least, it's not totally nuts of them to consider this in their threat model.

One avenue, and a popular one, is to then sideload a malicious extension. An attacker who can disable the extension check can do this easily. An attacker who can't has to resort to other means - packaging a separate payload to host the extension.

Does that sound reasonable? I don't want to argue, just to explain my perspective on this issue based on the attacks I have seen.


I'm referring to native malware abusing admin privileges to install extensions without the user's consent, not users deliberately installing malware extensions themselves. This was particularly bad in the XP/Vista era (I know I had to remove some rogue extensions from my relatives' computers during that time). If the signature check were a flag, the malware would just disable it at the same time it installed the extension (remember, it has admin privileges, so no user interaction would be required).
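To make that concrete, here's a simplified, hypothetical sketch of why a pref-based check buys you nothing against malware that already has admin rights (every path and filename below is made up for illustration):

    # Hypothetical sketch only: malware running with admin rights sideloads an
    # add-on and flips the (hypothetically pref-controlled) signature check.
    import shutil
    from pathlib import Path

    # Made-up profile path; real malware would enumerate the profiles it finds.
    profile = Path.home() / "AppData/Roaming/Mozilla/Firefox/Profiles/abcd1234.default"

    # 1. Drop the add-on into the profile; the filename is the add-on's ID.
    ext_dir = profile / "extensions"
    ext_dir.mkdir(exist_ok=True)
    shutil.copy("payload.xpi", ext_dir / "evil@example.com.xpi")  # made-up names

    # 2. Turn the "protection" off in the same step.
    with open(profile / "user.js", "a", encoding="utf-8") as f:
        f.write('user_pref("xpinstall.signatures.required", false);\n')

No user interaction anywhere, which is exactly why the check can't just be a flag.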

Additionally, keep in mind that when the signature requirement was added there were still XUL/XPCOM extensions, which could hook much deeper into the browser than Chrome-style extensions and wreak far more havoc.


>I'm referring to native malware abusing admin privileges to install extensions without the user's consent, not users deliberately installing malware extensions themselves.

Ah, so a vector that mandatory code signing doesn't protect against.


Signatures absolutely do protect the user in this scenario. With mandatory signatures you can't get (obvious) malware into the browser without having it first approved by Mozilla (who should reject it upon review).
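(As an aside, it's also easy to check from the outside whether a given .xpi has been through that signing step - a rough presence check, not cryptographic verification, looks something like:

    # Rough check: does this .xpi carry Mozilla's signature files?
    # Only the presence of the files is checked, not their validity.
    import sys
    import zipfile

    def looks_signed(xpi_path: str) -> bool:
        with zipfile.ZipFile(xpi_path) as z:
            names = set(z.namelist())
        # Classic signatures ship META-INF/mozilla.rsa; newer ones also add
        # META-INF/cose.sig. Either suggests the add-on went through Mozilla signing.
        return "META-INF/mozilla.rsa" in names or "META-INF/cose.sig" in names

    if __name__ == "__main__":
        print(looks_signed(sys.argv[1]))

which is handy when you're cleaning up a profile by hand.)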


Okay I misunderstood what you meant by native malware.

But if your threat model is that extensions can be added without the user's consent, then that is the vulnerability you should fix. And it still wouldn't justify blocking a user who is aware of the risk and chooses to disable that layer of default protection.



