
Should Mozilla (or any other group) get to decide what I can or cannot do to a process running on my hardware, even if it originated from them? Things like this make me feel uneasy because it seems to be sitting on the edge of a slippery slope.

Another example of these "silently user-hostile" applications is VirtualBox; it spawns multiple processes in a similar way, and even resists attempts to debug it if it's already running. I remember being incensed by it complaining that my system DLLs (patched to make the OS work for me and no one else) didn't match its internal whitelist, and refusing to start; that lasted only until I exercised the same skills to patch those checks out of it.



I don't understand how you can be so negative.

The title literally says it is giving users that power. This is empowering you, by giving you the ability to see which DLLs are being injected into Firefox.

They do block some by default, certainly, but only ones which in practice just crash Firefox (often because they are old).


Needing to be "given" something that should be, for lack of better phrasing, a given is not a good sign.

> They do block some by default, certainly, but only ones which in practice just crash Firefox (often because they are old).

Is there a way to unblock them as easily as blocking them?


Yeah, it should be a given that you have control over this. Blame your OS for the fact that you don't have it by default. It doesn't make any sense to get mad at Firefox for implementing this. They're not interfering with your control. If they did nothing, you would have no control.


Interestingly enough, Windows has had this feature since Windows 10, but to engage it, the PE flag disabling injection has to be set or an exception has to be added in the security settings.

The latter is how Microsoft secures Protected Container in Edge.

It's not particularly easy to set up and is just a complete block. The options are called Arbitrary Code Guard, Code Integrity Guard and (for old versions) Disable extension points.

https://learn.microsoft.com/en-us/microsoft-365/security/def...


I can't test this, but I suppose it should be as easy as going to about:third-party and clicking a button next to a blocked entry. It's probably worth preemptively learning how to reset this in case Firefox won't start anymore.

As long as it's all end user-controlled, I suppose the only issue is awareness that this feature exists and how it's controlled (this is the first time I've read about it).

Also, it feels somewhat weird to have this function in an application rather than at the OS level (or as an independent process-firewall application) - but I suppose this is fine.


That would imply I had the choice to begin with. On my work PC, they inject various plugins to enforce security policy. I imagine if I tried to circumvent that, the security software would just terminate the browser entirely. Not going to try, it's not worth it for me.


It looks like you get to decide which DLL can stay and which one will be blocked from loading. When do you expect the shiny ball of control to be taken away from users?


Have you ever maintained a product popular enough that lots of people try to hack into and abuse? If yes, you should understand the reasons for protective measures like this.


Like Chrome, Windows, Android, iOS, etc.? We all understand that companies want to protect us from "evildoers". But it's a similar situation as with any censorship.

Why not allow users to decide themselves?


That's exactly what they did. Have you read the article?


Did we read different articles?

This functionality is enabled by default. That's why I'm asking - why not ask the user first?

Upd: please be civil, this is a discussion and not a competition.


Well, as the article said, there is a user-selectable option, so the answer to "Why not allow users to decide themselves?" is "they did".

As for the default, that's a different question, unrelated to user freedom. Unfortunately, on Windows, DLL injection is very often abused for no good reason. Imagine a simple, non-technical user getting asked about permitting a mouse driver DLL... what would the user think? With the only information being the DLL name/description, they might think that the mouse will not work without it and permit it (see https://news.ycombinator.com/item?id=36681873 for the real reason). And once Firefox starts crashing, will they know how to disable it?

No, it is much safer to disable it all by default. And if a power user needs a specific DLL, they can enable it. As a bonus, they will learn where the dialog is, so they can block it again once the crashes start.
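The "block by default, let power users override" policy argued for above can be sketched as follows. This is a hypothetical illustration only, not Firefox's actual code; the function and variable names, and the example DLL names, are all invented:

```python
# Hypothetical sketch of a default-deny DLL load policy with user overrides.
# NOT Firefox's real implementation; names and data are invented.

# DLLs known (here: hard-coded) to crash the browser in practice.
KNOWN_CRASHERS = {"oldmousehook.dll", "legacy_ime.dll"}  # made-up examples

def should_block(dll_name: str, user_overrides: dict) -> bool:
    """Return True if loading this DLL should be blocked."""
    name = dll_name.lower()
    # An explicit user decision always wins, in either direction.
    if name in user_overrides:
        return user_overrides[name]
    # Otherwise, block only the known crashers; allow everything else.
    return name in KNOWN_CRASHERS

# A power user re-enables a blocked DLL from the (hypothetical) settings UI:
overrides = {"oldmousehook.dll": False}  # False = do not block
assert should_block("OldMouseHook.dll", overrides) is False  # user override wins
assert should_block("legacy_ime.dll", overrides) is True     # still blocked
assert should_block("random.dll", overrides) is False        # unknown DLLs allowed
```

The key property is that the default list only ever narrows what loads; the user's decision, once made, always takes precedence.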


This is really minor compared to other parts of Firefox. Notably, Firefox requires all extensions to be signed by Mozilla.

And the only way to turn that restriction off is to use either Firefox Nightly or Firefox Developer Edition (which is a beta). If you want to use stable Firefox, because you like having a stable web browser, you just can't turn the restriction off. Period. The closest thing you can do is install it as a "temporary extension", which uninstalls itself when the browser restarts.

It's kind of ridiculous that the default browser of most Linux distros has an Apple-esque mentality of "we get to tell you what you're allowed to install."


I think it's unreasonable to assume the vast, vast majority of users, including technically literate ones, are likely to have an in-depth understanding of any specific software's installation.

Requiring a few hoops - and e.g. requiring a developer edition sounds like such a hoop - to ensure a likely misconfiguration was actually intentional and the user is capable of dealing with the consequences is not a bad idea.

In particular, debugging when things go wrong can be an insurmountable task.

Better isolation of software components has been a trend for decades; to the point where I think we can safely say that the old unix and windows model of permissions was a fundamentally insufficient idea. Devs flock to using VMs and containers precisely because uncontrolled interaction between stuff even controlled by the same nominal "user" is a huge pain - and that's before malware and privacy concerns come in.

Were software more isolated by default, and interaction more controlled and/or explicit, then indeed I think the argument against this kind of controlling-the-"user" features would be stronger. But as is? The alternative is clearly much, much worse.

After all, certainly here and often in other cases too - it's not like it's actually impossible to circumvent these restrictions. It's simply technically inconvenient in a way that happens to also prevent many unintentional bugs and some malware vectors.

In an ideal world, the devs of a piece of software would be hard-pressed to even do this, let alone feel the need to do it. But that's just not the world we live in; we're not even really close yet - except on really locked-down platforms that go much further than needed to prevent the risks, and into the territory of quite openly restricting the user, not the software.


> Requiring a few hoops - and e.g. requiring a developer edition sounds like such a hoop - to ensure a likely misconfiguration was actually intentional and the user is capable of dealing with the consequences is not a bad idea.

Except when such hoops then start being used as evidence the user is an undesirable and should be kept away from various services. See e.g. many Android apps refusing to work on rooted phones.

I specifically don't like bucketing things like these under "development" label - "dev mode", "dev build", "dev edition", etc. - because it creates the idea that those "dev capabilities" are there to help developers with development, and should very much not be used for non-development things.


I share your concern here, but I can't see the resolution being to allow every bit of software to alter any and all user data and alter the execution of any other software a user is running. That's where we came from, and the number of untrustworthy dependencies is so large nowadays that this kind of approach is not just unsafe; even without malware, it's also unreliable and unpredictable.

There _will_ be constraints on running programs altering other stuff. Sandboxes _will_ get even stricter. The benefits are so large that this trend will inevitably continue, and rightly so.

To protect the ability to tinker we'll need to instead talk about who gets to control those sandboxes, ultimately. How can we poke holes without allowing abuse by malware or creative (ab)use rendering the system pointless? How can we ensure the poked-holes exist by user choice, not at the behest of a tiny handful of software behemoths?

On a technical level, I don't think that arbitrary and surreptitious dll injection is a line in the sand worth defending. It's not a great abstraction; it's tech debt.


> On a technical level, I don't think that arbitrary and surreptitious dll injection is a line in the sand worth defending. It's not a great abstraction; it's tech debt.

I think giving DLL injections up isn't really solving anything. DLL injection isn't an accident of history - it's a solution to a specific problem. The problem won't disappear if you remove the solution.

The issue in question is that it's also broadly useful to allow software to be modified by third parties in arbitrary ways, without the involvement or cooperation of the software vendor. In fact, security folks are major users of this capability - that's how malware scanners work, that's how emergency hot-fixing is done, that's how compliance systems work - and, moving towards more evil/dystopian use cases, this capability is needed by anti-cheat systems, DRM, and the modern digital surveillance economy.

All this means that, if you eliminate the current way of letting third parties inject code into programs, you'll soon be forced to create an equivalent mechanism yourself. The sandboxes will get stricter to protect users from criminals, and then they'll have holes poked in them to accommodate legitimate actors, both good and bad. The problem with going through the whole dance of making a sandbox and then making it leaky is that it generates bloat. You end up roughly in the same place you started, just with an extra layer of abstraction on top.

(And, of course, all those abstraction layers have bugs in them, so the attack surface for criminals is getting larger in the process.)


It's open source software. Comment out the whole feature if you want.


Patching the binary is easier than trying to rebuild Firefox (especially if you don't want anything else to potentially change.)
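For readers unfamiliar with the technique: patching a binary usually means finding the bytes that encode a check and overwriting them in place with a same-length replacement, so no other offsets in the file shift. A generic, minimal sketch in Python (the byte patterns below are illustrative placeholders; finding the real ones in a given binary requires disassembly first):

```python
# Generic same-length byte-patching sketch. The signature/replacement bytes
# below are placeholders for illustration; real patches come from disassembly.

def patch_bytes(data: bytes, signature: bytes, replacement: bytes) -> bytes:
    """Replace the first occurrence of `signature` with `replacement`.
    Both must be the same length so nothing else in the file moves."""
    if len(signature) != len(replacement):
        raise ValueError("patch must not change the file size")
    offset = data.find(signature)
    if offset == -1:
        raise ValueError("signature not found; wrong binary version?")
    return data[:offset] + replacement + data[offset + len(signature):]

# Classic example: turn a conditional jump into NOPs to "patch out" a check.
# 48 85 C0 = test rax,rax; 74 0A = je +10; 90 = nop (real x86-64 encodings).
original = bytes.fromhex("4885c0740ae8")
patched = patch_bytes(original,
                      bytes.fromhex("740a"),   # je  +10  (take the "fail" path)
                      bytes.fromhex("9090"))   # nop nop  (always fall through)
assert patched == bytes.fromhex("4885c09090e8")
```

The same-length constraint is what makes this easier than rebuilding: the rest of the executable, and any signatures of behavior you care about, stay byte-for-byte identical.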



