
> I understand that there are significant security issues with browser extensions

Sure, there are significant security issues. Since when is that a reason to disable a user-originated threat vector? As long as the user in question is informed and alerted to the potential security risks, shouldn't it be within the user's purview to decide what to allow and what not?

This line of argument may not hold for every case - and I understand there is potential for abuse and for misrepresenting what any specific extension does - but take Tampermonkey, for instance: the user first has to download it from the webstore, and then separately download the script. Can't there be sufficient warning that what the user is doing is potentially harmful? Or a requirement from the webstore that Tampermonkey also alert users before loading any remote scripts?

If a user decides to outright download malware, we let them (granted, Chrome would block it, but you can override that manually). Why wouldn't the user be allowed to override these risks (e.g. enabling such extensions only in developer mode, with a requirement that they provide sufficient warnings)?



> As long as the user in question is informed and alerted to the potential security risks, shouldn't it be within the user's purview to decide what to allow and what not?

In a purely ethical sense, absolutely yes. But we're coming up on 30 years of people using the internet, and it's been clear for a while now that you simply cannot teach people what is safe and what is not in an absolute sense. Even highly technical, experienced users still make mistakes and install malware sometimes. Informing users is not effective, to a first approximation. System designers have a responsibility to protect the users of their systems.

> If a user decides to outright download malware, we let them (granted, Chrome would block it, but you can override that manually). Why wouldn't the user be allowed to override these risks (e.g. enabling such extensions only in developer mode, with a requirement that they provide sufficient warnings)?

These overrides should exist, but making them too easy will just lead to malware tricking people into triggering them. And some doors are so frequently abused or fundamentally vulnerable that they probably should be closed entirely — most people don't think we should still be embedding Java and Flash in webpages these days.


The problem with closing the API is that for even the most advanced users, it will now be impossible to get the (very useful) functionality these extensions provide.


I believe it was tptacek who once commented half-jokingly that the ideal thing to do would be to leave the API open to uBlock Origin exclusively and kill it for everything else. The privileges required by that extension and others like it are indeed very powerful and often abused. This is not a problem with an easy solution; your suggestions are essentially the status quo, and the industry is beginning to move on from it.


The issue with that approach is that the next addon developer who comes up with something as useful as uBlock will not be able to develop that. Legacy carveouts protect past innovations but not future ones.


Hmmm. I’m not implying anything here, but I’m wondering how we know we can trust uBlock Origin. How is its security vetted?


Hmmm. You're invited to vet it yourself here: https://github.com/gorhill/uBlock


This doesn't help if it gets sold to someone else, as many extensions have.


It’s open-source, and the author has a long history of supporting his product for free as an ethical good, even while some of his collaborators have gone commercial and cashed out, once even taking the project name.


Mozilla went that way on Android about a year ago, when they broke all extensions except a few blessed ones.


There is now a way to enable any extension in Firefox Nightly.


> As long as the user in question is informed and alerted to the potential security risks, shouldn't it be within the user's purview to decide what to allow and what not?

If you've been following the Epic v. Apple case, you'd know some folks like Tim Cook strongly believe the answer is no, or in his words "they shouldn't have to [decide]."

The freedom-less future sure seems pretty bleak.


As both a developer and a user, a lot of the software "freedoms" we have are superfluous, duplicative, and inefficient. As technology becomes more and more commoditized and 50,000 vendors all sell roughly the same thing, decision fatigue sets in. The walled gardens provide value not because they remove freedom but because they give you back something most people value more: time.

We don't all have time to sit around evaluating 1,000 similar packages, compiling and debugging them from scratch, just to get a simple app or game working.

The bleeding edge will keep on bleeding, but for the rest of us, good enough is good enough. It doesn't have to be perfect, it just has to work well enough and not add to our already-overwhelmed mental loads.


Well I’m a user and I don’t want to have to make that choice. For example I’m not a big fan of Facebook, but I do use it occasionally to keep in contact with some friends.

Suppose Facebook decided to move to a different third party store on iOS, maybe their own store, so they don’t have to list their data access and sharing policies, don’t have to go through app store review, etc.

Doing that would force me to choose between rigorous app review and disclosure on the one hand, and using the Facebook app on the other. I don’t want to be put in that position; that’s one of the reasons I use an iPhone.


Regardless of whether you have the responsibility to make that choice or not, you are still responsible for dealing with the consequences of whatever choice has been made for you.

How is that any better than being responsible for making the choice in the first place?


That's not really a solution. The average user will not read security warnings, and will just click through them. Especially if they're being actively socially engineered, the attacker can easily talk them through anything they don't understand.

The people who need this functionality are a very small minority. Tampermonkey has 10M installs, which is phenomenal, but Chrome apparently has an estimated 2.5B active users. The tradeoff here is exposing billions of users to unnecessary risk to save two orders of magnitude fewer of the most tech-savvy users from an insignificant annoyance. I.e. just use a browser that supports the extensions you need instead.


This persistent infantilisation of users is going to lead to a crippled world. Basic computer security and hygiene are not rocket science. We should start teaching them at school level so even "ordinary" people can make informed choices, rather than being confined to gilded cages whose perimeter will eventually expand to encompass even the tech folk as the gatekeeping power of these companies grows ever larger, with malware attacks acting as the convenient foil.


Why SHOULDN'T phones and computers be "infantilized" the same way cars, microwaves, iPods, toasters, electricity outlets, etc. are? They're just another appliance, and 99% of people use their devices for information consumption & entertainment rather than content creation. Regular users shouldn't have to give up ease of use to satisfy developer needs. Android is so fragmented today precisely because Google decided to cater to OEMs instead of users first, and like it or not, the walled gardens ARE a lot safer than Windows/Android, regardless of whether you follow security best practices.

Most users don't need that level of power or customization, and it opens up not only attack vectors but general UX confusion. Hide that sort of thing behind an admin mode or whatever, but otherwise, yes, PLEASE hide dangerous functionality so people aren't exposed to it.

Users aren't asking to be treated like children, they just don't want to have to think about zero-days and layers of config menus because some developer in an ivory tower valued "muh freedoms" too much.


What makes you think these skills aren't being taught?


I'd guess the marketshares of Microsoft, Google, Facebook, Apple etc.?


> The average user will not read security warnings

The basic point of my comment is that deprecating the capabilities or the current API just because there can be a security risk in some cases for the average user is harmful to the more advanced users.

You focus on one possible solution that was mentioned: warning dialogs. However, there are lots of other solutions available that would allow this to run only when it's known with 100% certainty that the user knows what they're doing, beyond mere warnings.

There could be a command-line flag (--developer or the like), just as there is for unsafe certs.

'Use a browser that supports it' is not a solution since - as you said - Chrome has the largest user base; why would a developer put in the effort to build a solution for a rarely used browser?

Since the dominant browser is closing this API, this type of functionality will be unavailable to everyone - even the users who know what they're doing.


Product liability to the masses likely weighs much heavier as a concern to the developers of Chrome than does any harm done to “advanced users”. Beginner users often do what advanced users tell them to, anyhow.


>>I.e. just use a browser that supports the extensions you need instead.

Wonder how long that'll be an option. I use FF for all my browsing, and it's amazing how much stuff breaks if you're not using Chrome. I'm thinking the reCAPTCHA stuff should be used as evidence in a Google antitrust case :-P "Oh, you're on Firefox? Here... let's identify all these pictures..."


One fun trick is to set your UA in FF to tell websites it’s the latest Chrome :)
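For anyone curious, no extension is needed for this: Firefox reads a UA override from the `general.useragent.override` preference, settable via about:config or a `user.js` file in your profile directory. A sketch (the UA string below is just an example; substitute whatever current Chrome reports on your platform):

```js
// user.js in the Firefox profile directory (or set the same pref
// through about:config). Example Chrome UA string; adjust to taste.
user_pref("general.useragent.override",
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36");
```

Note this applies to every site; per-site overrides still require an extension.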


I remember there used to be plugins for setting the UA to mimic popular browsers. I should see if I can dig one up. Thanks!


Wow, so you believe this is actually being done to protect users from themselves, and not to hamstring adblocking in the future?

I wish I still had that kind of innocence.


The level of low effort negativity on HN these days is just exhausting. Chrome has a stellar track record on security all the way back to the launch, both on an engineering and product level. Why would it be surprising that they're continuing with that?

Malicious extensions are probably the biggest existential threat to the web as a platform. If browsers regain the reputation for being insecure that they deservedly had 15 years ago, more and more serious business will move into mobile app walled gardens. This has to be solved.

But rather than accept the simple explanation, we get these inane conspiracy theories on what the true motivation is. No, it's not about ad blockers because the subject of this blog post is not a feature that ad blockers would use. And the same for the half dozen alternative explanations.

Seriously, what is wrong with you people? Can't you at least save the high school level cynicism for situations where an assumption of malice makes some sense?


You're so exhausted! Poor guy!

Got any stats on what percentage of the browser install base has even one extension installed? Then we can come back to your "tHe biGgEsT eXisTenTiAL ThReAt" nonsense.

And as for how an assumption of malice doesn't make sense when we're talking about the world's biggest ad company doing something in the name of security that happens to cripple ad blockers -- well, suffice it to say, you're probably not cut out for journalism.

Now go get some rest?


It’s an oversimplification to chalk it up to some ulterior motive. Design decisions like this are hardly ever unilateral, and there are many competing interests.


> wouldn't the user be allowed to override these risks

It would seem like a useful compromise to have harmful features/functionality disabled by default for each new script - controlled at the individual script level.

If the user wants to invoke/allow the feature, they should be able to explicitly and manually enable it - like Android app permissions: "Allow access to X? [none, this time, or always?]"
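A minimal sketch of that proposal (all names here are hypothetical, not a real browser API): each sensitive feature starts disabled, and a grant is scoped to a single script and a single feature, with Android-style "this time / always" answers.

```javascript
// Hypothetical per-script permission gate: grants are keyed by
// (script, feature), so allowing one script's network access says
// nothing about any other script or any other feature.
const grants = new Map(); // "scriptId:feature" -> "always"

// promptUser stands in for a browser-drawn dialog returning
// "none", "once", or "always"; it is injected so the logic can be
// exercised without a UI.
function requestFeature(scriptId, feature, promptUser) {
  const key = `${scriptId}:${feature}`;
  if (grants.get(key) === "always") return true; // remembered, no re-prompt

  const answer = promptUser(scriptId, feature);
  if (answer === "always") grants.set(key, "always");
  return answer === "once" || answer === "always";
}
```

Under this model a userscript gets nothing dangerous until the user explicitly answers the prompt, and "always" is remembered per script rather than globally.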


> If the user wants to invoke/allow the feature

And when there are hundreds of features... Scratch that. Even if there are ten features, the user will simply click "allow all" without paying attention to what's requested.


Yah, this is the conundrum of security vs. usability.

More and more, Android has been lumping disparate access permissions together, with the goal seemingly being KISS.


>shouldn't it be within the user's purview to decide what to allow and what not?

Imagine it this way: the local supermarket is selling a product which claims to be food but actually contains a highly toxic poison. And then someone justifies it still being allowed because it's up to the buyer to choose to buy and consume it: it's on the user; they should be responsible for looking for trusted brands that don't disguise poison as food, or they could take it down to their local lab to test it for toxicity before consuming it.

It's obviously completely absurd to expect the user to protect themselves from malice like this, which is rampant on browser extension stores.


> Imagine it this way: The local supermarket is selling a product which claims to be food but actually contains a highly toxic poison

That describes basically anything on sale, if taken in the wrong dose. Example: a can of a sugary drink is OK; drink six of them per day and it's going to be bad for your health.

Let's try a different metaphor.


Okay. How about buying something that was legitimately food when you bought it, but updates to poison - and there's really no choice about allowing updates, since previously safe foodstuffs can become poison if left as purchased - in your cupboard?


I can't tell if you're being sarcastic.


Why would it be sarcastic? Malware authors buy legitimate extensions for mountains of cash and then stick spyware in them. Users have no idea their previously good and trusted extension is now malware.


> informed and alerted

And here, I think, is the issue: it's hard for a closed-source project to fully inform and alert anyone at all. It's impossible to get a full understanding of a codebase that doesn't allow me to look at it, and I don't see why that would be different here. I can't understand security risks to Chrome (proper Chrome, not Chromium) because I'm not allowed to; therefore any extension at all could be dangerous in some way. So it is actually best not to allow the user to think about security for themselves, because they can't have the necessary information. A developer can't look over every extension because they have better things to do with their time, so I think it makes sense that we end up with this incredibly restricted platform.

If your strategy for security can't be having an open and audited interface and implementation, the next best thing seems to be no interface.


If you feel like the only thing preventing you from understanding the security risks of executing a piece of Javascript in Chrome is the parts of the source that are used only in Chrome and not in Chromium, you have a very high opinion of your ability to reason about that many lines of non-memory-safe code that I think is not backed up by the evidence.


No, I think I have a low opinion about my abilities - I need a complete picture and all the help I can get to understand anything at all.

But yes, I agree that it's not as simple as my original comment made it out to be - I definitely failed to fully acknowledge that most/some of the critical parts are open source and secured, but I think the incompleteness in any outside understanding (such as my own) is a major factor in the options Google could present about security.


Don't forget the fact that any scripts you download are by definition open-source (you download the actual script).


> since when is that a reason to disable a user-originated threat vector?

Since Google is operating at a scale where the majority of users use their browser, so a successful popular malicious extension scales extremely rapidly. This is basically a recapitulation of the decisions Microsoft had to make regarding the market dominance of Windows and the ease of attacking users via VBScript exploits. The developers of the framework have to make a trade-off between the capabilities of the framework and the likelihood that those capabilities will be used to attack the average user. It's a cat-and-mouse game where there are not obvious technical answers so much as move-countermove.

We even have the analog to Apple Computer during the heyday of VBScript exploits in this story: with its much smaller market share, Mozilla has the luxury of worrying less about these exploit vectors, because successfully deploying a malicious extension in that ecosystem only impacts a couple percent of total internet users. Much as Apple didn't have to worry much about malicious AppleScript exploits.


As someone pointed out above, Tampermonkey has ~10M downloads compared to 2.5B Chrome users; I seriously doubt any one malicious Tampermonkey script can affect as many users as even a malicious AppleScript can.


This design change in Manifest V3 isn't about Tampermonkey. It's about the capability space of Manifest V2 allowing a malicious actor to make something like Tampermonkey, have it grow in popularity, and then flip the script under the hood: the extension fetches an arbitrary script from a third-party source and executes it, turning every installed node into a botnet agent without changing any of the code in the extension itself.

Google is trying to solve a similar problem here to the one they have to solve with App Store apps: the ability to make claims about safety that an end user can trust. They can't make those claims if the extension can modify its behavior without any way for the Web Store administrators to know.
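For concreteness, the attack pattern being described can be sketched in a few lines (this is hypothetical illustration, not any real extension's code; the "server" is simulated by an injected function so the pattern can be shown offline):

```javascript
// Sketch of the Manifest V2 bait-and-switch: the packaged code that
// Web Store reviewers see is benign, but at runtime it pulls a string
// of code from a server the author controls and executes it.
async function loadRemoteBehavior(fetchSource) {
  // In a real attack, fetchSource would be fetch() against a
  // third-party URL the extension author controls.
  const code = await fetchSource();
  // Evaluating a downloaded string: reviewers never saw this code,
  // and the server can change it at any time without an update.
  return (0, eval)(code); // indirect eval, runs in global scope
}

// Simulated "remote" payload. Harmless here, but it could equally be
// cookie exfiltration or ad injection, swapped in long after install.
const simulatedServer = async () => "2 + 2";

loadRemoteBehavior(simulatedServer).then((result) => {
  console.log(result); // prints 4 -- behavior decided server-side
});
```

Manifest V3's ban on remotely hosted code and on string-to-code execution in extension contexts is aimed squarely at this pattern: what ships in the package is what runs.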



