China's surveillance is always so blatant and public, they don't bother trying to hide it like America (which is analogous to political corruption in both countries).
When the artist Ai Weiwei had his email account compromised by the state, they simply logged into his email webmail UI and forwarded a copy of his emails to a 3rd party email address. They didn't even bother intercepting his email at the network or service provider level.
Edit: > "Apple increased the encryption aspects on the phone allegedly to prevent snooping from the NSA. However, this increased encryption would also prevent the Chinese authorities from snooping on Apple user data."
It's a shame articles keep confusing Apple's hard-disk (at-rest) encryption with network (in-transit) data encryption. :\
> China's surveillance is always so blatant and public, they don't bother trying to hide it like America.
I'm a little shocked - they've surely got the ability to do a proper MITM. CNNIC is a root CA for plenty of browsers. Saving it up for when they really need it, maybe?
In China, it's very common for websites to ask people to trust their self-issued certificates. If you want to buy train tickets in China, you end up with this page (https://kyfw.12306.cn/otn) which asks you to trust its own cert.
Yes, big companies in China like Alibaba (Taobao, Alipay) will install their own root certificate authority (with all purposes enabled by default) on your computer, without any notification, when you install their security control software (which is required if you want to use their services).
This is worse than 12306.
This AM an electrician comes over, guy in his early 30's (not an old timer), has a new iPhone, doesn't know how to sync and get the old stuff onto the new iPhone. Doesn't even know that Apple can help him with that. For computer things he relies on his brother-in-law, "the computer guy". Thinks Dell makes great "computers". "Don't they?" he says to me. Doesn't even really understand the difference between Mac OS and Windows. [1]
Point being there are tons of people out there that you could get to do practically anything. And they don't know the difference between one warning dialog box and another. It's just all a mashup to them.
[1] Add: By that I mean isn't aware that there is even a difference more than Coke vs. Pepsi is different.
And the NSA, China, and every other politically motivated actor is actively looking for the blithely unaware 70 year old virologist who happens to work on dual-use agents.
This AM, a software developer comes over to fix my computer. He had just bought a new dimmer for his living room lights. Doesn't even realize that you can't use a conventional dimmer with compact fluorescent lights. "They are the same, right?"[1]
[1] Add: By that, I mean he isn't aware of the things he isn't aware of.
Ease up on the geek rhetoric until you walk in his shoes.
Way to miss the point. There is no situation in which we are expected to understand the subtle differences between dimmers. Users of computers are quite frequently expected to know which operating system they have just to follow instructions for operating a computer. They will also encounter certificate errors in day-to-day use.
They shouldn't be expected to know that though. The problem is that software developers haven't managed to figure that out and just make things work for their customers the way electricians have. Can you imagine if you went to the store to pick up a replacement light bulb and you had to look up whether your house used AC or DC? It's such a basic difference, everyone should know, right?
Here's some historical background on trains in China.
(I realized that I have to start from the Hukou policy so that I could tell a reasonable story. Please bear with me.)
TL;DR, This is what a train station looks like before Chinese New Year [1].
Let's start with the Hukou policy: every Chinese citizen is required to register with the government and provide a permanent address. This looks similar to most other countries, but it goes far beyond a simple registration. Your Hukou is tied to a permanent address, and in many cases you are only allowed to do critical things within the city of that address. For example, your child cannot attend local schools outside your Hukou address. Changing the address on your Hukou is very hard and usually happens only in a few cases: 1. When you go to university, you are allowed to temporarily move your Hukou to the university's city; 2. You find a job in another city and your employer is willing to help you relocate your Hukou address; 3. You have been married to a local person for several years. Basically, you can think of Hukou as a domestic visa. There are two types of Hukou, Farmer Hukou and City Hukou, with different benefits/restrictions - similar to F1 visa, H1B visa, etc.
Well, why do I mention this? Here is some history. Thirty years ago, the vast majority of the Chinese population were farmers. To build cities, you have to let those farmers live in the city and do a lot of construction work. Due to the Hukou policy, people are not allowed to migrate permanently, especially not to change their Hukou status from Farmer to City. But there are more opportunities in cities and people can make more money there. So gradually a large group of people emerged whose Hukou address is outside the city but who work in the cities. Their families have to stay in their home towns, otherwise their children cannot go to school in the cities.
Every year, people working outside their home towns try to go back during Chinese New Year. Because of what I described above, that is a huge number of people, and they have to take trains (which are cheaper than flying). This yearly migration is enormous: ~3.3B trips in 2014 [0].
Oh, and here is the answer to your question: going to the train station is really not an option. It's like Black Friday, but on a much larger scale. People sometimes have to wait outside for weeks to get a ticket. To some extent the online ticket system helps; however, because the throughput of the train system is limited, it's still hard to get a ticket.
I agree with everything you've written. But for other readers, I would like to clarify that changing Hukou isn't very complex in most cities once a property purchase is made.
Not that buying property is easy for a migrant worker, but in most cities an 80 square meter property should be enough. Outside of Beijing/Shanghai/Shenzhen that's about a million yuan.
Just wanted to add some clarification / quantification for a casual reader.
Inside Shenzhen, I'm currently renting an 80 square meter apartment. It cost my landlord 4 million yuan and he and his wife made a 50% down payment. I understand in Beijing it's much, much more expensive. The economic divide in this country is insane.
A million Yuan is $163,000 US dollars, and 80 square meters is 860 square feet. I would imagine that is just about impossible for a migrant worker to manage.
To be fair, I don't personally trust the root CAs that my browsers and OS's trust. There are hundreds of them, from many countries. I think it's a reasonable expectation that at least some are corrupt.
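For what it's worth, you can see roughly how many roots your own machine trusts. A minimal Python sketch, assuming a stock Python 3 with OpenSSL (note that get_ca_certs() can under-report when the OS exposes its store as a cert directory rather than a bundle file):

    import ssl

    # Count the root CAs Python's default TLS context trusts on this machine.
    ctx = ssl.create_default_context()
    roots = ctx.get_ca_certs()
    print(len(roots), "trusted root CAs loaded")

    # Peek at who a few of them are (subject is a tuple of RDN tuples).
    for cert in roots[:5]:
        subject = {k: v for rdn in cert["subject"] for k, v in rdn}
        print(" -", subject.get("organizationName") or subject.get("commonName"))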
Unless I trust each CA, their processes and every employee who could circumvent them, the current CA infrastructure is inherently unsafe. Self-signed certificates are only marginally less trustworthy (rather than having to compromise a CA, a bad actor would simply have to generate a new certificate and hope that I don't check the fingerprint - and I wouldn't check it).
Yes, there was a very large European root CA that was compromised and was actively being used for MITM attacks except this time the web browser address bar would still "turn green". Which is pretty much as bad as it gets.
Root CAs are not really trustworthy. Manually trusting a self-signed cert is, probably, more secure in the long term. You take control of trust, rather than delegating it out to some faceless corporation who can be corrupted or hacked.
The issue is how to know when the self-signed cert is trustworthy. I agree that the root CA trust system is not the answer, and web of trust doesn't work in practice, but I don't know how we can know if a self-signed cert is trustworthy in the first place. Besides doing out-of-band fingerprint verification (assuming the sideband isn't also compromised).
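Out-of-band fingerprint checking can at least be mechanized. A rough sketch in Python - the host and the expected fingerprint are placeholders; in reality the fingerprint would reach you over some channel other than the network you're trying to protect:

    import hashlib
    import ssl

    HOST = "example.com"   # placeholder host
    EXPECTED = "ab12..."   # hypothetical SHA-256 fingerprint obtained out of band

    # Fetch whatever certificate the server presents, without validating it.
    pem = ssl.get_server_certificate((HOST, 443))
    der = ssl.PEM_cert_to_DER_cert(pem)
    fingerprint = hashlib.sha256(der).hexdigest()

    if fingerprint != EXPECTED.replace(":", "").lower():
        raise SystemExit("fingerprint mismatch - do not trust this connection")
    print("fingerprint matches the out-of-band value")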
That said, I'd be more inclined to trust a self-signed cert over a CA-signed one. I don't even know half the CAs that my device trusts, and some I recognise (government ones) I explicitly wouldn't trust.
My understanding is that CAs have been compromised for a while now. Does no one remember the RSA scandal and the NSA deliberately injecting weaknesses into random number generators? I may be off a bit, but I recall the revelations basically concluding the whole system was compromised at a fundamental level.
I too remember something like that, but was under the impression that CAs are still ok.
But of course, judging by the massive downvoting you've gotten, I suppose you're incorrect. I wish those downvoters would explain their viewpoint rather than downvoting...
If they got caught it'd get removed from Firefox and Chrome. I am more surprised that they don't have common Chinese software install a set of MitM CAs.
> I am more surprised that they don't have common Chinese software install a set of MitM CAs.
It said in the article: the most popular Chinese browser, Qihoo 360 browser, doesn't even give a warning for the bad SSL cert. "Qihoo’s popular Chinese 360 secure browser is anything but and will load the MITMed page directly."
* The popularity of that browser is way over reported - they tend to report "installed" statistics rather than "used".
* If you read the article you'll see self signed certificates were used for the MitM. From my own research, 360 secure browser just doesn't validate certificates in many circumstances. No CA required.
From my observation, Qihoo Browser is more frequently used by tech noobs than by the tech-savvy. And that alone poses a great risk to the general public, regardless of the underlying ethics and interests.
edit: but only insofar as you trust Apple, since they provide and verify the keys. But... assuming you're using an iPhone, this isn't really a new threat vector.
This may no longer be true for Chrome at least. They recently added protection against unauthorized configuration changes by third party programs. I'm having trouble determining whether that protection extends to root CAs though.
Even a government will only have the capability to perform meaningful surveillance on a limited number of people, not to mention act on it.
Assuming the majority of people desire not to be tortured or killed far more than they desire freedom, it's probably more effective to simply pretend that you're performing surveillance as an intimidation tactic than to actually perform surveillance unnoticed. It's also far more socially acceptable on an international level than violence.
> Even a government will only have the capability to perform meaningful surveillance on a limited number of people, not to mention act on it.
That claim is currently true, but if you understand Bayes' Theorem and Moore's Law, you know that it's just a matter of time.
But re this:
> It's probably more effective to simply pretend that you're performing surveillance as an intimidation tactic, than to actually perform surveillance unnoticed.
iirc, former East German Stasi or KGB have said essentially the same thing.
They could also do targeted MITM attacks where only one person is served via the forged certificate. That's harder to detect, and if it's all spycraft, then the target on the receiving end is unlikely to report the breach.
Agree! The real spy agencies can do it smarter - sign with their own root CA, etc.
It might also be other 3rd parties who are doing the MITM.
Anyone can set up a fake free Wi-Fi hotspot and reroute DNS. Some percentage of those "free" service users will press "OK" and let others sniff their "secure" connections.
> China's surveillance is always so blatant and public, they don't bother trying to hide it like America.
I get that America means 'USA' in this context. How exactly do US government officials hide the fact that they keep data on everything possible that happens online outside the United States (and don't give a damn about what anyone else thinks about it)?
Given the statements of many US senators, after both Cablegate and Snowden I don't think the US tries to hide anything at all.
> They didn't even bother intercepting his email at the network or service provider level.
Is this a known fact? If I wanted to spy on a prominent dissident [1], I would use a variety of methods. Some would be intentionally crude, so that the target feels safer after noticing and defeating them, making the more sophisticated approaches that much more effective.
[1] I recommend visiting the current Ai Weiwei exhibit in San Francisco. It's quite good.
It is hard to tell whether it always is. If they are smart they use a deterrent, which must be visible to work, to decrease the number of potential targets to track, and something well hidden to follow the remaining real/potential troublemakers.
"blatant and public", yes I agree with that, but I think this incidence also shows more of a level of incompetence: as owenmarshall pointed out, they have all the means of doing it "properly", but gets caught red handed instead...
I've done some analysis on 360 secure browser's SSL handling in the past. I don't have my notes handy, but it can easily be taken advantage of by anyone, not just the Chinese government. I'm somewhat confused by this, as it would not be difficult to just bundle MitM CAs with this browser.
It's also not as popular as frequently reported. It is widely installed because many orgs are required to have the security software that bundles it, but when I was researching it the consensus I got from several Chinese people was that few people actually used it - "only old people who don't know computers use it".
I just downloaded 360 secure browser (into a VM, I'm not crazy :)) and checked. It does display a warning but still loads the website immediately. Also, I was unable to find a way to display the certificate.
Yes, that is true, it's a limited sample size, but it agrees with what I've seen from various sites that measure browser popularity. Having better numbers would be good.
"They should also enable two-step verification for their iCloud accounts. This will protect iCloud accounts from attackers even if the account password is compromised."
I wonder if 2FA is really that safe in a country like that. They have all the means to intercept the second channel; it just requires knowledge about the account owner, or some not-too-complex synchronization to detect auth codes sent via text messages.
2FA doesn't normally send a password though, does it? Isn't it typically a one-time key?
The risk here is someone could intercept and set/reset that password, but the end user would immediately know that as they would be unable to set a password or login.
With app- or token-based 2FA, there is no token sent across the wire until the user submits an auth request. With SMS-based 2FA, the token is sent via insecure channels to the user BEFORE auth, which is an opportunity for state actors and telecom to intercept it before use.
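To make the distinction concrete, here is a minimal TOTP sketch (the RFC 6238 scheme most authenticator apps use, give or take parameters). Both sides derive the code from a shared secret provisioned once up front, so nothing has to be texted to the user at login time; the secret below is made up:

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        """Both sides compute this from the shared secret; nothing is sent by SMS."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    # Example with a made-up secret (in reality it is provisioned once, e.g. via QR code):
    print(totp("JBSWY3DPEHPK3PXP"))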
If somebody can MTM both SMS and your internet connection, you wouldn't know a thing. That one use would be done by the MTM and not you and thanks to the MTM everything would appear normal to you.
(MTM is not only the "man in the middle" but also "the machine in the middle" -- a computer which would react in real time.)
Yes, you would know, because the token exists to establish a password and an active connection. If someone MTM the 2nd part of a 2FA, you won't be able to log into your account.
MTM-ing your internet connection means you have to replicate exactly whatever the service is, which is not feasible at any scale.
The MTM is the point where your active connection ends. It uses the token to connect to the server (which it knows, since it MTMs the SMS too); you communicate with it, seeing a copy of what the MTM receives. You don't see anything strange. It's not that hard to implement.
Again, this is confusing the MTM attack we're (and most people are) talking about in general with the 2FA mechanism we're specifically talking about here.
They're two entirely different vectors. If you wanted to hijack a token as part of 2FA, that serves the purpose of initializing an account. In this case, the second MTM (intercepting communications) will not work, because the user will be unable to log in (as you initialized their account and therefore had to set a password).
Further, in the traditional MTM attack, there's no need to steal that 2FA token in the first place - not only because it prevents the user from ever having an active, working account, but because you can already get the information you want through data interception.
> In this case, the second MTM (intercepting communications) will not work, because the user will be unable to log in (as you initialized their account and therefore had to set a password).
OK, once again: both SMS and internet are MTMed. Now why can't the machine doing the internet MTM use the user's password? Why do you think it has to do that before the user inputs it?
What you're describing is so inordinately complicated that it really could only be used for specific targeting. Meanwhile it's so redundant it would be a waste of everyone's time.
In this case, we're talking about a MTM on SMS and Internet. When a token is sent via SMS, we intercept that and use it to initialize an account (this is totally superfluous since we already have the MTM on the Internet, but for the sake of this argument, let's go with it).
Now when a user logs in, we know our generated password, so we need to eschew user input and supplant it with the password we generated by intercepting the 2FA, then return the response as expected.
You can sort of understand what I'm saying here. If you have MTM on the network side, you don't need to bother on the SMS side, it provides no advantage. Meanwhile, if you have MTM solely on the SMS side, there's no way to do this without alerting a user, because they will be unable to log in anyway.
"The man-in-the-middle attack (often abbreviated MITM, MitM, MIM, MiM, MITMA) in cryptography and computer security is a form of active eavesdropping in which the attacker makes independent connections with the victims and relays messages between them, making them believe that they are talking directly to each other over a private connection, when in fact the entire conversation is controlled by the attacker. The attacker must be able to intercept all messages going between the two victims and inject new ones, which is straightforward in many circumstances."
At the moment we write this, on the first page of HN there is an article about the MTM currently used by the Chinese:
Where they don't even bother to remain undetected. I don't think it would be much more complicated for a government which already does MTM to do the real-time query through the victim's SMS. The queries of the SMS would happen very rarely and need far fewer resources compared to the internet traffic.
Reading the conversation before I joined the discussion, I believed that worklogin was also worried about the MTM, as he wrote many messages before: "With SMS-based 2FA, the token is sent via insecure channels to the user BEFORE auth, which is an opportunity for state actors and telecom to intercept it before use." I believed he wouldn't need to discuss when the token is sent otherwise, and that his "intercept" was in the MTM-implied meaning: "to take, seize, or halt (someone or something on the way from one place to another); cut off from an intended destination: to intercept a messenger."
So, per your discussion, the existence of 2FA is irrelevant as soon as the internet is MTM-ed, that is, as soon as somebody taps into your TLS session? I honestly had never considered how irrelevant it is then. Thanks. I still have some other view of the ultimate goals of 2FA.
Like, there is this: https://news.ycombinator.com/item?id=8487115 on the first page right now, where Google tries to avoid exactly the SMS channel. I can imagine that you'd consider that more than the 2FA we're discussing; that's also OK.
Just to summarize, I entered because the concern was about cleartext passwords being sent via SMS. This is almost never the case, typically it's a token.
As a mitigation tactic for full MTM, 2FA is basically without teeth. Same applies for the new item Google is trumpeting.
I'm not saying someone couldn't do a MTM with both SMS and full network, I'm saying it's overkill and redundant. There's no advantage. If you're already funneling the data, you have what you want.
> As a mitigation tactic for full MTM, 2FA is basically without teeth.
What I believe is that a mechanism can definitely be made where, without the second authentication item of the 2FA, the MTM on the internet channel would automatically be defeated (or the user would recognize the existence of the MTM, since the connection would be rejected). Namely, the MTMI (internet) point from my scenario wouldn't be able to keep the connection to the server active, since it doesn't have the key which is needed to even have the (encrypted) communication: a key given to the user by the entity running the server, but not over the internet, and therefore impossible for the MTM interceptor to use. I believe such a mechanism can be made as soon as we assume the existence of such a key. That some simpler forms are currently more popular doesn't mean we should dismiss a properly implemented mechanism as "without teeth." Maybe you'd say that such a mechanism isn't 2FA at all. What would you call it?
Those tokens are rarely actually one-time-use; They're more commonly time-limited (and you still have cache invalidation problems even if they're supposed to be one-time). Further, they fail to deliver so often that it's not hard to intercept one and use it, forcing the user to retry.
That's interesting because most of the ones I've seen work like this:
- user created, 2FA token created, user not active
- user gets 2FA token via 2nd channel
- user enters token, gets to create a password for account
- user active, token invalidated
this isn't airtight - nothing is - but it means either you got the token or you can't log in to your account, which should raise a red flag.
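A minimal sketch of that flow, with hypothetical names and an in-memory store just for illustration (a real system would persist state and hash the password):

    import secrets

    pending = {}   # activation token -> username
    accounts = {}  # username -> {"password": ..., "active": bool}

    def create_user(username: str) -> str:
        """Steps 1-2: create an inactive user and a one-time activation token
        to be delivered over the second channel (SMS, email, ...)."""
        token = secrets.token_urlsafe(16)
        accounts[username] = {"password": None, "active": False}
        pending[token] = username
        return token

    def activate(token: str, new_password: str) -> bool:
        """Steps 3-4: redeem the token exactly once, set a password, activate."""
        username = pending.pop(token, None)  # pop() invalidates the token
        if username is None:
            return False  # unknown or already-used token
        accounts[username].update(password=new_password, active=True)
        return True

    t = create_user("alice")
    assert activate(t, "correct horse battery staple")
    assert not activate(t, "again")  # second use fails: the token was invalidated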
I suspect that anything that a user could be reasonably expected to perform wouldn't provide enough additional security to make the additional effort worthwhile.
The general assumption in cryptography is that the algorithm used for encryption is known. Even if you relax that restriction and assume you know which algorithms a user could choose but not which one she chose, you could still just try all of them, if they are "not too complicated".
If 3 login attempts invalidate the OTP, then you only need a manageable number of different "known modifications", told to the user through the secure channel, to keep this safe. If they can arbitrarily brute-force your OTP anyway, then the OTP isn't really going to be all that useful at 10k possibilities.
That said, hardly any user would be willing to take on such complexity without very strong reason.
A good MITM attack intercepts your second factor when you enter it, just as it intercepts your password. Then the attacker just signs in before your second factor expires.
What's the problem with 2FA over SMS? I'm using it and it's good. They send a signed SMS with a nonce; I verify the signature with their public key, sign the nonce with my private key, and send the signed nonce back. Then the web interface tells me that I have successfully signed their request nonce XXX and that they'll be forwarding my login token to authority X. So it's not so easy to tamper with properly made mobile SMS 2FA. My phone never receives the actual login token, nor does service X get it unless I also verify the login request in the browser. Of course, before all this happens, I also would have given preliminary login information like username and password.
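For readers wondering what such a signed-nonce exchange looks like mechanically, here's a rough sketch using Ed25519 from the Python 'cryptography' package. Key distribution, the SMS transport, and the login-token handoff are all assumed and out of scope:

    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    service_key = Ed25519PrivateKey.generate()   # held by the service
    user_key = Ed25519PrivateKey.generate()      # held in the user's SIM/app

    # Service: create a nonce and sign it; nonce + signature go out via SMS.
    nonce = os.urandom(16)
    service_sig = service_key.sign(nonce)

    # User: verify the SMS really came from the service, then countersign.
    service_key.public_key().verify(service_sig, nonce)  # raises if forged
    user_sig = user_key.sign(nonce)

    # Service: verify the countersignature before releasing the login token.
    user_key.public_key().verify(user_sig, nonce)
    print("nonce countersigned - proceed with login token handoff")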
Does this only occur when the user logs into iCloud using the web, or does it happen on the device as well?
Does anyone know if iOS uses certificate pinning when connecting to iCloud services, and if so if that is sufficient to prevent against this type of attack?
It's a classic MITM which includes switching of the SSL certificate. In regular browsers the user would need either to confirm that they know what they are doing (Firefox) or not get to the page at all (Chrome).
I'm not an iOS dev, but I do not think that the iOS SDK would allow invalid certificates - otherwise Apple could just go ahead and not use any encryption at all.
The 'hack' in the article works, because users ignore security warnings or even use a browser that is clearly made to easily snoop on people.
iCloud is not the only victim here. Google's IPv6 access has been suffering the same attack since September. (IPv4 access has been blocked entirely for 5 months)
It's not shocking news, however. Apple has already moved [1] some of its storage servers to Beijing. The attack could just be the authorities making sure that Chinese users' iCloud data is actually stored in China.
So China is double-dipping? I was hoping that post-Snowden, this kind of request from some countries that companies need to store data locally, to make sure the data isn't taken by the US government, would encourage companies to encrypt the data end-to-end (client-side), before they get it into their clouds. Then nobody could complain about the data not being safe from the US government. It should be safe since even the company shouldn't have access to it.
I realize this isn't the real reason why China told Apple to build a datacenter there, but that's the one they used publicly, and as long as the company itself can get access to that data, then the argument is a pretty plausible one, even from China. Apple, Google and others could weaken this argument by adopting end-to-end encryption for their services.
Unfortunately, it seems the companies decided to keep the data as is, but build the data centers in Russia, China and wherever else they might ask them to do it.
Apple implemented not-exactly-end-to-end encryption on phones and the FBI publicly complained. Implementing effective encryption would most likely result in threats of a ban by the Chinese government. See http://www.wired.co.uk/news/archive/2013-07/11/blackberry-in...
Ultimately there's only so far you can go against the wishes of the Chinese government when your factories are there, or against the US government when your HQ is there.
HSTS is mainly to prevent SSL-stripping. But I think part of HSTS could also note that the certificate was trusted, and then having an HSTS header could entirely prevent any later connection with the self-signed certificate, without clearing the HSTS history.
You may not need to even store the extra bit, it's enough to say if you have HSTS then by default the connection must not just be encrypted, but it must be trusted.
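In pseudo-policy terms, the proposal amounts to something like the following (a toy sketch, purely illustrative, not how any real browser is structured):

    def may_connect(hsts_known: bool, is_https: bool,
                    cert_trusted: bool, user_override: bool) -> bool:
        # Host previously sent Strict-Transport-Security: require HTTPS
        # *and* a trusted certificate, with no click-through allowed.
        if hsts_known:
            return is_https and cert_trusted
        # Otherwise: usual behaviour - warn, but let the user override.
        return is_https and (cert_trusted or user_override)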
Do current browsers entirely prevent a connection to untrusted certs when HSTS is set? Or is it just the same error you get when connecting to any self-signed cert?
> Do current browsers entirely prevent a connection to untrusted certs when HSTS is set?
Yes - with HSTS set, the browser will not let the user click through a certificate error. But HSTS by itself would not do much to prevent active MITM by a trusted-but-rogue CA: it just tells the browser that it should only connect to the site over HTTPS. It does not say which certificates are trusted.
It seems like you are hinting at certificate pinning (https://en.wikipedia.org/wiki/Transport_Layer_Security#Certi...). Pinning would prevent certificates signed by rogue CAs from being accepted, but pinning is hard to do on the web. It is mainly used in mobile applications, from what I have seen.
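For what a mobile-style pin can look like in practice, here's a sketch in Python: after a normal TLS handshake, additionally require that the server's public key (SPKI) hashes to a value baked into the client. The host and pin are placeholders, and the 'cryptography' package is assumed:

    import base64
    import hashlib
    import socket
    import ssl

    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    HOST = "icloud.example"                              # hypothetical host
    PINNED_SPKI_SHA256 = "base64-of-expected-spki-hash"  # shipped with the app

    ctx = ssl.create_default_context()  # normal CA validation still happens
    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    spki = x509.load_der_x509_certificate(der).public_key().public_bytes(
        Encoding.DER, PublicFormat.SubjectPublicKeyInfo
    )
    pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()

    if pin != PINNED_SPKI_SHA256:
        raise SystemExit("chain validated, but the key is not the pinned one")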
So cert pinning won't help in this case? Or maybe not doing stuff like cert pinning is one of many (maybe even lawful) requisites of doing business in China?
You don't need certificate pinning to prevent this issue, you just need a modern browser with up-to-date trusted certificates. Certificate pinning would only help if the certificate the government is presenting matched the site's URL.
> Certificate pinning would only help if the certificate the government is presenting matched the site's URL.
And what prevents the government from doing that? Certificate pinning will address MITM no matter what - if the certificate the browser receives is not the one it pinned, it will refuse to connect even if the cert was signed by another trusted authority.
Although it's unclear from the article as to what really is happening - is it that Apple trusts whatever Chinese CA is used to forge the certificate for iCloud.com but others like Mozilla and Google don't? In any case I don't see how pinning won't help here.
No, nobody trusts this certificate - it is identical to the one you generate yourself with OpenSSL. Certificate pinning would be nice, but it's simply not the issue or the fix at hand here...
If China were to misuse the root certificate I believe their academic department has, it would be instantly banned. There was a Bugzilla bug at Mozilla about removing it and a LOT of people supported it, but it won't be removed unless there is abuse.
I've been using proxy.sh (despite the name, a regular VPN provider) for a while and have no complaints. They have endpoints in a large number of countries.
Just set up your own, it's not so many steps actually. I've set up VPN servers for L2TP/IPsec, OpenVPN and even PPTP. Setup takes 10-30 mins. Just spin up an instance on AWS/Rackspace/whatever and install the apps, then connect to it. Works great.
Assuming you just need it while traveling, running a VPN server at home is a cheap and effective option. In particular, both DD-WRT and RouterOS have OpenVPN support.
TL;DR - China likes to spy on everyone's data and can do so, because they own their country. This incident is on iCloud, but it is only in alignment with their greater strategy and not Apple's fault.
Of course, just as America likes to spy on everyone as well. But let's not completely pardon Apple, they can always do something more to strengthen their systems to be more and more resilient to these attacks. It might well be a cat and mouse game, but they should at least try and play rather than just give up. They're sitting on hundreds of billions of dollars, they get hundreds from many of us, the least they can do is look out for us a little more.
I'm not saying security is not an important issue. Inherently it is.
The problem is that Apple is using the industry standard for encryption here (SSL). China cracks that security by giving their folks a browser that allows them to easily swap the certificate out and send all the data to them before sending it to Apple. This is called a MITM (Man In The Middle Attack).
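As a concrete illustration of why the swap only works when validation is skipped or the warning is clicked through: a client doing standard verification simply refuses a certificate that doesn't chain to a trusted CA for that hostname. A small Python sketch (the hostname is just an example):

    import socket
    import ssl

    HOST = "www.icloud.com"  # any HTTPS host; a MITM would answer in its place

    ctx = ssl.create_default_context()  # CERT_REQUIRED + hostname checking
    try:
        with socket.create_connection((HOST, 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST):
                print("certificate chained to a trusted CA and matched the hostname")
    except ssl.SSLCertVerificationError as e:
        # This is what a self-signed / swapped certificate triggers.
        print("refusing to talk:", e)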
Personally, I'm a big fan of privacy - also I'm the CTO of a web-company, so I'm concerned with security for webapps, too.
When users ignore the warnings of their browsers (which is what Firefox would show in such an event) or even install a "trojan browser" pushed by a mean government - well, then there is little you can do as a company.
Just wanted to give a short TL;DR on the article to prevent an iCloud shitstorm on HN, because the article is really about how mean China is. Not saying or implying at all that other governments are better.
Also not saying that Apple is perfect in terms of security. Btw, as a developer and sysadmin I'm using Debian stable - my Mac is for convenience and productivity. Just saying that to disqualify myself as the regular fanboy(;
Thank you for your feedback. I apologise, because I came over as cocky.
I did make that statement to emphasise that the content has some basis; I explicitly didn't mention which company or any other credentials on my behalf. I could have written that I'm a developer, but it wouldn't have been true, because most of my time is spent in other areas. I wanted neither to insult nor to boast, so my sincere apologies that it came off that way.
Act like what? Why should he conceal his position? This comment makes absolutely no sense. If anything, with this comment, you're projecting your own feelings onto preek's comment and trying to hold him accountable for that.
Being a CTO isn't a bad thing and there's no reason to insist he conceal it because it makes you feel bad.
What do you expect Apple to do in this case? What action can they take against a government-performed MITM?
Seriously "they've gotta do something" sounds good but I'm not sure what options they actually have. If the user clicks through a clear SSL warning, that's Apple's fault?
Half our industry is built on fooling users. Exploiting their cognitive biases. Seriously, we understand where they fall, and if we really wanted to, we absolutely could look out for them -- at least, certainly better than we're doing right now.
What can Apple do? They could make a better browser (I like how Chrome and Firefox do things - you have to go out of your way to reach a page with bad SSL -- compare Safari's rather passive and enabling error message: http://blog.serverdensity.com/wp-content/uploads/2009/05/ssl... vs. chrome's: http://i.imgur.com/ttmmDJ8.png -- you have to REALLY see and think how to access the site despite the warning, it's that good). They could be more vigilant in alerting users of where and how this can happen.
If we were talking about any other young startup, your apology might fly -- not so with Apple, they're sitting on billions, they have the resources to think of a solution and implement it.
Interestingly, up to now the only time I've seen an SSL certificate warning was from a misconfigured server. This is the first time I've seen an attack throw up a cert error (usually attacks leverage other avenues that don't alarm users). Microsoft Research even confirms:
"It’s hard to blame users for not being interested in SSL and certificates when (as far as we can determine) 100% of all certificate errors seen by users are false positives."*
I'll buy the argument that the industry has a duty to protect users, and also that Safari could be designed to better warn about SSL.
> Your response is like saying when Facebook was privacy zuckering, the user clicks right through the settings that should have sounded an alarm.
This is a bit odd, though. On one hand we have a company directly attempting to trick users; on the other, we have a company whose product is being attacked by a hostile government. Drawing an equivalence between the two is a bit ridiculous, no?
The comparison was between a company whose web UI tricked unsuspecting/naive users into revealing private info to third parties, and a company whose browser UI is making it all too easy for unsuspecting/naive users to inadvertently reveal private info to third parties.
I think the two are quite comparable. In both cases, the software developer should be responsible for guiding the user to make the right decision.
Yes that was probably not a very good analogy. I was trying to highlight the fact that we're really good at getting users to do what we want (things, specifically, that hurt them and make us more money). So far, we (the tech industry) have put a lot of effort into tricking them to do what we want, now maybe it's time to trick them for their own benefit, rather than ours.
This attack will come as a surprise to Apple. In the past, the company has had a bromance with the authorities and has blindly acquiesced when asked to remove apps from the China app store.