Facebook says a bug caused its iPhone app’s inadvertent camera access (techcrunch.com)
168 points by Liriel on Nov 13, 2019 | hide | past | favorite | 136 comments


> “I guess it does say something when Facebook trust has eroded so badly that it will not get the benefit of the doubt when people see such a bug,” said Strafach.


Exactly. That's what happens when you are repeatedly caught lying, reverse your statements only after being caught, and then pretend nothing happened.


Sadly, it seems to be the norm in business and society these days. It all started when people began espousing the "it is better to ask for forgiveness than to ask for permission" nonsense, which I guess is one of the pillars that made Silicon Valley what it is today. And now it's a race to the bottom, as everybody needs to follow the same philosophy to compete.


In a way it is the same thinking that speeding drivers use. They need to get somewhere in a hurry, and it is too much to be bothered and slowed down by being careful. Traffic rules become a nuisance. Then, when an accident happens or they get caught by the police, they blame bad luck or everybody else.

Thinking long term (playing the infinite game) is so overrated...


Makes me think of the "taxes are bad" people.


And this is the problem with losing trust.

It's perfectly plausible that this could have been a bug. It could also have been some type of performance enhancement, e.g. initialize the camera in the background so you can begin a video call more quickly.

But do you believe them? Facebook has squandered any goodwill I ever had for them.


But why when scrolling, instead of when the app is initialized?


None of these “bugs” ever seem to result in accidentally accessing less data. How odd.


What is this comment even trying to say?

There are plenty of bugs that result in less data... many of them are called crashes and FB has them just like any other app.


ELI5:

It's a pithy sardonic addition to the chorus of scorn rightly directed at Facebook, rearticulating the consensus opinion that one consequence of their persistent malfeasance and its regular public disclosure is a profound loss of trust, such that genuine mistakes will never be perceived as such; and furthermore, this creates a self-perpetuating cycle in which embarrassing true bugs are interleaved with further documentation of bad faith in such a way that the distinction is not only lost but essentially irrelevant.


ELI5 of my comment:

The comment I replied to is not nearly as pithy as it thinks and comes across as more corny than commentary.


Better than I could have explained it myself.


If a bug resulted in you sometimes being unable to post to Facebook, or in posts you meant to share with everybody (the setting called "public") going only to your friends, why would the mass media cover it?


Mass media does cover outages.


Broken logging happens all the time, but there aren’t articles about it


I have been going back and forth on this heavily, but two weeks ago I reached out to ISC2 and ISACA's ethics boards to begin exploring the process of terminating the certifications of several FB employees who potentially (let me emphasize: 110% blameless until proven otherwise) have been in violation of the ethics codes of both of those orgs.

I asked ISC2 in particular: if a certain employee was shown to be on XYZ team that handled XYZ task, had, say, ignored multiple outreaches from the community, and (this is key) had not raised concerns internally to the degree that a single individual reasonably can in a megacorp... could their CISSP, say, be revoked? The answer I got was a strong yes for the hypothetical scenarios I proposed.

I also emphasized that I would write an apology to said employee, and offered to resign / terminate my own CISSP/CCSP if I am wrong, because we have to prevent this process from becoming a no-lose vindictive game.


I've long suspected this, as the Facebook app in particular causes the phone to get hot around where the main camera module IC is located. It is also a battery drain. They appear to be doing this in collaboration with Apple, as the reported battery usage for the camera does not increase despite the camera constantly being initialized while using the Facebook app.

I've also noticed that when certain ad platforms (Taboola/Outbrain) inject their code into Safari, the same heating occurs. This would make sense, as Taboola and Outbrain appear to be linked to Israeli intelligence.

I guess they not only don't care about our privacy, they don't care about our battery life or the health of our devices either.

It sort of reminds me of how Windows would spin up your HDD in the middle of the night and scan through your entire hard drive, causing it to fail prematurely. All under the guise of Windows Update, or whatever the cover was for what I consider criminal damage.


> They appear to be doing this in collaboration with Apple as the battery usage for the camera does not increase despite the camera constantly being initiated while using the Facebook app.

I’m sure any battery usage from this would just be listed under Facebook.


Right, battery usage is reported by app not by hardware function. It seems unlikely Apple would be willfully letting Facebook have special capabilities.


Not too long ago, the idea of the NSA & GCHQ working together to silently strong-arm tech giants to spy on their own citizens/customers was a laughable conspiracy theory.

Then we learned about PRISM. Hard proof that Apple, Facebook, Google, and the NSA secretly collaborate to mass-collect data on US citizens. It's 100% reasonable (and perhaps in our best interest) to assume any large tech company will and does cater to the NSA/CIA/GCHQ/Mossad's interests before their customers' - or even their shareholders'. Revenue and market share are important, but a powerful agency threatening your company takes priority over anything.

Until proven innocent, we should assume that any and every large tech-centric corporation either has been or can be forced to cooperate with PRISM. IMHO now that the public has access to the Snowden leaks and Vault 7, it is irresponsible (and perhaps dangerous) to think otherwise.


Yes, all that is true, and why I used the word "willful" in my comment.


It's very telling when the CEO of the company puts stickers on his cameras. Bug or not, I guess it's better to follow suit with those who know what's going on. https://simplecapacity.com/wp-content/uploads/2016/07/Mark-Z...


Putting stickers over your cameras might be a reasonable thing to do, but it's pretty silly to pretend your threat model is the same as Mark Zuckerberg's.


I keep reading similar statements on HN. Are we honestly saying that, because of Zuck's "stature", there are more people actively attacking him than people exploiting an attack vector to gain access to as many people/devices as possible? Sure, some hacker might get some cred points for being the one to hack the Zuck, but ultimately these guys are after financial gains. This seems to be a prime example of quantity over quality being the better option.


If I were a billionaire, I would pay security experts to make sure my workspaces were secure; things like locking down USB ports and blocking video/audio inputs that aren't used would be a completely expected part of that. Still rich with irony, though.


I'd be almost certain that there are more people looking to crack exceedingly high-value targets like politicians and billionaires than there are looking to crack me.

There are also people looking to crack huge swathes of the population, but state-level actors seem more interested in breaking key individuals for blackmail and espionage.

Although some state level actors are interested in mass surveillance as well.

Tl;dr: yes, Zuckerberg has more to worry about with his devices than the average individual.


It's easy, nowadays, to do anything and say it's a bug. Easier to escape that way than to get your ass kicked by loads of lawsuits.


Same exact thing with the Robinhood infinite leverage. People tried to defend them saying “robinhood isn’t responsible because it’s just a bug”. Being ignorant of FINRA regulations and refusing to fix it when it has come up multiple times is not a bug. Not every fuckup is “a bug”. Calling it a bug does not release you from all liability.


What's concerning is whether FB could potentially target users, based on interests or a variety of other user data, to "introduce" these types of "bugs".

Not trying to start a conspiracy, but given the data and reach FB has, they could potentially target unknowing/non-techy people somewhere in the world and do it without their knowledge/care.


Booting the camera so it is quickly available? Sure, that is a user-facing benefit.

My immediate hunch is rather that they love tracking emotional responses such as widening eyes as people interact with content. Because knowing the true emotional responses instead of just hitting "like" on a post would be extremely valuable data.


It's the rear camera that's activated...

https://twitter.com/neo_qa/status/1190639141979140097


Each time a “bug” like this surfaces they should be massively fined.


> Facebook vice president of integrity Guy Rosen

The same Guy Rosen who was the CEO of Onavo, the infamous data-grab app that Facebook purchased... yeah, that integrity guy...


"The Ministry of Peace concerns itself with war, the Ministry of Truth with lies, the Ministry of Love with torture and the Ministry of Plenty with starvation. These contradictions are not accidental, nor do they result from ordinary hypocrisy: they are deliberate exercises in doublethink."

https://en.wikipedia.org/wiki/Ministries_of_Nineteen_Eighty-...


WTF does a VP of integrity do? Have lunches with the VP of honesty?


Integrity teams at FB are the teams that handle abuse on the platform and other bad things (fake accounts, spam, malicious developers, malicious ads, ...). Why "integrity"? A bunch of the teams (not all) that handled abuse had "integrity" in their name (like site integrity, platform integrity), so I guess the name just stuck.

And what does the VP of integrity do? Manages the organization that handles abuse on the platform.


I guess VP of Abuse is just not a good look


Well, VP of Abuse is exactly as good as VP of Spam. You know, it's not clear whether you are trying to reduce abuse/spam, or to create it.


To be fair it’s often not clear whether Facebook are trying to reduce integrity, or create it. :)


They mostly just avoid talking to the VP of Truth.


Nah, I would think he's their friend as his role is likely to mostly involve promoting "company culture" in all-hands meetings and via the corporate blog.

It's facts that they all avoid or diligently rework to fit their version of the truth.


You know how banks have someone whose job title is something like "Head of Fraud"? Whose role is to work out what a "reasonable" allowable amount of fraud is: small enough that it can be de-risked and budgeted for, but not so small that it becomes "more expensive than it's worth" to fight it? And then to put processes in place to monitor fraud and ensure no more fraud occurs than planned for?

That's what _I'm_ imagining a "VP of Integrity" at Facebook does.


I asked the VP of absurdity, and he told me to go ask the VP of Kafkaesque nightmares.


And the head of the Unity Division.


Israeli companies in particular are hell-bent on acquiring 'big data' on the populations of Western nations.


Well, you can have integrity in the form of being a consistent evildoer. Hitler showed some integrity in his treatment of Jews/Slavs/dissenters. Facebook has consistently shown integrity when it comes to spying on users, lying to everybody, and generally being a douchebag of a company.


* VP of Tegridy


I'm no fan of FB, and do not have their apps installed. However, the rationale is that they initialize the camera so there is little delay when the user actually wants to use it, but the skeptics think the camera being initialized has a more nefarious purpose. The scientific method would suggest these are testable theories to prove/disprove. Could we not monitor the network traffic to see if there is a sudden spike in outbound data from camera data being sent to the mothership? Is it possible that the app can analyze the video content locally on the device without needing to stream it back to the mothership? Users with limited data plans would be destroyed by a constant "live" stream of video, but I am not hearing of these complaints in the wild.


No idea, mate, but what I've noticed is that when I had the Facebook app installed, it used to completely trash my battery on every Android device I tried it on while running in the background. IMO there must be some kind of processing that they are doing locally.


Would energizing the camera chip the entire time you are using FB apps not explain the battery drain by itself?


As for the older Androids I'm talking about, I'm pretty sure they didn't have the camera open 24/7, because the battery drain would have been even higher (based on my perception when taking a lot of photos), plus the camera used to overheat, so I would have noticed. What I'm saying is that they are probably data-mining and processing something. Not sure what that would be, but Android at least is pretty permissive with its app permissions, and used to be even more so, so it could be anything on the phone really.



Ah, yes, a bug, how unfortunate.


Perhaps the bug was showing the camera image?


How would the app even access the camera without user approval? They must have done some nefarious shit to get around Apple's ACLs.


This only works when the app has already been granted camera access.
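A minimal sketch of what that gate looks like on iOS (hypothetical helper, not Facebook's code): AVFoundation simply won't deliver camera frames until the user has approved the system camera prompt.

```swift
import AVFoundation

// Hypothetical helper (not Facebook's code): camera frames are only available
// once the user has granted the app camera permission.
func startCameraIfAuthorized(_ start: @escaping () -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        start()                                // permission was granted earlier
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .video) { granted in
            if granted { start() }             // user just approved the system dialog
        }
    default:
        break                                  // .denied / .restricted: no camera access
    }
}
```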


I noticed that the FB app is very dodgy. Now I use a shortcut to FB in the web browser to check on things on my Android.


Why open source matters.


How so?


You can look at the code to see what it's actually doing.


People rarely look at the code even when it's open source.


> People rarely look at the code even when it's open source

So? The point is people can if they want to. Even if only a small portion do, it's better than having closed-source code.


Not only that, even if you have reproducible builds, they're a lot harder to verify on mobile targets where you can't just compare all the installed files.


What am I missing? The binaries are the same. I don't see why paths matter. One can hash and intersect the sets.
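As a toy illustration of "hash and intersect the sets" (paths and names below are hypothetical, and the parent's point stands: on a stock phone you generally cannot read another app's installed files like this):

```swift
import Foundation
import CryptoKit

// Toy sketch: digest every file in two build outputs and compare, ignoring paths.
func digests(in directory: URL) -> Set<String> {
    let files = (try? FileManager.default.contentsOfDirectory(
        at: directory, includingPropertiesForKeys: nil)) ?? []
    return Set(files.compactMap { url in
        guard let data = try? Data(contentsOf: url) else { return nil }
        return SHA256.hash(data: data).map { String(format: "%02x", $0) }.joined()
    })
}

let local = digests(in: URL(fileURLWithPath: "./my-reproducible-build"))    // hypothetical paths
let shipped = digests(in: URL(fileURLWithPath: "./extracted-store-build"))
print("matching files:", local.intersection(shipped).count)
```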


Color me unsurprised.


Good time for a reminder that Facebook has previously intentionally bypassed permission dialogs to gather data without user permission:

https://assets.documentcloud.org/documents/5433555/Note-by-C...


Also a nice reminder of that lovely photo of Zuck with a sticker covering up his webcam. Now why would he do something like that...?


Because he's much more likely to be targeted than any of us for things like that?


“ALL ANIMALS ARE EQUAL / BUT SOME ANIMALS ARE MORE EQUAL THAN OTHERS.”


Not sure who downvoted you. But I agree 100% with this, he is a high value target.


I feel like a high-value target to myself, though. If the environment is untrustworthy enough for Zuckerberg to defend himself against it, he should consider his own role in making it that way.


But the truth is you should always treat your environment as untrustworthy. I don't think you should dock him points for blocking his webcam.

It is an ADDITIONAL privacy step. He runs a company with thousands of employees, and with any company like that you never know if someone may decide to become a bad actor, and physically compromise him.

But yeah, let's continue to hate on Facebook.


> But yeah, let's continue to hate on Facebook.

Yes, because they're voracious collectors of people's personal information. We can hate on "bad actors" as well, we don't have to choose one or the other. But as long as Facebook is behaving creepily, whether deliberately or inadvertently, let's hate on them. There is no reason to tolerate their mistakes.


[flagged]


Nothing is preventing you from putting a sticker over your webcam, just like Zuck does.


Or I can close my eyes and imagine that nothing happened.

The result will be the same.

Stickers do not work, because the people around you do not magically put them on too.

And what's even sillier is that someone still has a phone with the FB app installed in 2k19.


Why are you concerned if someone else does/doesn't do something? You can only control your own actions. Not protecting yourself by covering your webcam because the majority of others do not is not sound logic. I have a bright pink piece of tape over mine, and it is always a conversation starter. I have seen later that many people do wind up covering their webcam. It's not a 100% conversion rate, but it is definitely > 0%.


Why would someone want to defend corporations? Especially ones that are breaking the law.

What are the motivations of such people?


Who here is defending "corporations" (by which I assume you mean corporate bad behavior)? Do you think Facebook and their ilk are the only actors who might want to hijack the cameras on your devices?


There is no good corporate behavior.

>Who here is defending "corporations"

dylan604 is trying to shift the focus of attention.

That is a type of manipulation. And in this context, it is a defence.

>Do you think Facebook and their ilk are the only actors who might want to hijack the cameras on your devices?

Nope. But for some reason FB can't do QA. At all.

And that does most of the work for the other "actors".

Why does this happen?


Wait, what? I am not defending evil corp at all. IMO, my comment about covering the camera on your device is purely self-defense, because evil corp can't be trusted. How you can twist my self-defense suggestion into defending evil corp's decision to nefariously use the webcam is some serious bending of logic.


> dylan604 is trying to shift the focus of attention.

> That is a type of manipulation. And in this context, it is a defence.


>Stickers do not work

They do though.


They do if everyone puts them on.

Does everyone put them on?


No one else has a computer in my bedroom


So your neighbors with a hi-res full-spectrum camera pointed at your windows don't count?

Mhhh, okay


If I were Mark Zuckerberg, the sticker would be the solution for my laptop camera, and buying up all of the adjacent houses would be the solution for neighbors.

http://money.com/money/4346766/mark-zuckerberg-houses/


But you are not him.

And you still need to control your data.

And you can't do that effectively now.

And if people keep defending corporations and hiding their f-ups, we will never live in a better world.


This is an intentional misreading of the parent comment and does not belong on this forum. Zuck being a high value target for extortion/exploitation/attack is an objective fact. Saying so doesn't imply that anyone else is "low value" - attackers will attack whoever and whatever they can to achieve their goal, which is almost always money.


If it is a misreading, then the first priority of FB is defending privacy and user data at all costs.

For everyone, not just one person.

Nice


Other than not using them themselves (which, just to be clear, they absolutely should not be doing), how is Facebook supposed to protect everyone from remote camera takeovers?


Do QA.

Do security properly.

Write code with security in mind.

Do QA properly.

Do not hide things like Cambridge Analytica.


In a building he owns, on a network he owns, defended by an army of network engineers he employs?

No, he’s not protecting himself from any external threat.


This is common practice among US government employees as well. The fact that Zuck does it doesn't say anything about Facebook's practices WRT their users.


> This is common practice among US government employees as well

Sure, but they are protecting themselves from the Russian hackers that are in all the US government networks. Zuck is protecting himself from Facebook itself.


Couldn't he just tell that army to add an exception for his account?


What about taking the laptop outside of the office? Doesn’t seem like a crazy thing to do.


Ever heard of 0-days?


I do it. My wife does it. My sister does it. The cameras at Bloomberg all had shutters, and I used them there as well. It's just good (paranoid?) OPSEC to defend against remote takeover exploits.


How does tape over a camera prevent remote takeover specifically?


It doesn't, obviously, and I never said it did. It prevents the camera from being used to surreptitiously record useful information when a machine has been remotely compromised. It's the same reason nothing with a transmitter is permitted in classified areas without specific authorization. Preventing a remote takeover is effectively impossible, but these steps reduce the usefulness of such an action (which is part of the defense against them).


> > > It's just good (paranoid?) OPSEC to defend against remote takeover exploits.

> > > [Camera shutters are] just good (paranoid?) OPSEC to defend against remote takeover exploits.

> > > [Camera shutters are] ... good ... [defense] against remote takeover exploits.

Clearly I misunderstood your intent, but the comment does seem to indicate what I thought.


Nothing there says or indicates anything about preventing takeovers. Preventing is a strict subset of defending against.


Physical controls can be as effective as technical.


[flagged]


Could you please stop posting unsubstantive and/or flamebaity comments to Hacker News?

https://news.ycombinator.com/newsguidelines.html


I absolutely hate this sentiment.

Technical people know about this stuff, and designed it to trick people who don't know about it into thinking it's benign. But sure blame the user. Seems to be a favored go-to for Facebook.


Turning off one's microphone is not so uniformly easy that failing to do so justifies the epithet "stupid." Sometimes you can only deny permission to use the microphone, not turn it off entirely. Sometimes permissions are reset in an update. With several devices on different platforms, it can be hard to keep track. And then, others around you may have their microphones on, and voice recognition seems to work.


Do you realize that every microphone I have is basically embedded in the devices? I can't temporarily turn off my laptop's microphone or my phone's.


This is arrogance, and it doesn't belong here.


I also noticed they've started resetting the notification settings I set through the Android settings menu for Messenger. I didn't know this was supposed to be possible.


I think the developer can change the notification channel ID (channels are used to group similar notifications, and the user can disable a specific channel), and then you basically start with a fresh channel without the previous settings. You would have to disable all notifications for the app completely (not just a specific type) to prevent this in the future.


Or, you know, delete the demonstrably untrustworthy app from your device...


That's what bothers me the most about my Samsung phone. Facebook came preinstalled and I cannot uninstall it; I can only deactivate it (whatever that means).

I don't use Facebook, I actually never did. It's not the fact that I'm losing a few MBs of storage that really bothers me, it's the fact that this is the Facebook app.


You can uninstall it using ADB, even without rooting. Not a consumer-friendly solution but I’m throwing that out there in case you want to try it.


I think I'll try that. I guess that'll work for LinkedIn/Office as well? However, not sure if the BYOD policy will like that.


I don't have it on my phone, and I don't see a reason for a native Facebook app. The mobile website works well enough if I need to check something (sadly, it's also heavily used by companies and newspapers in my country), and Messenger works through https://mbasic.facebook.com/messages/


That's actually what I ended up doing to 'fix' it. It's disappointing, yet not surprising, to learn it's intentional subversion of the API.


How does one "bypass a permission dialog"? What's the permission dialog for, if it can be "bypassed"?


By finding a bug or oversight in how the dialog was implemented.


> Good time for a reminder that Facebook has previously intentionally bypassed permission dialogs to gather data without user permission:

I wonder if the camera app leaves some by-product that the Facebook app can exploit to derive some data that the user would not usually give to FB.

Perhaps it initializes the GPS without being prompted (as the camera uses it for geotagging). Or maybe it checks the time required to enable the camera, comparing it with previous attempts in a kind of A/B test, so it can tell whether the camera was being used by another app?


Once again, my paranoia about never using FB from my phone is retroactively justified.


Of course it did...


Is not bug, people worked hard on this feature.


Bugs are what you get when you move fast.


Such bullshit.

They definitely initialize the camera in the background. That is how you are able to simply swipe to your camera view and it is immediately active. Otherwise you would get a slight delay while the camera is initialised.

What they are calling a bug is that the user actually saw it rendered to a view...

Have no doubt - this happens on ALL apps that don't have an apparent delay on switching to a camera view. That Facebook are saying this is a bug that they have "fixed", rather than admitting that they intentionally initialize the camera in the background, makes me concerned as to why they would want to hide that fact...


Looking at the videos, it looks like the camera opens when the main view is swiped to the right. And you can see a ~300ms delay between when he swipes down and when the camera starts. This makes sense, because the way you open the story feature on FB is to swipe right. It's a good UX feature to start booting the camera as soon as you initiate the gesture, rather than when you've fully navigated so the camera is ready when you've finished your swipe.

Their layering system also pushes the view slightly to the right to create a perspective effect when it transitions back to the app from viewing an image.

Of course, this triggers the component that thinks the app is starting to pan to the right - and starts the camera and renders to the view behind the main view.

This seems totally plausible to be a bug - and I'm not sure why other commenters on HN aren't bringing this up.

Their app is fairly complicated, and it's totally reasonable that the team that worked on the story feature assumed that the only thing that would cause the view to move right was a user gesture. However, it looks like their layering system also caused it to move right as part of a perspective transition. This sounds like a bug to me.

If you really wanted to keep the camera open in the background, there are other ways to do so, such as literally not rendering the image to a visible framebuffer upon initiating the AVCaptureSession. iOS does not require you to attach an AVCaptureVideoPreviewLayer to the capture session - and you could very easily just take those frames and process them without ever showing a preview to the user.
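A minimal sketch of that point (standard AVFoundation usage, not Facebook's actual code): the session below runs and hands frames to a delegate, and no preview layer is ever attached.

```swift
import AVFoundation

// Minimal sketch (not Facebook's code): an AVCaptureSession can run and deliver
// frames to a delegate without any AVCaptureVideoPreviewLayer, so "nothing was
// shown on screen" does not imply "the camera was off".
final class HeadlessCapture: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let frameQueue = DispatchQueue(label: "headless.frames")

    func start() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: frameQueue)

        session.beginConfiguration()
        if session.canAddInput(input) { session.addInput(input) }
        if session.canAddOutput(output) { session.addOutput(output) }
        session.commitConfiguration()
        session.startRunning()   // frames now arrive below; nothing is rendered anywhere
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Each frame could be inspected here without ever being shown to the user.
    }
}
```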

The bug is not that they were capturing in the background, but that the perspective transform of the main view caused the view behind it (for the story camera feature) to think the user was swiping the main view to the right and to start up the camera to make sure it's ready asap. Running the camera in the background is actually pretty expensive - you don't want to suck power doing so. As much as facebook wants your data, it also wants you to regularly interact with the app. If users think the app is draining battery too fast - they'll use it less throughout the day to preserve battery life. Making sure it isn't a power suck is important to their core business.


I have heard that Facebook's app delivery pipeline is automated (so that they could release in a day if they wished), but also that they dogfood their apps with employees. So I'm not sure how such an obvious bug was released. Either their employees don't use the app or their 100% automation is pathetically bad.


Or they knew about the bug, but considered it just a visual glitch (not considering people might get upset about privacy implications), and didn't prioritize fixing it.


This may be true...

Previous versions used to take an age to load the camera screen / the swipe action was blocked (presumably due to the initialising process). So it looks like this may have been their attempt at fixing the UX.


They have been doing it for as long as I can recall. It's common practice. From my perspective, the problem is that they aren't acknowledging they do that; they are instead saying it's a bug that they have fixed, but you can be sure they are still pre-initializing the camera...


What they meant is that they fixed the bug that made people notice.


Honestly, this is the reason why I only give apps temporary permissions through an Android app called Bouncer. Prevention is better than cure.


But they never said that they don't init the camera in the background?

Or did I miss some part?


I think it's just a bug.


You'll have to excuse me for being at least a little suspicious when one of the biggest surveillance capitalism companies on the planet ships an app version with a "feature" that looks a lot like surreptitious surveillance.

Facebook have demonstrated the capability to do accurate face recognition, and it's hardly a stretch to assume they could do enough object and possibly brand recognition from a live video feed from your phone's camera as you're using the app.

Perhaps people might not make assumptions or jump to conclusions like those, if the company was one that had a solid track record of respecting user privacy and getting the protection of privacy right. That company is not Facebook...

But what would I know, I'm just "a dumb fuck"...


Same goes for WhatsApp, if your camera preview is accessible on swiping to the left or whatever. They must be sending small snippets of video/audio/images periodically; it's too tempting not to do so at that scale if you already have the permissions.


Except that they'd be hung by their toes if they were caught doing that.



