> I’m sure, internally, it’s looked at as the “mainstream media trying to get clicks...
I interviewed with FB shortly after the Cambridge Analytica stuff came out (but long enough after that it was clear to everyone that FB had screwed up big time).
When it was my turn to ask questions in one particular interview, I gave a standard fallback when I have nothing else: "[despite all the blahblah positives], having worked at other technology companies, what's your least favorite thing about working at FB?"
This guy was a FB vet (maybe 5 or 10 years? Very long in FB time, I think). He gives me this spiel about how hard the teams work to maintain user privacy and how unfairly they're being treated by the public and the media, and how hard it is to work in an environment where everyone treats you so unfairly.
I was floored. Not even a hint of apology, remorse, or "we could have done X better". Just "Unfair. Fake news. We're doing great things, and no one thanks us enough." And from someone who had probably made a mint having been around at FB near-IPO time. It was my first or second interview of the day, and while no one else was so blatant, the sentiment persisted throughout the rest of the day.
I bombed the interviews hard, they didn't want me back, but I had basically decided I'd never work there by the time lunch rolled around.
Someone who worked at Facebook for the better part of a decade responded that they felt the media coverage was unfair to Facebook. You were surprised by this response because, from an outsider's perspective, you felt that Facebook employees should be apologetic? What makes you more qualified to judge whether the public perception was accurate as compared to someone who actually worked there - an "FB vet" in your own words?
Having dug into the Cambridge Analytica story, I can see a number of reasons why Facebook workers would feel wronged by the coverage. In particular, many outlets omitted or downplayed the fact that Cambridge Analytica had lied about the purpose of their data collection, falsely claiming that it was for academic research. Dozens of other academic institutions were similarly allowed to solicit data from users, yet those go unmentioned because most people don't object to users voluntarily sharing their data for academic purposes. But that's what Cambridge Analytica was, from Facebook's perspective. The picture painted by the media was one where Facebook brazenly sold people's data to nefarious actors, when in reality Facebook treated Cambridge Analytica just like other academic organizations in allowing them to solicit users for data - except it turned out that CA had lied about the purpose of its data collection.
This story makes me more empathetic towards your interviewer. If my company were so thoroughly demonized by the media that candidates discount my own experience of actually working at the company and are "floored" when I tell them that the reality is different from the public coverage, I'd be pretty salty too.
>What makes you more qualified to judge whether the public perception was accurate as compared to someone who actually worked there - an "FB vet" in your own words?
For starters, most of us here don't benefit financially from putting a positive spin on disasters at Facebook. As the Upton Sinclair saying goes, it is difficult to get a man to understand something, when his salary depends upon his not understanding it.
I've not talked to Facebook employees personally but I've witnessed the same degree of delusion at other large tech companies. Some are straight-up like a cult with employees willing to defend just about anything their company does regardless of how hazardous it was. Which is even more baffling if one takes into consideration that these are usually salaried employees working there only for a few years with no personal stake whatsoever.
The point isn't even to discuss some sort of internal nuance about the technical details of CA. If you are a company as large as Facebook and you allow user data to be abused to this degree, even if there is some nuance to it, then when facing the public you apologize, tuck your tail between your legs, admit that you screwed up, fix your problems, and turn the arrogance and the world-saving rhetoric down a notch.
> I've not talked to Facebook employees personally but I've witnessed the same degree of delusion at other large tech companies. Some are straight-up like a cult with employees willing to defend just about anything their company does regardless of how hazardous it was. Which is even more baffling if one takes into consideration that these are usually salaried employees working there only for a few years with no personal stake whatsoever.
Well I think I can solve this puzzle. It's possible some people disagree with you about how "hazardous" their company is.
They're not "cult-like" or "delusional". They're not willing to "defend just about anything" despite the fact that they are only there a few years.
They just don't agree with you on how much harm their companies are doing.
Note: This is regardless of whether you are in fact right or wrong. But it kind of bugs me that someone not having the same views is branded in such a way, as if somehow clearly you know the truth, so anyone who isn't automatically on your side must be delusional/evil in some way.
> But it kind of bugs me that someone not having the same views is branded in such a way, as if somehow clearly you know the truth, so anyone who isn't automatically on your side must be delusional/evil in some way.
That's not what I believe at all. I've had plenty of discussions in my life with people from all kinds of sectors, and in the tech industry in particular, FAANG employees in my experience stand out in this way. Talking to someone from the Big 4 like PWC, I never exactly got the impression that they're overly attached to their company's point of view.
Tech companies have very cleverly fostered some sort of ideological atmosphere among their employees that makes them defensive about their wrongdoings, and they have long pushed the idea that they're not just vehicles to create profit for shareholders but on world-saving missions.
As another example, remember when Uber essentially spammed Mayor de Blasio's office through their app in an attempt to undermine regulation and effectively get ahead of the law through a harassment campaign? At the time I talked to Uber employees, and a good chunk of them defended it.
Can you imagine any ordinary industry acting like this?
> Can you imagine any ordinary industry acting like this?
You mean proactively explaining to their workforce the reasoning behind actions likely to gain widespread attention in the press?
Yes; that's called treating your employees with respect. Everyone generally expects employees to have some level of insider knowledge, and it's polite to give your workers enough of a heads-up to not be blindsided by questions from friends and family.
The difference with Facebook and some other tech companies is that there's enough trust that employees are generally better informed about the strategic and competitive landscape the company is operating in, and that context can explain actions that may look nonsensical from the outside.
More industries should work like this, not fewer. It's treating workers as people who can think for themselves instead of as cogs in the machine.
I was working at Facebook when the CA story broke, but not when the events happened. I was in a department far removed from any of the involved parties. I haven't worked for them for several years now, and I currently own no FB stock outside of broad-based index funds¹.
As I recall, the sense of unfairness that was going around was rooted mostly in it feeling like old news. CA was the ghost of a bad policy that had already been rescinded, and there was very little awareness of that in the media coverage. Instead, there were loud calls for Facebook to do something, but every reasonable thing had already been done years before.
When I worked there, nobody believed that the policy which birthed CA was a good idea, which is why it was long gone. Also, everything had played out already in the public eye (in the tech press) -- anyone who had been paying attention should have known about most of these things already.
¹ It seems silly to make all these disclaimers, but they seem necessary with the mood here.
I remember all the puff pieces when Obama's campaign did something similar in 2012, where his app would drink down all the data it could from the social graph. When CA broke, I was asking people why they thought this was news, and explaining why I only used Facebook to shitpost on company pages: it wasn't exactly a well-kept secret that you could do that.
There may have been an effort to stop it from happening again, but from the outside it seems there has been zero effort to mitigate the harm already done. I don't think Facebook contacted anyone to tell them that their very personal data had been downloaded by a third party because one of their Facebook friends participated in some poll, and that they would now be targeted by very personalized political propaganda.
> For starters, most of us here don't benefit financially from putting a positive spin on disasters at Facebook. As the Upton Sinclair saying goes, it is difficult to get a man to understand something, when his salary depends upon his not understanding it.
Ugh, this quote. If someone is working as a SWE at Facebook, their career options are probably pretty good. They can get a salary almost anywhere.
Too often people use this quote as an intellectual shortcut to ‘I’m right, you’re wrong because your job blinds you to your bias’. Sometimes, they do just know more than you about it.
I’d suggest you take your own advice, dial up the humility a notch, and quit branding people who disagree with you as ‘delusional’ or ‘cult-like’.
I think it's fair to say most people working at FAANGs are probably earning significantly more than they would almost anywhere else. It's not just getting "a salary"; it's getting a very large one that wouldn't be available if they decided working there was morally wrong.
As a FB employee, I hear this sometimes: "I can't take your point of view seriously, as you work at FB," instead of debating the facts straight up. It's just an ad hominem attack in a different form.
Academic research isn't some magic word that you just need to invoke, particularly when it involves personal data. You need specific informed consent and the sign-off from an ethics committee on your study plan. To not ask that of someone pretending to use your data for academic research is willful ignorance.
Of course, we are talking about Facebook, who have literally been in the news before for themselves running research experiments on uninformed, random users:
I remember playing with the FB tools back then, and I was surprised by how much information I could pull from them. As I recall, there were few barriers to entry when it came to data collection.
The great irony to me is that Facebook does a lot of stuff people should be angry and scared about, but oddly enough the Cambridge Analytica situation, which broke this whole dam open, is not one of them, and is one where I think they are being treated unfairly.
The reason it ends up being an issue is because of politics: it gives people an explanation and someone to blame for an election that did not play out according to the expectations and/or hopes of half of the country.
Yeah, it's important to note that while my anecdote was from the "post-CA timeframe", they had also just had an exploit that resulted in mass harvesting of OAuth tokens[1].
And of course, a couple months later it came out that they were storing passwords in plaintext logs[2].
This wasn't just about CA. There was a class of problems that FB was facing, and the guy didn't acknowledge a single thing they could have done better.
I'm having a hard time believing that this is a legitimate opinion and not conjured up by Facebook PR.
Facebook is the one that collected the data. It is their responsibility to ensure that whoever they give your data to is who they claim to be, and that they are doing with your data only what they are supposed to be doing.
They should have a comprehensive compliance program for third parties with access to user data, and the necessary enforcement regime with enough bite to prevent abuse - not hide behind their policies until they get caught red-handed and then shift their responsibilities onto everyone but themselves.
From what I gather, Facebook did have a compliance program in place to ensure that data was only shared with those that met its usage policies. Aleksandr Kogan was working at the University of Cambridge at the time, so the claim that he was using this data for academic purposes would seem credible to Facebook. Here are a couple sources that state that Kogan told Facebook that the data was to be used for academic purposes [1] [2].
Certainly, with the benefit of hindsight we can see that the potential for abuse in this kind of data sharing - even with the requirement that it's only used for academic purposes - is significant. But it seems to me that the primary culprit here is Kogan, who lied to Facebook about the use of the collected data, rather than Facebook whose fault was being too trusting of academics.
> From what I gather, Facebook did have a compliance program in place to ensure that data was only shared with those that met its usage policies. Aleksandr Kogan was working at the University of Cambridge at the time, so the claim that he was using this data for academic purposes would seem credible to Facebook
As though simply working at a university is sufficient reason to trust someone.
This person held a research position in the psychology department of a world-renowned university, and claimed to be writing an app to gauge Facebook users' personalities. This is more than just "simply working at a university": Kogan held a research position and proposed an app with a purported academic purpose that was within his field of study.
You would be surprised. One of the other criticisms currently is that FB doesn't share data with academics to do research as freely as academics would like.
Which is why their compliance process wasn’t rigorous enough.
As others have said, requiring evidence of university IRB review (which it doesn't sound like they did) would have been a way to require more safeguards. Yes, bad actors could and maybe would have still abused the system, but it would have made their work harder and more visible.
>But it seems to me that the primary culprit here is Kogan, who lied to Facebook about the use of the collected data, rather than Facebook whose fault was being too trusting of academics.
And what did FB do as due diligence? Very little. When FB discovered the breach, what did they do in response and to mitigate the effects on the affected users? How did they recover damages and/or enforce specific performance of contract terms to remove not only the data in question but all of the products that resulted from processing that data? Again, very little.
When banks gave out mortgages the way FB gave out user data, the result was a massive financial crisis. And now with FB, we got (more) idiocracy.
> And what did FB do as due diligence? Very little. When FB discovered the breach, what did they do in response and to mitigate the effects on the affected users? How did they recover damages and/or enforce specific performance of contract terms to remove not only the data in question but all of the products that resulted from processing that data? Again, very little.
They revoked Cambridge Analytica's access to Facebook's data, and told Cambridge Analytica to delete the data they had gathered. And Cambridge Analytica again lied to Facebook and told them they had deleted the data. That's the extent of what Facebook could have done. If Kogan broke laws - and he probably did - that's the government's prerogative to charge and prosecute him.
Likening this scenario to banks giving out mortgages that they know debtors cannot pay off is not an effective comparison. This is more like someone securing a loan by falsifying their income. In both cases the customers were harmed: Facebook users' data was used for purposes to which they did not consent, and the bank customers' money was loaned out at excessive risk. But the culprit responsible is the one that deceived the company, not the company itself. One could reasonably argue that Facebook should have been wise enough to avoid being duped, but that's still much more generous to Facebook than the bulk of the coverage I read, which attempted to assign primary blame to Facebook rather than Cambridge Analytica.
Facebook could have sued them into oblivion to demonstrate its commitment to enforcing its data protection policies, instead of relying on their word that they would delete the data when they had already broken their word by misusing the data in the first place.
>That's the extent of what Facebook could have done. If Kogan broke laws - and he probably did - that's the government's prerogative to charge and prosecute him.
No. Facebook was aware of the damage that could be done. Relying on a party that had already breached the terms of use for the data (of FB users) to keep their word is negligence. They should have 1. structured their relationship better so that there is contractual and financial recourse against the third party, and 2. carried out a full investigation into the breach at the time they were notified, not when it was reported by the media.
Also, Facebook did in fact breach UK data protection laws and was fined by the ICO for its role in the CA scandal. It was found that their data privacy policies and processes were insufficient. Unfortunately this was before the GDPR, and hence the maximum fine that could be imposed was an insignificant £500K.
#1 is speaking from a position of hindsight. As I said, we can claim that Facebook should have been more skeptical of the intentions of university researchers, but that is far from being negligent in their enforcement of their data use policies.
#2 demonstrates a persistent misunderstanding of what events transpired. Facebook was not breached in any way. Again, Cambridge Analytica did not hack into Facebook's systems. This was Cambridge Analytica's subsequent misuse of data that they had collected with Facebook's consent, but under stricter terms than the purposes for which Cambridge Analytica subsequently used it. Facebook ordered Cambridge Analytica to delete the data.
2. Breaching data privacy laws does not mean there was a data breach or break-in. It just means that your handling of user data is in contravention of the legal requirements.
1. The ICO disagrees with you. Facebook was fined specifically for breaching data privacy laws in the UK.
I have a hard time understanding why your first sentence is necessary. I have nothing against what you said subsequently (and I think it's totally right), and I dislike Facebook myself, but your first sentence is entirely unnecessary and beside the point.
From what I read, your parent comment sounds entirely logical coming from a different POV. I don't see anything illegitimate about it; even if it WAS Facebook PR, the points within are entirely valid.
The comment I replied to completely ignored FB's responsibility for its users' data and, rather than accepting that FB was supposed to be responsible, proceeded to blame everyone but FB itself.
I just don't see that as reasonable in discourse. I mean you can hold those beliefs, but if you do express them in public I think it's right to be called out on how unacceptable that is.
Yes, attack the idea, not the user. I also don't see that you two are on the same page, either. For one, was there false reporting? It is entirely possible to do 5 bad things and have the media report that you did 10 bad things. And while 5 bad things were committed, that doesn't excuse the media's wrong reporting - not that I'm saying they did this in this case.
That's exactly the reason why it feels like a classic PR tactic.
Instead of discussing what went wrong and how FB's policies and operations are deficient, the conversation is being shifted towards how biased the media is against FB.
FB is not the victim here. Hence to have a productive discussion, being on the same page is indeed important - as in accepting first and foremost what FB's responsibilities are.
The conversation did not shift; both sides started from opposite ends and did not move. There was no conversation to begin with.
And this is why we have the schism in our society. No one is willing to find the middle ground and listen any more. Everyone simultaneously believes that listening is the other side's responsibility.
As I said, you need to agree there is a problem before we can talk about it productively. This is "being on the same page".
If there is an obvious problem that one side steadfastly refuses to acknowledge and insists on blaming on others, then yeah, we have a schism in our society, as one side is just ignoring reality.
> Sure, like a bank has the duty to protect your safe deposit box, but CA defrauded Facebook the way a bank can be robbed.
Choosing to give away your property to third parties is not comparable to being robbed. Facebook collected the data on unsuspecting users and proceeded to willingly give away the data. Thus Facebook is responsible for what comes out of it. There is no way around it, and it boggles the mind how this fact is brushed aside.
> Choosing to give away your property to third parties is not comparable to being robbed. Facebook collected the data on unsuspecting users and proceeded to willingly give away the data. Thus Facebook is responsible for what comes out of it. There is no way around it, and it boggles the mind how this fact is brushed aside.
This crucially omits the part where Cambridge Analytica lied about the purposes of the data that was collected, and subsequently lied again when Facebook learned of this deception and demanded that the data Cambridge Analytica collected be deleted.
You're right, this isn't like a bank robbery. This is more like someone securing a loan and then running off with the money. The bank's customers were harmed, and one could criticize the bank's scrutiny of its debtors. But the nefarious party is the one deceiving the bank.
> This crucially omits the part where Cambridge Analytica lied about the purposes of the data that was collected,
This line of complaint is absurdly disingenuous. The problems that Facebook has created are not solved with EULAs, and Facebook's responsibility for having created this whole mess is not brushed aside by claiming that third parties did not click on the right checkbox when downloading Facebook's data.
This is precisely the type of PR problem that Facebook creates for themselves: this insistent, desperate, cynical, and pathetically inefficient way they try to pin the blame on others for the problem Facebook single-handedly created. Force-feeding this nonsense through astroturfing campaigns doesn't change the problem or the responsibility that Facebook has.
> The problems that Facebook has created are not solved with EULAs, and Facebook's responsibility for having created this whole mess is not brushed aside by claiming that third parties did not click on the right checkbox when downloading Facebook's data.
"Third parties did not click on the right checkbox when downloading Facebook's data" is not even remotely close to what happened. The fact that this perspective is so common is big party of why I doubt many people received coverage of the events that was even close to objective.
Aleksandr Kogan was a senior research associate at the University of Cambridge, and developed a personality quiz app that collected data he claimed he would use for academic purposes. He subsequently used this data for commercial and political purposes, and when Facebook discovered this they revoked Kogan's app's access and demanded that he delete the data he had collected. Kogan told Facebook that he had deleted the data when he had not done so. This wasn't third parties not checking the right box; this was a deliberate and involved plan to evade Facebook's data use policies.
Only if you're being metaphorical in what you mean by "checking the wrong boxes". Kogan had a research position in psychology at a world-renowned university. He leveraged this position to claim that his work was for academic research on psychology, and then turned around and used the data for commercial purposes. This isn't some random app developer checking a box when they publish their app. And when Facebook learned that these restrictions were being breached, they revoked Cambridge Analytica's access and demanded that the data be destroyed.
In which case, the bank should be held responsible for your loss. And they should have adequate security in place to prevent it from happening in the first place, or refrain from that line of business. These are all costs that FB has skirted.
Unfortunately, the problem with your analysis is that your view only makes sense if seen in isolation.
A lot of us are evaluating the entire picture, based in large part on the track record of Facebook over the last 10+ years.
For example, everyone who takes a look at the "friendly fraud" [1] case comes away thinking "Boy, these Facebook employees will stop at nothing to make a buck". Now layer that on top of whatever came out during the CA scandal, and now you can see that the issue is Facebook employees are actually acting like a cult.
And by the way, nothing has changed. You can see this in how every time someone from Facebook does any PR at all (e.g. podcast interviews), they take a lot of care to make sure they don't go on podcasts which bring up the privacy issue.
Here is an open challenge to any Facebook employee who is reading this comment - go on an interview with a known "hostile" who is also not considered an idiot conspiracy theorist - e.g. DHH - and have them interview you. You know that you would never even consider doing it, because you do have a lot of shitty things still going on at Facebook that you wouldn't want to later contradict.
By the way, the same challenge applies to folks working at Google.
A common argument (that you also make) is that if you're on the inside, you obviously know better than those on the outside. There are a lot of reasons to believe that isn't true.
First, you may not have the right training (e.g., media studies, media economics, privacy). Second, you favor your coworkers because they are your friends. Third, positive news and justified criticism of external critics are widely shared internally, but the points that are appropriately critical of the company are much less shared. A single flawed external article can lead to defensiveness, making it that much easier to dismiss tens of other appropriate articles.
More poetically, it's like the parable of the Three Blind Men and the Elephant. The conviction of the blind men is so great, but their closeness leads to a misplaced confidence that they know the answer:
I interviewed there this summer because the position/compensation offered was just too good to not consider seriously. Like, top 10% at FB kind of money, which is a lot. During the interview process, I asked many FB employees about their thoughts on FB in the news, the headlines, scandals, etc. All super smart and dedicated people, but anecdotally there was no hint of them thinking they had done anything wrong. They explained for example how Cambridge Analytica was totally overblown in the media, specifically because users opted in, and the media was just bashing them. They did say they wouldn't follow the same route again, but didn't seem to consider themselves responsible ethically/morally. I can see how one might debate whether having an opt-in absolves FB of responsibility, but it had a smell to it and didn't necessarily jibe with my personal ethical framework. I did get an offer and ended up declining, though not necessarily for that reason. But it parallels what you said, except in May 2019. I would consider taking a job there in the future, and I got the sense that they're working to turn things around, but what high-level people at FB said to me largely seemed to mirror your experience many years later, so I thought I'd share.
>They explained for example how Cambridge Analytica was totally overblown in the media, specifically because users opted in, and the media was just bashing them. [Emphasis mine]
That is really troubling. Excusing nefarious behavior because "well the user opted in" is a downright horrible way to rationalize bad behavior. Users never read the TOS. They barely understand the privacy settings (though I've heard they've gotten better ... haven't had Facebook in years). I tend to think that, yeah, Cambridge Analytica was probably overblown a bit for a number of reasons, but wow. I know a couple people who have quit Facebook (one recently, one a couple years ago before the election) and a couple who have stayed. I have to say, the ones who've stayed have drifted away from the rest of our friend group. Sad to see.
I would also imagine that, especially during interviews, they recognize their duty is in part to sell the candidate on the firm. There's an implicit pressure to be positive about your company because there's only downside to saying negative things, even if it's honest. I don't see why an employee would put themselves on the line with a candidate, whom they may never work with or see again, just to express what they think is right.
If you're trying to hire people smart enough to earn "top 10% pay at FB", hiding behind poorly constructed rhetorical barriers is more of a negative than acknowledging a real problem. It doesn't even have to be anyone's "fault", but more a consequence of a system that was built with good intentions.
The people probably had a moral conflict: keep the good Facebook money but lose "their souls", or go with their conscience, quit, and lose that money and their work friendships/network. So, to be able to sleep at night, they've convinced themselves that the company they're a part of wasn't the responsible party: "Hey, those users opted in, it's their own fault!"
Funny how it means the company is now probably full of either "ignorance is bliss" or morally bankrupt people, because the people with a good conscience have left. Not that society at large is much different... what uncomfortable truths are we ignoring?
I know a few people who work there too. I can have an intelligent conversation/debate about almost anything with them, from tech stacks to software development methodology, but bring up privacy and Cambridge Analytica, and all of a sudden the wagons circle and it’s all “we’re so misunderstood,” and “the biased media is out to get us,” and “we do so much for user privacy and always just get shit on!” It’s as if they all went through the same training and got the same talking points. Spooky!
I don’t doubt that Facebook does a lot for user privacy, but it is probably the case that nobody there (except maybe Boz) really, thoroughly understands the enormous amount of power and influence the company wields, and they also haven’t realized that the thing they have worked hard to create is now an out-of-control monster.
Yes. A few hundred thousand people took the quiz, and Facebook's platform thereby allowed them to gather information on tens of millions of people. Facebook had a policy at the time that the extraneous data could only be used to improve user experience, not for third-party data harvesting, but reportedly it was weakly enforced.
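For rough scale, here's a back-of-the-envelope sketch in Python; the quiz-taker count and average friend count below are assumptions based on commonly reported figures, not official numbers:

    # Back-of-the-envelope on the fan-out from friend-level permissions.
    # Both inputs are rough assumptions, not official figures.
    quiz_takers = 270_000   # roughly the reported number of quiz installs
    avg_friends = 340       # assumed average friend count per quiz taker

    naive_reach = quiz_takers * avg_friends  # ignores overlapping friend lists
    print(f"Naive upper bound on affected profiles: {naive_reach:,}")  # ~92 million

Even allowing for heavy overlap between friend lists, the order of magnitude stays in the tens of millions, which is why a relatively small quiz could expose so many profiles.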
It doesn't pay to speak up. We glorify those who made a change, but the truth is that 99% of those who speak up, even about small things, quickly learn to shut up or leave for greener pastures. Remaining in the company are the YES-men and, outside of regular employment, those who speak just to breathe. Still, we encourage everyone to speak up, to be able to weed out the black sheep before they cause too much damage.
I shared a Lyft ride with a Facebook employee right after the CA controversy. I asked her what employees thought about it.
She told me that people are really worried about how the controversy affected the stock price. (Facebook stock was down around that time.)
I'm not sure how many employees worry more about the stock price than about the supposed effects of the CA controversy, but I was disappointed to hear that from a Facebook employee.
Facebook et al are knowingly driving the social collapse of countries because of the same thing that led the bankers to knowingly drive the world economy to the brink of collapse.
How was Facebook a 'horrible actor' in the CA scandal, exactly?
Facebook had relatively permissive APIs. The entire world knew it. Journalists, developers, Big Cos.
Nobody was screaming that there was a problem. Though there were a few issues around privacy, there wasn't really any public dialog specifically around the nature of those APIs.
CA found a sneaky way to take advantage of somewhat open APIs.
As the world started to become somewhat more concerned, and FB saw some room for potential abuse, FB rightly tightened up the APIs a little. This was long before the CA scandal blew up.
Then it became public that CA was doing some sneaky things with the data. It's actually highly debatable whether they did anything wrong within the context of the information available - or at least anything outside of industry norms: what CA was doing with data from FB is what everyone was, and is, doing with data from similar sources.
FB called on CA to erase the data, and checked that they did. It turns out, CA lied and did not.
FB did the right thing every step of the way in the CA scandal. APIs of every kind err a little bit one way or the other over time and as issues surfaced, FB moved in the right direction without coaxing.
The media absolutely misrepresented the entire issue. They didn't really make it clear what happened, nor did they clarify what exactly FB did wrong, but most importantly, they misled the public with respect to a separate, secondary issue which was 'special API access'.
The 'scandal' is that there are tons of companies like CA using private data for all sorts of reasons, it's mostly not FB.
There are definitely privacy issues around FB, and I'm no fan of them, but those are a separate issue.
I'm guessing it might be that the people working hard on maintaining user privacy are not the same as the people making the decisions, so the former feels they're treated unfairly.
Based on the op's description, it doesn't seem like this particular FB employee minded those decisions that much. In fact, they sounded pretty enthusiastic about them.