I wish ad and social media companies would just tag ads (and ideally the more popular articles) the way reddit mods can tag submissions: "Misleading", "Information cannot be verified", "Factually incorrect", "Paid for by XYZ", "Satire", etc. They could probably do this with machine learning or some sort of crowdsourced system, but that seems wide open to well-poisoning attacks. It might be easiest and most accurate to send ads that get enough reports to a human for tagging (or no tag, if they don't think one is appropriate). Have several employees do the tagging, and if enough of them give the same tag, apply it. Warn all advertisers that their ads can be tagged if they don't make an honest effort to accurately represent whatever it is they're advertising. It isn't a perfect system, but I think it could work.
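The rule described above (several employees tag, apply a tag only if enough agree) can be sketched in a few lines. This is a hypothetical illustration, not anything a platform actually runs; the function name and threshold are made up:

```python
from collections import Counter

def consensus_tag(rater_tags, threshold=3):
    """Apply the most common tag only if at least `threshold` raters agree on it.

    Hypothetical consensus rule from the comment above: each reported ad is
    tagged independently by several employees; ties and weak pluralities
    result in no tag at all.
    """
    tag, votes = Counter(rater_tags).most_common(1)[0]
    return tag if votes >= threshold else None

print(consensus_tag(["Misleading", "Misleading", "Satire", "Misleading"]))  # Misleading
print(consensus_tag(["Misleading", "Satire", "Paid for by XYZ"]))           # None
```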
> Have several employees do tagging and if enough give the same tag, apply it.
When the scope of what you're tagging is expansive enough, inter-rater agreement isn't as useful as you'd think, primarily because you get correlated biases across your rater population.
These days the pendulum seems to have swung towards the left being fond of censorship and advocating for institutional control of "truth", so to use a left-coded example: good luck finding a population of raters in the US that doesn't fail the Implicit Association Test[1] wrt anti-black racism.
The "obvious" answer is to make sure the raters on each ad are balanced wrt biases, but doing so on every axis that Google ads may touch is practically impossible. You probably can't even articulate all the relevant axes.
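The point about correlated biases can be made concrete with a toy simulation (entirely my own construction, not from the thread): give every rater a small independent error rate plus a *shared* bias that flips everyone's judgment on some fraction of items. Unanimous agreement stays common, yet the consensus is systematically wrong on exactly the biased items, which inter-rater agreement cannot detect:

```python
import random

random.seed(0)

def rate(truth, shared_flip, individual_error=0.1):
    """One rater's label: follows the (possibly bias-flipped) target,
    with a 10% chance of an independent mistake."""
    target = truth if not shared_flip else not truth
    return target if random.random() > individual_error else not target

n_items, n_raters = 1000, 5
agreements = consensus_wrong = 0
for _ in range(n_items):
    truth = True
    shared_flip = random.random() < 0.2   # the bias hits every rater at once
    votes = [rate(truth, shared_flip) for _ in range(n_raters)]
    if votes.count(votes[0]) == n_raters:
        agreements += 1                    # unanimous panel
    if (sum(votes) > n_raters / 2) != truth:
        consensus_wrong += 1               # majority verdict is wrong

print(f"unanimous agreement: {agreements / n_items:.0%}")
print(f"consensus wrong:     {consensus_wrong / n_items:.0%}")
```

With these made-up parameters, roughly three-fifths of panels are unanimous while about a fifth of the majority verdicts are wrong: high agreement, shared blind spot.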
It seems to be hard for people to imagine any ambiguity or flaws in the official truth that contemporary institutions push, so an easy thought exercise is to consider what this sort of an approach would have looked like during the Cold War, or even post-9/11. If you think those are anomalies and that we're in a post-lies, universal-truth era, I don't know what to tell you other than to suggest picking up a history book and realizing that there's never been such a thing.
[1] I'm aware of the flaws of this test and the associated studies, but what matters here is the perception of its usefulness.
Imagine the outcry if Google were to label an ad for Fox/Breitbart/Daily Caller/Trump as "misleading" or "factually incorrect". (Indeed I might get called out even here on HN for my "bias" given that I highlighted right-wing publications and politicians.)
No media outlet is immune to publishing what may be considered factually incorrect information; see, for example, the case from earlier this year highlighted in [1].
Deciding what is "true" these days is incredibly difficult, if not impossible, when we have a POTUS who wants to control the narrative and dismisses what could reasonably be considered factually correct as "fake news".
Even reporting the "facts" has become incredibly difficult, especially when digital data is so easily manipulated. For example, how can we verify the integrity of a tweet as it was published at a particular second in history? Should a hash be provided? Screenshots are easily faked, articles that embed the tweet itself will reflect any later modification, and even Spez on Reddit admitted to editing the database.
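The hash idea above could look something like this: publish a digest over a canonical serialization of the tweet, so anyone can later check an embedded copy against it. This is a sketch under my own assumptions (the field names and the canonicalization scheme are invented, and no platform actually publishes such digests):

```python
import hashlib
import json

def tweet_digest(author, timestamp, text):
    """SHA-256 over a canonical JSON serialization of the tweet.

    The serialization must be byte-for-byte reproducible (fixed key order,
    fixed separators), or the same tweet would hash differently on
    different systems.
    """
    canonical = json.dumps(
        {"author": author, "timestamp": timestamp, "text": text},
        sort_keys=True, separators=(",", ":"),
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

published = tweet_digest("someuser", "2018-01-01T12:00:00Z", "original text")
tampered = tweet_digest("someuser", "2018-01-01T12:00:00Z", "edited text")
print(published == tampered)  # False: any edit changes the digest
```

Of course this only moves the trust problem: the digest is only as trustworthy as whoever publishes and timestamps it, which is the harder part of the question.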
Everyone wants to control the narrative; this is simply the first time you've been faced with a right-wing POTUS that is capable to some extent of doing so, and it's helped you forget that all political parties are putting forward a narrative, because partisanship is a hefty drug. This is nothing new, which is both good and bad; it's not a novel crisis, but it's also something we can't get rid of.
Change the terms, and give the audience a button to punish the advertiser. I would totally click a button that says "charge this advertiser an extra dollar for being misleading".