A common one is fake consent popups for system notifications.
Websites need to ask for consent before sending system notifications via the Notifications API. If a user declines, that website is blocked from asking again (for obvious reasons).
But many websites cheat this by showing a fake consent popup designed to mimic what the browser would show. If a user clicks "Decline" on the fake popup, the website won't show the real one to avoid being blocked. So the next time you visit the site, they'll be able to show you that popup again as many times as they want.
If a user finally clicks "Accept" on the fake popup (out of frustration, probably) then they'll show the real popup. Most people will assume the two popups are a glitch and just mindlessly click "Accept" twice.
The only way to circumvent this is to click "Accept" on the fake popup, and then click "Decline" on the real one. 99% of people aren't going to know how to do that.
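For reference, the flow described above hinges on the Notifications API exposing a three-state permission. A minimal sketch of the honest version (everything except the `Notification` API itself is a hypothetical name):

```javascript
// The browser tracks three permission states for notifications:
//   "default" - the user hasn't answered yet; requestPermission() may prompt.
//   "granted" - the user accepted.
//   "denied"  - the user declined; the browser will not prompt again.
function shouldShowSoftAsk(permission) {
  // Only show a custom explainer while a real prompt is still possible.
  return permission === "default";
}

// The dark pattern differs in one step: the site only calls
// Notification.requestPermission() after the user accepts the fake dialog,
// so a "Decline" never reaches the browser and never becomes permanent.

// Browser usage (not runnable outside a browser):
// if (shouldShowSoftAsk(Notification.permission)) {
//   showExplainerBanner(() => Notification.requestPermission());
// }
```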
...I'd post this myself as a comment, but I don't like that it's asking for so much personal information (full name, email, state, city, phone, etc)
A similar one is asking you to rate the app via a native-looking modal (which does nothing), and if you rate with 5 stars they redirect you to the app store to vote there (where it counts). If you rate them with 1-3 stars they prompt you to leave feedback instead.
I agree that this is a dark pattern, but I also empathise with whoever first implemented this. Negative reviews are often just "this doesn't work", no further information. That's not actionable at all as a developer, and even if you somehow do fix the underlying issue, it's pretty difficult (or impossible) to get people to update their reviews.
The only problems here are the use of UI that mimics the native one and opening the App Store without the user's intent.
It's actually a good idea to ask the user for feedback internally; a lot of low-star reviews are bug reports or help requests that wouldn't help anyone (those who don't have the app yet can't judge how relevant that issue is to them, and the developers have no channel to communicate with and help the user who is having the issue).
> It's actually a good idea to ask the user for feedback internally, a lot of low star reviews are bug reports or help requests that wouldn't help anyone
But importantly, ask the user once and only once; do not force the user to leave a review. Pestering them will lead to more one-star reviews along the lines of "wouldn't stop asking for a review".
Also I immediately hate any app asking for a review. It may be useful for the developer but it’s user hostile imo.
That's really about the relationship with the user and the timing of the request. The best practice is to ask the user for a favour right after something good happens and they get value from your app, as they will be glad that this app exists. You first give something to the user, then ask them to give something back later, with a request like "If you like the app, please give us a review, it helps a lot". It's not a coincidence that all the successful YouTubers ask for a like and a subscription if you like the video.
Interrupting a user action, on the other hand, or asking for a review over and over again, is extremely annoying and can easily backfire. For the official review UI, Apple enforces a "2 times per year per user per app" restriction, but if you annoy the user enough through your self-made review request dialogs, they can get angry enough to find their way to your App Store page and give you a 1-star review.
Forcing the user to do something, or trying to coerce them into a 5-star review in order to use the app, backfires easily.
> That's really about the relationship with the user and timing of the request. The best practice is to ask user for a favour right after something good happens and they get value from your app as they will be glad that this app exists.
There’s that user hostility again! Also I’m not sure how the YouTube example is related. When a YouTuber says hit subscribe the user can take a second to do it. They don’t stop what they’re doing and get forwarded to a different website entirely.
I recall experiencing at least one application that would crash if you declined to rate it in the app store. I'm sure it was just shoddy implementation, not handling some condition correctly, but it was hard not to feel like it was intentional.
On a related note, Google Maps has on occasion deleted reviews from businesses that used software that employed this tactic for requesting reviews. I’ve seen several businesses lose hundreds of five star reviews because of it.
To a certain extent I can't blame apps for doing this. It would help if Android and iOS offered a better experience for leaving feedback or giving reviews.
This is commonly referred to as a “soft ask.” The reasons for it are not always nefarious. On some platforms you cannot provide any commentary on why you want to send push notifications and so the soft ask provides a way to give more context on the next (real) permission dialog.
I’m not saying this isn’t abused all over, but when used effectively it can provide the user with more information to decide if they want to accept or not as well as allow the website to request it again at a future time possibly for a different reason.
I'm not referring to "soft asks". The dark pattern is creating a fake dialog that mimics the real system dialog in order to mislead, and circumvent a feature designed to protect users from spam/abuse.
Telling a user why they're about to get a permissions dialog, and displaying a real system dialog, is obviously not a dark pattern.
They're common as hell, but I can't seem to find a live example of what I'm talking about right now.
Almost every small local news website that wants to send you push notifications has started doing this—a sticky popup that they can show you as many times as they want, providing only a little more information than the actual permissions pop-up would, allowing them to bypass "only request permissions after user interaction" schemes and reducing their (UA-visible) decline rates.
> This is commonly referred to as a “soft ask.” The reasons for it are not always nefarious. On some platforms you cannot provide any commentary on why you want to send push notifications and so the soft ask provides a way to give more context on the next (real) permission dialog.
What I'm reading here is that you (not you specifically) want to ask for my browser permission, but know that the popup is non-descriptive and it's your one shot.
If you are nefarious, creating a fake popup makes perfect sense. You lower the risk and increase your chances.
If you are not nefarious, why even go for fake popups? Why not have a button in the corner? A choice in some menu? "Hey? Want updates from us? Click here!"
Wanting to provide more commentary on why you want to send push notifications never non-nefariously leads to creating fake popups.
Why not make a user who wants notifications click a special button taking them to a special page where you show the notifications popup?
I'm sure if you asked, everyone who does this is going to tell you that "MY use of it is not nefarious. It's everyone else's fault that technique is abused."
Stop breaking the user's browser. If you are, then regardless of your intent, you are making a shitty experience for somebody, somewhere.
As long as it doesn't mimic the browser. (Another reason why user agent strings and other ways websites can identify the user's browser are a bad idea).
It doesn't matter if it mimics the browser. If you are trying to stop the browser from protecting the user as intended, it is a dark pattern, regardless of how well you camouflage the attempt.
In principle yes. But why would you imitate the browser if not to mislead the user? And who wants to mislead the user but has qualms imitating the browser?
People do all kinds of mental gymnastics to justify doing stuff they know is wrong. I would be unsurprised if people use the lack of camouflage as an excuse to justify using a dark pattern.
I get these all the time on mobile websites, and arrived at the same solution you did: tap Accept on the fake popup and then Block on the real consent box.
I suppose a site could trick me by making their fake popup look exactly like the real one. But in that case I would tap Block and be no worse off, other than seeing the same notification again the next time, at which point I would probably figure out their trick.
I thought of explaining this to some friends to help spare them some needless popups, but decided it was too complicated and would likely just confuse them. This is not an insult toward my friends, just a reminder that something that seems simple to you or me may be very puzzling for most people, no matter how intelligent they are.
Case in point: countless relatives calling "I have a virus on my computer!" because a website showed them a Windows XP-style "you have a virus" popup (they are on Windows 10, or macOS).
App developers do this in their apps too when asking to rate the app in the App Store. They first show you a fake popup asking if you are enjoying the app OR want to send feedback. They will only show you the real iOS popup for reviewing the app if you tap "Yes" to enjoying the app.
It sort of is against the rules (the guidelines disallow custom review prompts) but it doesn't seem to get enforced as far as I can tell. Even top apps like YouTube do this.
> Use the provided API to prompt users to review your app; this functionality allows customers to provide an App Store rating and review without the inconvenience of leaving your app, and we will disallow custom review prompts.
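The gating the guideline targets boils down to a one-line filter; a sketch of the logic (all names hypothetical):

```javascript
// The pre-screen the guideline forbids: only users who self-report as happy
// ever reach the real store review prompt, which skews the sampled ratings
// upward relative to the actual user base.
function nextPrompt(enjoysApp) {
  return enjoysApp
    ? "native-review-prompt"    // routed to the store, where the rating counts
    : "internal-feedback-form"; // kept out of the store's sample entirely
}
```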
I took the risk and posted the (relevant) text of this comment and one of the replies. Someone's gotta at least try to bring this to the official sources, right?
Thank you! This is something that has been annoying the hell out of me on Instagram using desktop Firefox.
Every time I log in it prompts to show notifications; I always decline, so it shows it again the next time I log in. This time I accepted, but blocked it from within Firefox.
I get it's not a dark pattern because it's clear it's not the browser asking, but still it's very annoying.
I’ve done this for an app, but not for nefarious reasons. A huge part of the app is location based, and users would deny location permissions and then not be able to turn them back on (you can go through settings, but an awful lot of people don’t know how). The soft ask is one time when you start the app (with an explanation as to why), and if you deny access there it’ll only ever ask again if you tap something like the ‘use my location’ button.
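The non-nefarious version described here can be sketched like this (the storage key and function names are made up; in a browser you'd persist the flag in localStorage, and the real prompt would be something like navigator.geolocation.getCurrentPosition):

```javascript
// One-time soft ask: a declined explainer is remembered, so startup never
// nags again; only an explicit user action ("use my location") re-asks.
function startupShouldSoftAsk(storage) {
  return storage.locationSoftAskDeclined !== "true";
}

function recordSoftAskDecision(accepted, storage) {
  if (!accepted) {
    storage.locationSoftAskDeclined = "true"; // respect the decline permanently
    return "skip";
  }
  // Only now hand off to the real permission prompt.
  return "request-real-permission";
}
```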
If a site does something like this, a) You know that site is malicious. b) Choose another site. c) As users get more sophisticated, these kinds of tricks won't work.
I found that a lot of those have commonly named elements you can create rules for in NoScript so you never see them. Not sure if it would affect the dark pattern ones though.
Highly debatable. In fact, trying to classify it as a dark pattern may derail the very valid discussion that the FTC is trying to have. There are significantly worse patterns out there. See [1].
What the push notification pattern is, is annoying. And it is especially annoying because of the prevalence of confirmation dialogs all over the Web with GDPR/CCPA, paid subscriptions, etc. But does it cause harm or monetary loss like the sneak-into-basket pattern? Or the opt-out unnecessary "insurance" that airlines continue to put in the checkout flow?
We do a disservice to ourselves littering the web with these constant asks. But it's not what needs regulation and enforcement.
The pattern tries to avoid an outcome desired by the user: permanently revoking consent for some permission. Only asking when you are confident the answer is Yes goes against the intent of the platform functionality, and I’d argue that’s a major dark pattern.
Similar to how apps used to ask “how do you like the app?” and then only prompt to review the app if you responded favorably. It goes entirely against the app store’s intent to uniformly sample users, and I’m glad Apple at least has cracked down on this practice.
Just honestly make a product and stop trying to fool the user!
Like eBay asking over and over for me to trust the site for payment transactions. No, no. I have a separate paypal account with 2fa enabled rather than giving you my credit card for a reason. Asking 1,000 times will never change my answer, but eBay remains ever hopeful.
It’s a trick to try to take a yes / no choice and sneakily turn it into a “yes / ask me again later” choice. Silicon Valley in general seems to have a huge problem with the idea of user consent and permanently revoking consent.