Let me offer another quote from a different link [1] (Nature): "Equipped with more-rigorous statistical methods, researchers are finding that social-priming effects do exist, but seem to vary between people and are smaller than first thought, Papies says."
> but seem to vary between people and are smaller than first thought, Papies says. She and others think that social priming might survive as a set of more modest, yet more rigorous, findings.
You linked the full statement, and then focused your comment on only two words of that statement.
What about the linked article makes us think that this is something the US government has the authority to step in and ban? Should we skirt both the First Amendment and the free market over a social theory that might survive as a set of modest findings?
It's good that the courts are blocking this in the absence of a specific, credible threat. "Some effects do exist, but we don't really know what they are, and they're a lot more modest than the scare effects that people are familiar with" is not a specific, credible threat.
The threat of allowing the US government to get comfortable exercising ever more invasive control over the free market for questionable reasons, to the point of turning off or banning widely used communication channels -- that's a much more tangible threat than what I'm seeing linked in your article.
I am not talking about the dangers of the US government banning some app or service. I am talking about the dangers of the very existence of that service, ruled by what I consider quite unfriendly people. Unfriendly not to me personally, but to the way the US operates.
As usual, the official justification and the real reasons may be very different. The "security threat" may be direct or indirect. I tried to present a picture of an indirect threat to the security of the US.
> I am talking about the dangers of the very existence of that service
My point is that the indirect threat to the security of the US through priming is statistically tenuous, that there's little reproducible evidence even from optimistic researchers that priming works on a significant scale (or at all), and that even if the threat is real it's likely much smaller than the threat posed by this kind of over-regulation and government overreach. My point is that a threat this vague doesn't warrant this kind of discussion in the first place.
The link you posted optimistically describes a very limited, focused effect that is still actively being debated. Is there any accepted scientific evidence, at all, that priming in a 60 second TikTok video would have a larger effect on the average person's politics than seeing friends share fake news on Facebook?
I would like to note that your last question equates a single 60-second video with a fake-news post. You are discarding the effects of repeated viewing of different algorithmically selected videos on the views of viewers. That experiment is being run right now by no less than TikTok itself.
Actually, my beef with Facebook is exactly the same: the user does not control the feed; Facebook does.
> You are discarding the effects of repeated viewing of different algorithmically selected videos on the views of viewers. That experiment is being run right now by no less than TikTok itself.
And is there any accepted scientific evidence, at all, that this would change whether we should be more worried about priming effects in a TikTok video than fake news posts directly shared by friends and family?
> Actually, my beef with Facebook is exactly the same: the user does not control the feed; Facebook does.
Our concerns with Facebook aren't related to priming. If your concern is that centralization and control over algorithms can be dangerous (especially in the hands of an authoritarian regime like China), then I agree, but I don't see any particular reason why TikTok poses a unique danger in that regard. CraigJPerry's original comment, which you replied to, still seems pretty on-point:
> Surely Facebook’s a bigger issue than 60 second videos since any bad actor can target messaging at susceptible users and no one will know.
> [...] what is the method of using a 60 second video platform to share messaging with a crowd who will immediately scroll past anything heavy (or just plain stop using the app) to get to the next funny video? It seems an ultra-low-effectiveness medium for controlled messaging.
The main conclusion I have is that TikTok may pose a danger (just like any social media network), but the dangers it poses are not big enough or well-defined enough to justify this kind of intrusion into the free market. Is that something you agree with?
You keep asking for scientific evidence of the effects of a completely new phenomenon. I cannot provide that. On the other hand, I am pretty sure you cannot provide me with evidence of the absence of effects from an experiment of that magnitude.
Facebook is also doing priming, by controlling what the user sees. Heck, even television and radio back in the day were used for priming, albeit without such a direct feedback loop.
About "intrusion into the free market".
I see "free market" as a neutral or even negative thing. I do not see "intrusion into the free market" as necessarily negative thing.
I see a ban of TikTok as a somewhat positive thing, given my view of the situation. I would really like to see regulatory action on Facebook too.
- Social priming does not replicate.
- Social psychology has a replicability rate of 25%.