Surely Facebook’s a bigger issue than 60-second videos, since any bad actor can target messaging at susceptible users and no one will know.
This just seems like yet another case of Republican doublespeak. Say one thing, do a different thing that just coincidentally happens to grab power over others.
Also, what is the method of using a 60-second video platform to share messaging with a crowd who will immediately scroll past anything heavy (or just plain stop using the app) to get to the next funny video? It seems an ultra-low-effectiveness medium for controlled messaging.
TikTok, being a massive analytical (see other discussions on HN) and priming platform, allows quite interesting effects on humans to be discovered and/or tested.
Facebook is primarily text, which is low bandwidth and goes through higher thinking. TikTok is high bandwidth from the start and is primarily analyzed using much older parts of the brain.
I do not like Facebook (don't have account) and I vehemently hate TikTok.
Priming is one of the least reproducible effects out there and lies literally at the core of the reproducibility crisis [1]. I'm going to call BS on claims of some sort of spooky weaponization of it.
Edit: To add necessary context for those unaware: TikTok's audience in the US is politically liberal. The buy-out of Trump rally tickets a few months ago was organized there [2].
It's true that we should be talking about the security implications of foreign control over our social media platforms. It's also true that we should be concerned about a targeted ban (instead of, say, a general crackdown on foreign social media) on what amounts to a meetinghouse for Trump's critics by the Trump administration itself.
To quote your link: "Five years later, Kahneman’s concerns have been largely confirmed. Major studies in social priming research have failed to replicate and the replicability of results in social psychology is estimated to be only 25%"
I cannot discern whether the author means that social priming research has a replicability of 25%, or that social psychology as a whole has a replication success rate of 25% and priming research has an even lower one.
The scale at which TikTok operates is much larger than that of any study a group of scientists can perform. This is true for the audience (or test subjects), for the amount of hardware used, and for the information gathered.
The link you provided still allows for some replicable effects to be present: effects that were hypothesized, planned for, then tested and brought to the light of day.
At the scale of TikTok, effects can be discovered without a hypothesis first, just by using the sheer analytical power available.
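To make the scale argument concrete, here is a minimal sketch (standard statistics only, assuming nothing about TikTok's actual tooling): a two-proportion z-test shows how an effect far too small to detect in a lab-sized study becomes overwhelmingly significant at platform scale.

```python
import math

def z_score(p1, p2, n):
    """Two-proportion z-statistic for two groups of equal size n."""
    p = (p1 + p2) / 2                       # pooled proportion
    se = math.sqrt(2 * p * (1 - p) / n)     # standard error of the difference
    return (p1 - p2) / se

# A tiny hypothetical effect: 10.1% vs 10.0% engagement with some message.
lab_scale = z_score(0.101, 0.100, 1_000)          # typical academic study
platform_scale = z_score(0.101, 0.100, 50_000_000)  # platform-sized experiment

print(f"n=1,000:      z = {lab_scale:.2f}")       # nowhere near significance
print(f"n=50,000,000: z = {platform_scale:.2f}")  # far beyond the 1.96 threshold
```

The 50-million figure is illustrative, not a claim about any real experiment; the point is only that sample size, not effect size, is what makes such differences detectable.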
Let me offer you another quote, from a different link [1] (Nature): "Equipped with more-rigorous statistical methods, researchers are finding that social-priming effects do exist, but seem to vary between people and are smaller than first thought, Papies says."
> but seem to vary between people and are smaller than first thought, Papies says. She and others think that social priming might survive as a set of more modest, yet more rigorous, findings.
You linked the full statement, and then focused on only two words of it in your comment.
What about the linked article makes us think that this is something that the US government has the authority to step in and ban? We should skirt both the 1st Amendment and the free market over a social theory that might survive as a set of modest findings?
It's good that the courts are blocking this in the absence of a specific, credible threat. "Some effects do exist, but we don't really know what they are, and they're a lot more modest than the scare effects that people are familiar with" is not a specific, credible threat.
The threat of allowing the US government to get comfortable exercising ever more invasive control over the free market for questionable reasons, to the point of turning off or banning widely used communication channels -- that's a much more tangible threat than what I'm seeing linked in your article.
I am not talking about the dangers of the US government banning some app or service. I am talking about the dangers of the very existence of that service, ruled by what I consider quite unfriendly people. Unfriendly not to me personally, but to the way the US operates.
As usual, official justification and real reasons may be very different. The "security threat" may be direct and indirect. I tried to present a picture of indirect threat to the security of US.
> I am talking about the dangers of the very existence of that service
My point is that the indirect threat to the security of the US through priming is statistically tenuous, that there's little reproducible evidence even from optimistic researchers that priming works on a significant scale (or at all), and that even if the threat is real it's likely much smaller than the threat posed by this kind of over-regulation and government overreach. My point is that a threat this vague doesn't warrant this kind of discussion in the first place.
The link you posted optimistically describes a very limited, focused effect that is still actively being debated. Is there any accepted scientific evidence, at all, that priming in a 60-second TikTok video would have a larger effect on the average person's politics than seeing friends share fake news on Facebook?
I would like to note that your last question equates a single 60-second video with a fake news post. You are discarding the effects of repeated viewing of different, algorithmically selected videos on the views of viewers. It is an experiment being conducted right now by no less than TikTok itself.
Actually, my beef with Facebook is exactly the same: one does not control the feed; the feed is controlled by Facebook.
> You are discarding the effects of repeated viewing of different, algorithmically selected videos on the views of viewers. It is an experiment being conducted right now by no less than TikTok itself.
And is there any accepted scientific evidence, at all, that this would change whether we should be more worried about priming effects in a TikTok video than fake news posts directly shared by friends and family?
> Actually, my beef with Facebook is exactly the same: one does not control the feed; the feed is controlled by Facebook.
Our concerns with Facebook aren't related to priming. If your concern is that centralization and control over algorithms can be dangerous (especially in the hands of an authoritarian regime like China), then I agree, but I don't see any particular reason why TikTok should pose a unique danger in that regard. CraigJPerry's original comment you replied to still seems pretty on-point:
> Surely Facebook’s a bigger issue than 60-second videos, since any bad actor can target messaging at susceptible users and no one will know.
> [...] what is the method of using a 60-second video platform to share messaging with a crowd who will immediately scroll past anything heavy (or just plain stop using the app) to get to the next funny video? It seems an ultra-low-effectiveness medium for controlled messaging.
The main conclusion I have is that TikTok may pose a danger (just like any social media network), but the dangers it poses are not big enough or well-defined enough to justify this kind of intrusion into the free market. Is that something you agree with?
You keep asking for scientific evidence of the effects of a completely new phenomenon. I cannot provide that. On the other hand, I am pretty sure you cannot provide me with evidence of the absence of effects from an experiment of that magnitude.
Facebook is also doing priming, by controlling what the user sees. Heck, even television and radio back in the day were used for priming, albeit without such a direct feedback loop.
About "intrusion into the free market".
I see "free market" as a neutral or even negative thing. I do not see "intrusion into the free market" as necessarily negative thing.
I see ban of TikTok as somewhat positive thing given my view on situation. I really want to see regulatory actions on Facebook too.
> TikTok's audience in the US is politically liberal
If anything, that audience is a very interesting target for:
- trying to convert them to more conservative ideas, by feeding them with messages that play on their feelings
- trying to get them to not vote, by discouraging it
- trying to incite violence against the other group in order to destabilize the country
Especially the last one is interesting for China. The second one has (by their own account) been used effectively by Cambridge Analytica in Trinidad to influence an election by discouraging young people from voting [0]. Imagine if there were an app, mostly used by impressionable teens and young adults, and you had unlimited access to its users.
I use Facebook: in my experience it's probably 50-50, which means far more content (by duration or bandwidth) is images/video. I rarely look at the videos.
As an erstwhile small-business owner, I found it very clear that business-page posts with images always get more views, so all informational posts should include images if you want to improve your engagement scores.
Most friend posts that are popular enough to spread to me include a gallery or shared video.
People make text comments on news stories and link a news website, which then pulls in an image/video. Often the headlines/images are misleading, as one finds on HN too.
I used TikTok for six months or so, even posted a video. To me the only content was ever goofy dances, magic tricks, acrobatics, and 'pranks'. No more harmful than any other human-targeting Skinner box.
Does your wife heavily use TikTok? It's possible that different cohorts of Facebook users experience the product in different ways. I can imagine that those drawn to TikTok also make use of Insta stories or FB stories (short videos), etc.
> Facebook is primarily text, which is low bandwidth and goes through higher thinking. TikTok is high bandwidth from the start and is primarily analyzed using much older parts of the brain.
Agreed. Reading and writing are recent inventions and involve thought and deliberation. There's a reason why such skills were rare and highly valued in antiquity. It's reasonable to assume that a medium with an emphasis on text is less prone to (but not entirely exempt from) manipulation.
On a somewhat unrelated note, people like Elon Musk believe that "information overload" can be solved by increasing the bandwidth between the medium and the brain. I wonder if that's true, or whether the lower bandwidth is necessary because our newer brain (prefrontal cortex) simply isn't fast enough.
Theoretically, the CCP could abuse location services to track the movements of military personnel and civilians in a war. Since FB isn't Chinese, it is less of an issue for large-scale surveillance. Also, if the Chinese had a zero-day that they could exploit by pushing a malicious TikTok update, they could get root access to these phones and use them for surveillance on a large scale.
Sure, this is all hypothetical, but it's not unthinkable in an all-out cyber war. Nations are stockpiling zero-days and other hacks just to have an advantage when it becomes necessary.
Note: I don't have TikTok, so I have no clue whether it actually uses location services, but every app seems to do that nowadays, so I just assumed it does.
The federal government could simply order all federal employees off TikTok or any other service without legal hurdles. What is the point of banning it for the general public?
Let's say the child of a worker at a nuclear reactor site comes along for "Bring Your Daughter to Work Day". She's of course using TikTok, and guess who has that data now.
This is just an example. If I spend enough time, I can think of a whole lot more examples.
Every single secure facility I've ever been to required you to leave your phone at the gate, and this was in pre-smartphone days. The more serious ones prohibited even disk-on-keys. If you're relying on kids keeping their phones secure for your national security, you've already lost.
Let's say the nuclear reactor site hosts a "post pictures of our site to Instagram" day. Then who has the data? And then suppose that the worker gets a text from someone impersonating his wife that says, "honey, you should text me some schematics of the plant." That would also be really bad.
And then suppose that the worker adopts a Chinese daughter, but really the daughter has been a Chinese spy the entire time, and on bring your daughter to work day she excuses herself to use the bathroom but actually goes and sabotages the plant by pulling an unguarded lever in a control room, sparking WW3. What is the US doing to protect against that happening?
Maybe you think these scenarios are unrealistic or that there are simple security measures that can mitigate them, but I want to stress that these are just examples. If I spend enough time, I can think of a whole lot more examples.
That's such an odd and incredibly unrealistic scenario. Also, in your hypothetical example, there's nothing to be gained from TikTok that they couldn't already have figured out from satellite imagery.
> The federal government could simply order all federal employees off TikTok or any other service without legal hurdles.
They already have [0]. Federal employees, however, do have personal cell phones. There have been multiple cases of military information being leaked through Facebook posts [1]. Also, these people have conversations with their significant others, who also have personal cell phones. Recently, a list of PoIs leaked from a Chinese data collection company, some of whom were children of people in power [2]. It is not completely unthinkable that the phone of a child is hacked and then used to compromise the home network of an influential person, or something like that.
I'm not saying that TikTok would facilitate this; I don't know. If there's any intelligence indicating that it's a possibility, however, I fully understand that there is a response.
No, that's the Strava fitness app, which has documented cases of this. If this were the concern, you'd see a lot more concern over fitness-tracker apps that log routines and times.
Yeah. I don’t buy this argument until they ban all Chinese apps. As far as I can tell, it’s just a way to cut off younger voters’ access to information that is critical of Trump. And that’s a generous reading.
I use both Reddit and TikTok, and I would say Reddit is far more overtly liberal leaning by default, with pockets of right leaning folks. Reddit or Facebook are also much more natural meeting places which can foster discussion or even event planning. In contrast, comment threads on TikTok are kind of a mess; it's more of a broadcast platform.
The only reason Trump has been able to pull this off is the fact that minors can't vote.
If you deny voting rights to a certain group of people, politicians just don't have to care about them at all. They can do things that hurt that group, and no one is going to hold them accountable.
At this point, I believe we should stop discriminating based on age.
> At this point, I believe we should stop discriminating based on age.
If the voting age is lowered to 6 (as some have asked for [1]) or even removed altogether, what stops me from producing 5-10 kids and then having them vote for a political party that I support? That is 5-10 extra votes in just a decade. And if 1,000 families do that, that could significantly tilt the outcome of an election. By the time the children grow up to understand what their rights are and that they were being exploited by an adult, they would feel disgusted that they were used (or rather misused). You are assuming that minors have an inherent understanding of political issues at that young age. Most don't. And it is not reasonable to expect them to.
Voting should only be enabled for those who are completely aware of their rights. Children typically are not. They are not political by nature. They are playful and more interested in learning things around them. Turning them into political ballots waiting to be exploited by adults around them is the last thing we need in this already fractured World.
Yes, but if you are an adult, I can't coerce you into voting for a party of my choice. Children can be coerced. Children can be used as pawns. Schools can turn into political arenas. There are so many things that can go wrong by introducing politics at that young age. It is rare for adults to fall for peer pressure, as you will have matured by then. Kids and teens can easily succumb to it.
Do you really want your children to be exposed to political influence at such a tender age when they are already dealing with peer pressure, bullying and all other negative aspects that come with school life?
That is precisely what I am talking about: peer pressure/influence. You are just confirming my point. We already have enough influences on children from different angles: religion, race, sex, etc. I am saying that children should not be forced into these things; instead, just allow them to explore the world in their own way. Let them learn about everything they feel interested in. This is not an age to be force-fitted into a mould. This is an age where they are growing and learning. Once they are ready, they should be given their voting rights.
Giving them voting rights at such a young age will just turn them all into activists or, worse, politicians. I see so many child activists who are just being exploited by their parents to make a quick buck. Most of what these children utter on TV is just stuff they have been told by their parents. If you talk to them outside of all this media circus, you'll see that they are innocent and don't have much idea about the world except for doing what they are told.
They are just not prepared for it. The moment you go off script, the cookie crumbles. But set aside politics/activism for a moment. Do you think it is fair that a child has to go through this sort of exploitation for what should be adults coming up with solutions? Because let us face it: this is exploitation. The child needs to be in school, studying and enjoying time with friends. She will never get this youthful life back. We as adults are literally using children as props to settle our political issues. This is immoral and not right.
I do agree that this is about "controlling" younger / newly-of-age voters.
However, I'm less worried about whatever correct/incorrect information they get about Trump or politics in general, and more worried about the disturbing cultural trends that this app and others like it are amplifying. Culture can have strong effects on people's political/voting behavior, and I think that is what is being addressed (as opposed to the spying/privacy/CCP concerns, which are secondary yet used as the main reason behind this ban).
I am pretty sure that Facebook/Youtube/Twitter will be targeted next in some way, but probably not with outright bans such as what they did with TikTok.
>> I am pretty sure that Facebook/Youtube/Twitter will be targeted next in some way
Amnesty International was refused entry to observe the Assange trial. I don't know how it is in your FB search, your Twitter search, or even a Google News search, but this is harder information to find than I'd expect.
Perhaps this genuinely isn't as newsworthy as I was thinking it should be, but then I feel like I have been privy to every minute detail of Carrie Lam (and the CCP) vs. the protestors.
But they cannot buy the same data: the quantity and quality of the data they obtain themselves (through apps like TikTok and WeChat that run in the background 24/7 while collecting any information possible, such as secretly copying your clipboard, which is likely to reveal passwords and other sensitive information) is far higher than what they could pay for.
You’re also ignoring the fact that they can use their apps to easily spread propaganda and influence elections. They’ve already censored videos that criticized the Chinese government. And inside the app they have a very prominent “Covid-19” button where they can cherry-pick what information to display; they’ve, for instance, chosen to omit the fact that the Wuhan virus originated in China.
For the record, "Wuhan Pneumonia" and "Wuhan Virus" were widely used by official news coverage in Taiwan, Hong Kong, and guess what, Mainland China, before the WHO suddenly dictated all people must use "Covid-19" or else you're racist or propagandist.
This trick is very effective in erasing the obvious link between the virus and its origin.
Yes, of course the goal is to erase the "obvious link" between the virus and its origin. The thing is, people are irrational and will start discriminating in weird and unnecessary ways based on where a virus came from.
Nobody is trying to erase "history" or anything. Anybody actually interested can always read about the origin location of the virus in Wikipedia. This is all about keeping things straightforward for the otherwise uninformed.
Note that this is a standard virus naming methodology. The Ebola virus was similarly intentionally named to avoid unintentionally creating a target for discrimination.
It shouldn't matter whether you think "Wuhan virus" or "COVID" is the propaganda name; a company defaulting to the most common term for its market is not a useful sign of propaganda.
Using TikTok's Covid information as proof of propaganda is just looking at noise and pretending it's data.
> Theoretically, the CCP could abuse location services to track the movements of military personnel and civilians in a war. Since FB isn't Chinese, it is less of an issue for large-scale surveillance.
Or they could just buy that information from the phone companies or data brokers.
Facebook allows anyone to purchase targeted messaging, but it's an American-run company, which likely makes the difference: there is likely a perception (real or imagined) that FB will be willing and able to combat "enemy" messaging.
As for the value of 60-second funny videos for sharing messaging, part of the strength here may be exactly that it's a platform for short, funny videos; as long as the ideas being communicated can be slipped into a funny video in a subtle way, people hear them without even realising it (e.g. perhaps videos making fun of American leaders, undermining their authority, etc.; certainly not hard to do with certain leaders...)
Well, free press also means that you shouldn't be forced to consume media you don't like.
But media you don't like being produced at all is perfectly compatible with a free press, including dislike because of influences and pressures you disapprove of. (In some sense, it would be extremely surprising if you liked all the media being produced.)
Why just videos? Images and text posts could be used the same way, to disguise messaging/propaganda as a joke, etc.
FWIW I don't think Facebook is likely any better in this area than TikTok, or indeed any social network or platform based around user generated content. It's likely always possible to create popular content that subconsciously influences people's thinking.
One thing that might separate Facebook from TikTok (for some people, not necessarily myself) is the question of "do I trust who is serving this content?" - in these days of algorithmically generated feeds, these companies can likely get away with more in terms of inserting specifically targeted content etc. without it looking suspicious, and have people assume it's showing because it's organically popular.
Exactly. People don't realize the subtle implications of watching TikToks: they do impact you and your thoughts, and in any case China censors so much.
> with a crowd who will immediately scroll past anything heavy (or just plain stop using the app) to get to the next funny video?
The same can be said for Imgur, yet it's very effective at presenting political content to a young audience which is, as you say, scrolling past. What makes it to the top is self-regulating (by user upvotes), and still Imgur has plenty of political content on the front page (Twitter or Reddit cut-outs with links to both OC and "deeper reading"). So I can't see why the same wouldn't work on TikTok. (Oddly, one video sharing a make-up tutorial contained anti-Chinese content and criticism of how the Uyghurs are treated, and it went viral. Not sure if this is proof, but it is certainly evidence that this isn't impossible.)
>Surely Facebook’s a bigger issue than 60-second videos, since any bad actor can target messaging at susceptible users and no one will know.
TikTok relies much more heavily on an algorithm than FB does, though. FB primarily pushes content posted by your friends, while TikTok pushes anything that might interest you. They can subtly influence you: if they hate Trump, they can push joke videos about tax returns and his sexual assault allegations or something. In other words, they can try to create an unconscious association between Trump and evil in your brain.
Another issue is that, if TikTok becomes crucial for advertisers, we give China power over those companies. Hiring people who have publicly expressed anti-Chinese opinions, or donating to the wrong politicians, might suddenly get you banned from the platform.
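The contrast being drawn here (friend-graph feed versus pure interest-ranked feed) can be sketched in a few lines. Everything in this sketch is hypothetical: the `Video` type, the engagement scores, and the `topic_boost` knob are illustrations, not TikTok's or anyone's actual system. The point is only that in a purely interest-ranked feed, a single operator-controlled weight can quietly reorder what a user sees.

```python
from dataclasses import dataclass

@dataclass
class Video:
    topic: str
    predicted_engagement: float  # model's estimate that this user watches to the end

def rank_feed(candidates, topic_boost=None):
    """Interest-ranked feed: score is predicted engagement
    plus an optional operator-controlled per-topic boost."""
    boost = topic_boost or {}
    return sorted(
        candidates,
        key=lambda v: v.predicted_engagement + boost.get(v.topic, 0.0),
        reverse=True,
    )

candidates = [
    Video("dance", 0.90),
    Video("politics", 0.60),
    Video("cooking", 0.70),
]

organic = rank_feed(candidates)
nudged = rank_feed(candidates, topic_boost={"politics": 0.25})

print([v.topic for v in organic])  # ['dance', 'cooking', 'politics']
print([v.topic for v in nudged])   # ['dance', 'politics', 'cooking']
```

Nothing in the ranked output reveals the boost to the viewer, which is what makes this kind of nudging hard to detect from the outside.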
> FB primarily pushes content posted by your friends
This used to be the case, but in my experience it is very, very far from the case now. Most Facebook newsfeed content is algorithmically driven too (to the point that it has mostly put me off using Facebook and pushed me to Instagram, where most content is still from people I explicitly chose to follow).
>TikTok relies much more heavily on an algorithm than FB does, though. FB primarily pushes content posted by your friends, while TikTok pushes anything that might interest you. They can subtly influence you.
I think it's very naive to think that FB is not doing that.
The advertising angle is a very good one. US media and sports companies are already self-censoring because of the economic retribution they might face on the mainland.
TikTok's user base is the generation most malleable to propaganda, in a way that can undermine an entire generation. TikTok also has a history of recording telemetry that's extremely alarming.
Look into what is mined. Look into how algorithms get optimized to influence behavior. See a recent paper on using social media to shut down a power plant. Memetic warfare is real, and it's been going on in cyberspace for a while now. Every single bit of information is manipulation.
Every social network is a memetic warfare piece and TikTok is a primary target.
Lol at the downvoting. Bunch of brainwashed people here already.
> TikTok's user base is the generation most malleable to propaganda, in a way that can undermine an entire generation.
You don't have to look far to see that this hypothesis won't stand up to any scrutiny. Not only are TikTok users typically in the age range least likely to vote, but everyone is susceptible to propaganda and lies. There is a lot of US-produced propaganda out there aimed at the older generations, which is being lapped up by a public thirsty for negative partisanship and divisive rhetoric.
If the TikTok generation has been undermined, then it's by the older generations, who benefited from more favorable situations, grabbed the wealth and power for themselves, pulled the ladder up, and now refuse to see how any of this is an issue and will fight to maintain an unequal situation as long as it favors them.
Not voting is a behavior pattern. Any behavior can be influenced by information propagation. The behavior of not voting has its own second-order effects. It is also rooted in not believing in the structure one is participating in.
But that's less my point; it's more about the methods and attack vectors presenting a larger surface area now than ever before.
“Chinese military scholars argue that their nation has a long history of conducting "psychological operations", a phrase that connotes important aspects of strategic deception and, to a certain degree, what the US Department of Defense portrays as perception management. For example, several articles published by the PLA's Academy of Military Science (AMS) journal Zhongguo Junshi Kexue, examine psychological warfare and psychological operations mainly as a deception-oriented function of military strategy.”
My optimistic side says that if 60 second short videos were ideologically useful, CIA (or your favourite "They") would have weaponised them back in the 1960s[1].
My pessimistic side says they probably are, and maybe they even did.
The CIA most definitely has a history of manipulating information. But at that point, how do you know what to trust at any level? Evidence can be provided for anything: information and disinformation, all at the same time. This is only increasing. How will anyone be able to sort themselves out? I'm not so sure it's possible.
Also, 60 seconds of video is a ton of information. And it's not just one video at a time: the total session time per active user is way higher than that. These are behaviors where users use the app all day. Their entire worldview is based on the content within the social network experience.
The Arthashastra recommends never acting on intelligence which hasn't been confirmed through at least three independent channels. Russian, Swiss, and US secondary sources agree on the underlying events of that scenario. I guess if the Soviets really had had a highly placed mole like Stierlitz, they could have done more: perhaps subtly encouraged top Nazis first to prioritise ideology over pragmatism and then to get into speed, into the occult, and into the bunker.
Nighttime, 1944 Berlin: A ushanka-wearing man with skis and a parachute creeps silently along the hallway of an apartment building. He raps quickly on one of the doors.
A disheveled German in nightcap and house shoes opens after a few minutes.
"The eagles fly over the campfire. Repeat: the eagles fly over the campfire!" whispers the man in the hallway.
"Sorry," says the german, "I'm Otto Stierlitz the plumber, born 1904. You want Otto von Stierlitz the spy, born 1899. He lives up on the third floor."
> if 60 second short videos were ideologically useful
A single 60-second video is not materially useful. But steady, repetitive exposure to the same or similar messaging videos is the foundation of TV advertising. There is likely some useful weaponization in there somewhere, once the videos can be made without the assistance of a Madison Avenue crew and contract, and the consumers are pecking at the screen for more videos for long stretches of time like a pigeon in a psychology experiment.
Someone is going to try to achieve controlled/directed opinion shaping with that kind of mass-population behavior, in a relatively regulation-free context compared to conventional centralized media. It has to be too tempting a centralized target to leave alone. Opinion shaping in these channels is already being attempted commercially. I'd like to hear from domain experts whether the metrics bear out that such efforts actually work in the commercial sector, and whether other domain experts in politics and defense think that experience warrants those sectors dabbling in attempts of their own.
What you’re describing is exactly what’s going on. Cambridge Analytica was the smoking gun. That unraveled the sophistication of the technology being used to mine the internet and control sentiment.
I wasn't asking for a citation on whether three-letter agencies have weaponized social media. My issue was the "most malleable" language, which belongs in a boomer comic rather than on HN.
So why not focus on Facebook? Why not do something meaningful, anything, with a company which is actually beholden to US law? Seems like targeting the biggest bang for the buck would be a good place to start and THEN move on to others, no?
It’s not mutually exclusive. Facebook is complicit. TikTok, however, is a unique entity of its own. The demographics that use it are of critical interest to persuade; additionally, the data is directly at risk of being used for information warfare amid increasingly elevated state cyber-warfare tactics.
Follow the trail of information. Look into what’s going on across the stack, hardware to software to networks. There’s been a massive shift over the past couple of years.
Facebook has been used for misinformation for a significantly longer time. Usually I get annoyed when people say something like "but what about X," but in this case it's one company that has repeatedly been proven to manipulate, distort, etc. (Facebook) vs. the possibility that another company might do the same.
It’s not merely a possibility that another company is doing it; it’s about the state actors and otherwise unknown parties involved. In the end, no single company is responsible: the aggregate whole of all available data, plus the mechanisms to algorithmically control content, is the sophistication of the manipulation going on today.
Indeed. Again, there’s no question about the attack vectors; however, dismissing it as “pot and kettle” or “they do it too” solves nothing and contributes to making the situation worse. Every comment I make is being downvoted, yet there’s a critical point that’s either dismissed as insignificant because others are doing it too, or divorced from the ability to reason about information warfare and its implications. Oh well, at least I tried.
That’s not really what’s happening, though. People are reading your premise that TikTok is a viable mechanism for influencing the behaviour of its users; they are then pointing out far bigger examples of the capability you cite (Facebook, Google, Twitter), to which you say that’s different and we need to focus on TikTok "for good reasons, not because of China, but I can't elaborate on the good reasons."
I don’t think they’re any more or less susceptible, apart from the potential net vectors exposed. A boomer typically has traditional media as a source of information, which shapes their world view. Those channels have their own programming and narratives.
The zoomers are entirely on internet platforms, which allow a higher degree of manipulation. Every single person is mined against. Every single person is optimized against. There’s more room for coercion through algorithmic targeting of demographics, and through per-individual optimization with machine-learning tech.
If the typical “hacker” can launch a botnet with GPT-2 and automate conversation and thread demographics based on interactions, putting a “group” into a human-in-the-loop pattern with a goal ... imagine what state actors are doing.
> If the typical “hacker” can launch a botnet with GPT-2 and automate conversation and thread demographics based on interactions, putting a “group” into a human-in-the-loop pattern with a goal ... imagine what state actors are doing.
It scares me every time I think about how little $$ you need to reach, e.g., Reddit's front page.
It’s incredibly alarming what one can do with very little resources.
It’s downright horrifying or exciting when realizing what one can do with more than average resources.
In a way, that’s why I think the battleground is more “weird” than ever. It’s not just state actors; it’s factions of organized and convergent groups using memetic warfare. Some groups naturally converge toward each other. Take the anons and various sects, then overlay them with state actors and your smaller groups. Weird, nameless stuff of massive influence and self-organization.
Depending on whether you're counting daily or occasional usage, it looks like somewhere between 50+% (daily) and 75+% (occasional) of boomers are on Facebook. The numbers for YouTube are a bit lower, but not by much. For comparison, TikTok's Gen Z share is about 50%.
That is more than enough boomers being algorithmically swayed to alter an election. As has been shown (Cambridge Analytica, etc.), Facebook's segmentation arguably offers more power to target its users than TikTok's does.
Not the same at all, given the sophistication of targeting allowed.
Yes, ads have always been this. But no, it’s never been like this.
All that data on the internet is being used to train algorithms to sway global sentiment. It manifests as bots, ads, interactions, content. It’s mind-blowing.
That’s already been known, and on US soil. Now it’s the younger generation; that doesn’t make it mutually exclusive. If anything, it shows a complete onslaught on all fronts to undermine the stability of a nation in a way that was never possible before mass media and the internet. Broadcast is entirely unidirectional.
You’re absolutely correct that brainwashing happens on those mediums. It’s entirely interesting on one hand and entirely alarming on the other. At this rate I’m not sure what the world at large can do, given the technology out there to manipulate the populace. It leads to some very critical questions about how to exist.