Can someone please explain to me what the security threat is? I still haven't understood it.
If it's that China will know more about Americans through data ingestion, they can already do that. Americans willingly upload every little bit of their lives to YouTube, Twitch, and tons of other platforms. Not to mention that Temu is the most downloaded shopping app in the US. And what kind of attack can TikTok information provide that other publicly available online information scraped from other social media sites cannot?
If it's that China can show American citizens a curated list of topics and videos to make Americans think certain ways or vote certain ways, then I absolutely think that's a 1A problem. That's the exact same rhetoric tons of repressive Communist countries used and still use to deny people the freedom to read books from outside, watch movies from outside, read newspapers from outside, etc. And I'm never going to accept the idea that information (even full-on lies) is so dangerous that people shouldn't be allowed to see it.
I *think* the main national security argument is that TikTok can tweak the algorithm to incite discord and inflame conflicts within the US. They can do this far more effectively from their own platform than by doing it on other social networks via bots and fake accounts.
I tend to come down with the same opinion as you though. It's all still essentially a first amendment issue and the negatives I listed aren't going to disappear if TikTok goes away.
Yeah, unfortunately "inciting discord" and "inflaming conflicts" is very well protected by the First Amendment. It's the entire negative consequence of truly allowing free speech (not counting movie theater fire exceptions). I guess we're gonna throw that out all on a whim in 2 weeks...
> Can someone please explain to me what the security threat is? I still haven't understood it.
Facial recognition database and tracking people at a global scale.
With advances in computing power and unsupervised machine learning, correlating a person with the person's surroundings is now largely or fully automated – what used to take forensic experts weeks, months or years of hunting down a person by poring over indirect clues in photographs or CCTV camera footage is now almost instantaneous.
Remember, TikTok is not just topless teenagers twerking on camera; it is used by people to take short videos at random locations (and the locations are already easy to classify and account for). So if you happen to be a dissident (or a person of interest in general) hiding from the CCP, and you walked past a TikTok user shooting a short video and got into the shot, your identity is matched up and your current location is revealed automatically, in near real time or with a short delay.
I have almost no doubt that people in TikTok videos are already digitised, and their faceprints are sunk into a gigantic database behind the Great Firewall. It is easy to project what other kinds of nefarious abuse are possible when such datasets are available.
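To make the capability being described concrete, here is a toy sketch of the matching step such a system would rely on: comparing a face embedding extracted from a video frame against a database of stored faceprints by cosine similarity. Everything here is illustrative – the embedding extraction itself (typically a neural network producing a 128–512-dimensional vector) is assumed already done, and the names, vectors, and threshold are made up for the example.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_face(query, database, threshold=0.9):
    """Return the identity whose stored faceprint is most similar to the
    query embedding, or None if no candidate clears the threshold."""
    best_id, best_score = None, threshold
    for identity, embedding in database.items():
        score = cosine_similarity(query, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Toy faceprint database: identity -> embedding (real systems use far
# higher-dimensional vectors and approximate nearest-neighbor indexes).
db = {
    "person_a": [0.9, 0.1, 0.2],
    "person_b": [0.1, 0.8, 0.5],
}
print(match_face([0.88, 0.12, 0.21], db))  # prints "person_a"
```

At scale the linear scan would be replaced by an approximate nearest-neighbor index, which is exactly what makes the "near real time" part plausible once the database exists; the hard part (and the contested claim in this thread) is the false-positive rate of the embeddings themselves.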
> With advances in computing power and unsupervised machine learning, correlating a person with the person's surroundings is now largely or fully automated – what used to take forensic experts weeks, months or years of hunting down a person by poring over indirect clues in photographs or CCTV camera footage is now almost instantaneous.
I don't know of any system capable of doing what you describe. But if such a system existed it could just use publicly available videos from Facebook and Instagram. It wouldn't need its own social media platform to feed it data.
Such a system would be built on orders of the state and would not be publicly or widely advertised, especially in mainland China. Russia, for instance, has built SORM (deep packet inspection of ISP traffic at a nationwide scale) to monitor all internet activities of its citizens (on demand, not constantly), and its technical architecture is not widely known.
Having a direct feed of videos being uploaded from the user's device is also advantageous, as the higher resolution of the original video will provide more detail before it is recompressed to a smaller resolution for long-term storage. Most importantly, Instagram and Twitter won't allow uncontrolled access to the content they host, whereas a Chinese company simply does not have a choice and has to serve the content up when the state demands it.
Until proven otherwise, it is safer to project that such a system exists.
Deep packet inspection is a completely different problem from the kind of facial recognition of people in video backgrounds that you envisage. Videos uploaded from cell phones are already MPEG-4 compressed and don't need re-encoding to be served. While it is true that social media companies don't allow unfettered access to their data, server farms for scraping web sites are a dime a dozen. I don't agree that it is safe to assume that a system there is no evidence for exists. It's one step removed from Chinese mind control viruses.
I don't think China is going to find its dissidents in TikTok videos any time soon. The false positive rate of even the best facial recognition is way too high for tracking individuals globally.
Why is a facial recognition database dangerous though? The vast majority of Americans publicly upload pictures and videos of their faces online in a way that it would be trivial for any nation state to gather them up.
Everything you've pointed to can be done already by mining the publicly available treasure trove of YouTube Shorts, Instagram Reels, etc. A pretty small team can scrape these sites and do what you're insinuating.
And lastly, I still don't see the danger of releasing this info. I mean, can't any foreigner travelling on a tourist visa record public vistas for 30 days and essentially do the same thing?
The fact that a foreign nation whose aims are not aligned with the US controls the dominant social network, with the CAPABILITY to use that position to influence the US, collect data, shape the minds of youths and future voters, etc. – that is enough for me.
>the CAPABILITY to use this influence to influence the US
But this is Communist-era logic, used in some countries to this day to ban "foreign" and "corrupting" materials. Forgive me, but if it smells like repression and tastes like repression, I think it is repression. Shaping the minds of youths is not the exclusive purview of the US political class. That's some 1984 logic...
"Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances"
"or of the press" sounds like it applies here, but does the 1A also cover foreign companies?