TikTok, Data Privacy, and Social Media Changes
Intersections of security, freedom, and responsibility
No matter how you feel about TikTok, or social media in general, the government passing a law that specifically targets one company and bans its app for all Americans is a big deal.
If the ban happens, it will be unlike anything we’ve seen before.
Have social media platforms ended (occasionally abruptly) in the past? Yes. Vine’s end, for example, was announced in late 2016 and went into effect in early 2017.
But for the most part, once-popular social media apps fall out of favor due to business decisions, changing public interest, or competition from Facebook.
A forced closure/ban of a popular app used by 170 million Americans hasn’t happened before.
What makes TikTok unique, beyond its algorithm, is that it is a near-perfect example of “right time, right place”.
Although TikTok was incredibly popular with teens in 2019, it might never have grown beyond that audience if it weren’t for the start of the COVID pandemic. Suddenly, many people were stuck at home with a lot of time on their hands and a need for distraction.
Where LinkedIn was professional, Facebook was for friends and family, and Instagram was still highly curated and polished - TikTok brought a fun, new energy to social media and a way of connecting with strangers at a time when many were feeling isolated.
Even better, the barrier to entry was almost zero - turn on the camera and start talking, teaching, dancing, lip-synching, or whatever else was popular at the moment.
Fast-forward five years, and TikTok has become a powerful player in social media - not just in users, but also in influence.
Now, with a potential ban going into effect in just a few days (on January 19th), we may be looking at the biggest shift social media has ever seen.
A TikTok ban will have both financial and cultural impact
Since it gained popularity, TikTok has had a huge economic impact in the US.
From small businesses to TikTok Shop to affiliate marketing to influencer marketing to creators being paid through the TikTok Creator Fund, and probably several other avenues I’m not thinking of - TikTok generates income for millions of Americans.
According to TikTok, there are more than 7 million small businesses on the app that contribute $24B to US GDP and employ nearly 250k people.
There are countless stories of small businesses gaining traction, GoFundMes being fully funded, and brand-new careers being established, all because of the connections and reach someone had on TikTok.
BookTok is probably the most famous example of this economic impact, but it is far from the only one.
Independent bookstores (and big box stores, too), authors (both traditional publishing and indie), libraries, and readers have all benefited from the popularity of BookTok and its ability to connect with readers and drive sales.
TikTok’s algorithm is incredibly adept at connecting users with content that interests them, so they are suddenly connected with multiple people reading or talking about the same book, genre, or trope. It is like having the ability to join a bustling corner of conversation whenever you want.
Intertwined with the economic impact, TikTok has also had a massive cultural impact in the US.
While many users joined during the darkest days of the COVID pandemic and found community and connection, it has also been a discovery vehicle for books, music, and art as well as history, news, education, and political activism.
Even Instagram, home of Meta’s TikTok copycat Reels, benefits from TikTok trends that eventually end up on that platform weeks or months later.
If the ban happens, the economic impact of millions of Americans losing a way to earn extra income (or their entire business), the lack of discovery and reach for small businesses, and the lost sales for bigger brands could be catastrophic.
Who benefits from a TikTok ban?
TikTok has been pulling users from Instagram, Facebook, and YouTube since it gained popularity in early 2020.
In the online world, users mean eyeballs and attention for advertisements, which is how platforms like Meta and YouTube make money.
To compete with TikTok (and win back eyeballs), Instagram and YouTube both released short-form video functionality on their respective platforms.
Reels on Instagram was released in late 2020 and YouTube Shorts was released in the US in early 2021.
At this point, Meta, which owns Facebook, Instagram, and WhatsApp, has a near-monopoly on American social media platforms.
It got there through a long history of purchasing potential competitors, as well as companies with technology it wanted to integrate into the Facebook platform.
Instagram and WhatsApp survive as stand-alone apps under the Meta umbrella, but over the years Facebook also bought up FriendFeed, Friendster’s patents, and many other companies.
However, Facebook’s 2016 attempt to purchase Musical.ly (at the time a lip-synching app, but one that would eventually become TikTok) fell through, and Musical.ly was instead sold to ByteDance.
Since buying and absorbing TikTok into Facebook (or shutting it down) wasn’t an option, Meta hired a PR firm in 2021/2022 to smear the app. The firm planted fake stories about concerning TikTok trends, hyped the danger of a foreign-owned app, and warned that children were being put at risk. It also worked on creating a positive overall narrative about Facebook.
When Vine closed down in 2017, there wasn’t a clear place for users to move to, either in terms of audience and momentum or in format. Although some Vine creators moved to YouTube, it wasn’t an easy shift, and not all of them found success since YouTube at that time was focused on long-form content.
However, if TikTok shuts down, Meta and YouTube are the logical destinations: they are the next most established platforms for short-form video and the most likely places TikTok users (both creators and audiences) will go.
Data Privacy and Protection
Part of the government’s argument against TikTok is that it is scraping user data, which makes it a national security risk.
I think it is disingenuous to treat TikTok’s data collection differently than Meta, Google, and other apps when they all collect similar data and all do nefarious things with the information they collect.
Unfortunately, Americans don’t have comprehensive data privacy protections at the federal level the way other countries do. The best we have is a patchwork of state-level protections that are kind of a mess.
When it comes to protecting user data, none of the big tech companies are good at it.
Meta has a history of big data leaks that impact Americans.
As does Twitter.
As does Google.
Most big companies have issues with managing user data and privacy properly.
Even worse, there are reports of active spying on users from social media platforms, streaming services, and others.
Going beyond social media, even our medical records, which should have even higher standards of protection, are not adequately safeguarded.
And our online shopping, social media activity, and website visits are tracked, packaged, and sold (and resold) to companies whose job it is to sell us more things. This information can also be sold to anyone else who wants to learn about what we buy, how we shop, where we go, and other aspects of our day-to-day lives - and this is all from American companies.
As a marketing professional, I’ve always said that if users really understood how much of their data was collected and how it was used, they’d log out, burn their laptops, and live off-grid in a yurt.
The best-case scenario of all of this tracking and selling of your private information is that some company wants to sell you shoes and really wants to make sure you see their ad (across several websites). And because of our lax data privacy laws, even when we tell them not to track us, they often do anyway.
But it gets darker and more invasive quickly, and our options as users to protect ourselves are limited - other than logging off completely, which isn’t practical for most people.
TikTok/ByteDance isn’t a perfect company and I’m sure they are also packaging and reselling user data in multiple ways - just like American companies.
Influence and Manipulation
Social media users have always been susceptible to fake news and misinformation (and before social media, it was email chains). The ease and speed at which we can reshare and comment on posts with what is essentially global reach (if our profiles are public) is an issue.
Even if our reach is much smaller, it makes sense that if friends and family see our posts, they may assume it is true because it is coming from someone they know, or they think we’ve done the research before sharing.
According to this study from 2022, more than half of participants reported sharing news stories without verifying the facts.
Beyond that, the growing mistrust of “mainstream news” means users are going to their own trusted sources, often on social media, which likely have a political slant of some kind.
I don’t know that unbiased news exists anymore. For example, Sinclair Broadcast Group, a conservative media company, owns over 200 local news stations and once required all of its anchors to read an identical script (it’s creepy).
Jeff Bezos, founder of Amazon, also owns the Washington Post and recently killed an editorial cartoon criticizing his relationship with Trump. He also blocked the paper’s endorsement of Kamala Harris in the 2024 presidential election.
When it comes to social media, misinformation and manipulation are incredibly easy to spread.
There’s evidence that a Russian-backed media company paid right-wing influencers millions of dollars in 2024 to make videos designed to spread division on immigration, foreign policy, and other topics. Unlike the Sinclair Broadcast Group example above, the scripts weren’t identical, but the talking points all carried the same message.
Russia was also accused of creating fake news sites in 2024 with the same goal.
There are reports going back to the 2016 election about Russians using social media to attempt to influence the election by encouraging users to not vote, creating rumors about voter fraud, and more.
Was this all possible before social media?
Yes, but every social media platform struggles to manage fake news and misinformation.
They should all be held to a higher standard because of the inherent trust (earned or not) many users put in the platforms.
So it is somewhat ironic that the week the Supreme Court heard arguments about the potential TikTok ban is the same week Mark Zuckerberg, head of Meta, announced that Meta would be removing fact-checkers from its platforms.
I don’t know what the long-term solution is, and I don’t know that anyone in power has the political will to make changes, but focusing on a single platform isn’t the answer to protecting Americans’ data privacy.