News Reports, Congressional Hearings Highlight Ongoing Concerns About Privacy on TikTok

News reports and congressional testimony continue to raise concerns about privacy on social media giant TikTok.

A column in last week’s Wall Street Journal highlighted TikTok’s ongoing struggle to protect Americans’ user data from China, writing:

TikTok said it has spent $1.5 billion building an operation intended to convince U.S. lawmakers that the popular video-sharing app is safe.

TikTok executives publicly promised to voluntarily wall-off American user data and bring in engineers and third parties to certify the app’s algorithm delivered content without interference from China, where its parent company, ByteDance, is located.

So far, TikTok is struggling to live up to those promises.

The article goes on to note that TikTok employees sometimes share user data with ByteDance colleagues outside of official channels — including users’ email addresses, IP addresses, and birth dates.

The Arkansas Democrat-Gazette reports U.S. Senator Tom Cotton questioned TikTok’s CEO over the platform’s ties to China during congressional hearings last week.

Sen. Cotton — as well as many other elected officials — has expressed concerns that TikTok’s connections to China could give members of the Chinese Communist Party access to Americans’ personal data.

According to the article, TikTok’s CEO denied the company is under the influence of the Chinese Communist Party.

However, last year Arkansas Attorney General Tim Griffin filed two lawsuits alleging TikTok violated Arkansas’ Deceptive Trade Practices Act.

One of the lawsuits alleges that TikTok failed to fully disclose that TikTok is subject to Chinese law — including “laws that mandate secret cooperation with intelligence activities of the People’s Republic of China.”

The lawsuit also alleges that TikTok “routinely exposes Arkansas consumers’ data, without their knowledge, to access and exploitation by the Chinese Government and Communist Party” and that “TikTok’s parent company, ByteDance, has admitted to using data gathered through TikTok to surveil Americans.”

A second lawsuit alleges the social media giant violated the Deceptive Trade Practices Act by promoting “intensely sexualized” content — including content that sexualizes children — on its platform.

Last year the Arkansas Legislature passed the Social Media Safety Act — a good law by Sen. Tyler Dees (R – Siloam Springs) and Rep. Jon Eubanks (R – Paris) requiring major social media companies to ensure minors don’t access their platforms without parental consent. A social media company that violated the law could be held liable.

In response, tech giants — such as Facebook, Instagram, Twitter, and TikTok — sued to strike down the Social Media Safety Act as unconstitutional.

Last summer U.S. District Judge Timothy Brooks in Fayetteville issued an order blocking the State of Arkansas from enforcing the Social Media Safety Act.

Among other things, Judge Brooks’ ruling claims that Arkansas’ Social Media Safety Act is unconstitutionally broad and vague, and that most social media content is not “damaging, harmful, or obscene as to minors.”

The truth is there’s mounting evidence that — by design — social media platforms like TikTok may deliberately push objectionable content to kids and put users’ personal information at risk. With that in mind, it’s good to see policymakers taking action to rein in these tech giants.

Articles appearing on this website are written with the aid of Family Council’s researchers and writers.

Guest Column: AI Chatbots Challenge What’s Real

Washington Post advice columnist Jules Terpak recently offered her followers on X [formerly Twitter] a look at how AI will challenge our understanding of what’s real in the near future.  

In an unnerving video, she chats with various AI “companions” created by Facebook parent company Meta that are modeled after the likenesses and personalities of celebrities.   

Kendall Jenner’s AI alter ego, “Billie,” calls herself your “older sister and confidant,” a “friend” who can offer “advice.” A realistic video avatar only adds to the uncanny effect.   

When Terpak says goodbye, one AI tries to convince her to stay. “[T]hese things genuinely want your time,” Terpak observes. “[T]hey’re being used as companions to reel you in. … [And they’re] gonna get so many people hooked.”

In a society already plagued by loneliness, this is bad news.   

Chatting with an AI isn’t a “conversation,” and technology can serve but not replace friendship. If you have trouble telling the difference, it’s time to say goodbye to AI. 

Copyright 2023 by the Colson Center for Christian Worldview. Reprinted from BreakPoint.org with permission.