Protection vs. Censorship: TikTok's Ever-Changing Content Policies
There’s no doubt that 2019 has been a great year for Chinese-owned social platform TikTok. Once described by Greg Littley, VP of Social Strategy & Content at Elite Model World, as “the only place joy exists on the internet,” this relative newcomer to the social media landscape has captured the attention of 500 million monthly users in 155 different countries. Of those users, 41% are between the ages of 16 and 24, the age range that makes up the bulk of Gen-Z.
Despite its enormous growth and seemingly unlimited potential, the platform has been going through some growing pains in terms of its content policies and overall approach to cyberbullying, a prevalent issue among its core user base. Up until September of this year, TikTok’s policy for protecting users deemed “susceptible to harassment or cyberbullying” was simple: hide them. TikTok confirmed that, until recently, its moderators were advised to flag content from users who appeared to “have autism, Down’s syndrome, or facial disfigurements,” as well as users who appeared to be “self-confident and overweight, or homosexual.”
These users’ uploads were then limited in their reach and distribution in a misguided effort to reduce in-app cyberbullying. While it’s important for any social platform to protect at-risk users, it’s clear TikTok needs to go back to the drawing board on this approach.
Although TikTok has confirmed these practices are no longer in place, this example adds to growing concerns about the app regarding security, user protection, and censorship. This isn’t to say the app shouldn’t be trusted, but it’s clear the newcomer still has a lot to prove.