Last Updated on December 5, 2019
The popular social networking app TikTok, which has garnered more than 1 billion downloads and whose individual videos reach up to 5.5 million people, has come under fire for suppressing content created by obese, LGBT, disabled, autistic, and other creators deemed at higher risk of cyberbullying.
According to Slate, the policy listed examples of users “susceptible to bullying or harassment,” including people with facial disfigurement, autism, and Down syndrome, as well as “Disabled people or people with some facial problems such as birthmark, slight squint and etc.”
The company made the admission after the German site Netzpolitik reported that TikTok moderators were asked to judge, from 15-second videos, whether certain content creators were more prone to bullying than others.
According to Netzpolitik:
Recognizing autism based on 15 seconds of video
The rules cause irritation on a very practical level: how is a moderator supposed to recognize whether someone is on the autism spectrum from 15 seconds of video? This instruction is one of several incomprehensible rules that are as confusing to the moderators themselves as to outsiders, our source at TikTok told us.

Even more fundamentally, however, the directive shows ignorance of the debates about the visibility of people with disabilities in the media. These debates have been taking place over the past few years, driven mainly by the people affected themselves.

While activists are calling for a barrier-free Internet and for visibility, TikTok has deliberately put barriers in place, without those affected suspecting anything.
If the creators were deemed to fit the “susceptible to bullying or harassment” criteria, moderators were instructed to flag these accounts as vulnerable.
These flags were designed to stop the affected videos from being shown to audiences outside the creators’ home countries. In some cases, the videos wouldn’t appear in foreign users’ feeds at all.
Netzpolitik obtained a list of flagged users, including creators with and without disabilities, whose bios contained hashtags such as #fatwoman or #disabled, or who displayed LGBT rainbow flags or other symbols associated with minority groups and activist movements.
In an age of diversity and inclusion, especially in the technology industry, missteps such as these, whatever their intentions, can be perceived as malicious.