Last Updated on November 23, 2021
Shortly after Kyle Rittenhouse was charged with murder following the self-defense shootings of which he was ultimately acquitted, Facebook moved to wipe any positive mention of the teenager from the company’s platforms. This included links to Rittenhouse’s legal fund and the sharing of any evidence that proved his innocence, of which there was an abundance.
In order to do this, Facebook likely exploited a loophole that allowed them to skate around their terms of service and selectively moderate content. Internal documents shared with National File by Facebook whistleblower Ryan Hartwig shine light on these practices.
Hartwig worked on Facebook’s content moderation team while employed at a company called Cognizant from 2018 to 2020, until he eventually blew the whistle after realizing the platform’s content moderation efforts pushed political agendas and punished those who disagreed. Hartwig now says he believes he knows the mechanisms Facebook used to purge all positive mention of Kyle Rittenhouse.
According to Hartwig, Facebook most likely branded the Kenosha shootings as a “mass murder,” then used that designation to purge pro-Rittenhouse content under the company’s “Dangerous Individuals and Organizations” policy.
“In an effort to prevent and disrupt real-world harm, we do not allow organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on Facebook,” reads the policy rationale.
Facebook assesses organizations and individuals both online and offline in order to gauge the likelihood of their causing real-world harm. Groups that fall under the dangerous organizations policy include terrorist organizations, “hate organizations,” organized crime syndicates such as drug cartels, and multiple-victim murderers.
Though Hartwig is unable to pinpoint Rittenhouse’s particular designation as a violent criminal, as he blew the whistle and left his job before Rittenhouse acted in self-defense, Hartwig says the actions taken against pro-Rittenhouse content lead him to believe Facebook made the designation. “I suspect they added Kyle Rittenhouse to that list,” Hartwig said.
One possibility is that Facebook stretched its definition of “mass murderer” to include Kyle Rittenhouse. According to the policy, a homicide that results in three or more casualties is labeled a “mass murder” by Facebook. Homicide suspects who meet these criteria can then be subject to enforcement under the dangerous organizations policy. As a result, any Facebook post that rationalizes, defends, or praises the actions of the individual is subject to removal.
On the flip side, content that negatively portrays an individual subject to enforcement under the policy will be permitted. Even calls for violence against a flagged individual or group are allowed; Facebook otherwise removes calls to violence, making targets of the Dangerous Individuals and Organizations policy the sole exception.
Since Kyle Rittenhouse likely received this designation, said Hartwig, the company was able to selectively remove or allow content based on how it fit Facebook’s arbitrary, often partisan criteria.
Users have reported this kind of treatment from Facebook since Rittenhouse’s name first surfaced in the media. Last year, National File reported that some of Rittenhouse’s defenders had posts removed for “nudity or sexual activity” despite the posts featuring neither.
The burden of proof for Facebook moderators to make these designations is far lower than what a court of law requires, Hartwig explained. Content moderators don’t actually have to obtain court documents in order to designate individuals or organizations as violent.
Instead, moderators can use accusations from media reports as justification for enforcement. National File asked Hartwig if the news sources used by Facebook were carefully curated, and Hartwig replied that content moderators simply use the Google search engine, which has been accused of shadow banning conservative sites in its results.
This, explained Hartwig, creates a loophole whereby partisan Facebook moderators can cherry-pick news stories that fit their preferred narrative, then use those stories as justification for placement on the dangerous individuals and organizations list.
Not long after the shootings, Rittenhouse’s Facebook and Instagram accounts were terminated and his name was blacked out from search results. The company also promptly removed any praise or defense of the teenager’s actions, no matter the evidence provided.
It is likely that this was a result of Rittenhouse being classified as a violent criminal under the policy despite not fitting the definition of “mass murderer” or belonging to a designated group.
Other dissident right-wing figures, such as Alex Jones and Tommy Robinson, have been subject to similar enforcement. According to Hartwig, the company is selective about which groups and ideologies it labels as extremist.
For example, the company remains “hyper vigilant” with regard to alleged white supremacist groups and associated threats. Meanwhile, Antifa violence was never subjected to the same scrutiny, despite documented proof of the group organizing violent demonstrations.
“Facebook’s internal policies are very subjective as to the definition of who is considered a dangerous individual,” Hartwig told National File.
Rittenhouse was acquitted of all charges last week and has since said his verdict is proof that the right to self-defense remains intact in the United States. In an interview with Tucker Carlson, the teen suggested his lawyers are determining whether to take legal action against politicians, news networks, and other organizations that defamed his character.