FOLLOWING VoteWatch’s request for information from Google and Patreon about why more is not being done on their platforms to remove individuals cashing in on Covid disinformation, Google confirmed its increased commitment and Patreon released an updated policy.
Throughout the pandemic, social media influencers have made money through AdSense, websites, and the subscription service Patreon by spreading unqualified, often dangerous conspiracy theories. These have ranged from false claims about vaccines to bizarre rants about Bill Gates.
When we approached Google and Patreon with numerous examples of individuals exploiting their platforms to generate large sums of money from spreading such disinformation, both assured us that action was being taken and, in some cases, that measures would be improved.
One example reported to both organisations is television personality Emma Kenny, best known for appearing as a ‘psychologist’ on numerous mainstream crime documentaries.
Over the past year, Kenny has focused heavily on promoting anti-vaxxer, anti-science narratives and conspiracy theories to her fans; even using her ‘mental health support group’ to preach her dangerous and paranoid theories to vulnerable viewers.
“There’s always been people behind the scenes pulling strings with money and bribes and wanting things to be done their way,” Kenny said on one occasion, attempting to draw an odd connection between the past and the current pandemic. “That IS rich white men for you,” she added, stereotyping offensively once again. “They’ve always lied to us. They’ve rarely had our interests at heart…. so you’re awake, and you’re seeing it… and we’re going to pull them back in check”.
Kenny has also been busy retweeting anti-vaxxer comments and accounts almost daily to her 56,000 followers and producing other dangerous videos, including one in which she told her audience that a Coronavirus vaccine ‘really isn’t necessary’, said she would not be letting her children have the vaccine, and encouraged people to refuse to take it.
After we showed them a series of videos containing conspiracy content, Google confirmed that it had ‘reviewed the videos flagged to us and took appropriate action’.
Meanwhile, Patreon went a step further, promising an updated set of guidelines and saying it would review individual accounts more closely over the coming months.
“Since the beginning of 2020 the world has been united in the struggle to contain and defeat the COVID-19 pandemic,” a spokesman told VoteWatch. “In recent months, as governments around the world have begun approving vaccines and other medicines to combat the spread and severity of COVID-19, we have also observed an increase in online activity promoting medical misinformation.
“We have been inspired by the actions that first-responders and health officials have taken all over the world in order to save peoples’ lives, and want to ensure that Patreon is not supporting efforts to miseducate patrons or the public at-large on COVID-19 prevention, mitigation, and treatment.
“Starting today, Patreon is updating its community guidelines to address content spreading misinformation on COVID-19.
“Patreon will not allow creators that repeatedly use unfounded or debunked theories to argue against broadly supported public health measures on COVID-19. Examples of this include repeated posts that advocate drinking bleach or using UV lights inside of your body as an effective alternative to a vaccine. As always, we will work with creators directly to bring their content back within our guidelines.
“In the weeks ahead, Patreon will introduce fact-checking resources for posts discussing COVID-19 designed to help creators and patrons access trusted information. In addition, our teams will add post-level moderation tools to address each situation with the nuanced response it deserves.
“To be clear, this policy update will not impact creators making content that debates the merits of, or expresses skepticism around, public health recommendations. We will always encourage the broadest dialogue possible amongst creators and their patrons. Our Trust and Safety team will work directly with creators to ensure that their content does not contain unfounded or debunked assertions on COVID-19 that would dissuade the public from taking preventative or responsive public health measures (like a vaccine). If you’re a creator who is unsure of whether or not your activity on Patreon is within our community guidelines, please feel free to send an email to us at email@example.com.
“While the COVID-19 pandemic has brought unprecedented disruption to lives across the globe, we’re hopeful that we may be turning a corner in this crisis. As global health officials and emergency responders work to contain and end the epidemic, we’re optimistic about the opportunities in store for creators in 2021 and beyond. We look forward to continuing our work to keep them safe and secure in a post-covid world.”
Meanwhile, a spokesman from Google told VoteWatch:
“YouTube has Community Guidelines, including policies that prohibit COVID-19 content that explicitly contradicts expert consensus from the NHS or the World Health Organization, and we removed flagged videos that violate these policies. We also have strict policies that govern what kind of videos we allow ads to appear on, and we enforce these advertising policies vigorously. Videos that promote harmful or dangerous acts or theories are not allowed to monetize.
“YouTube’s Community Guidelines — or content policies — govern what videos are acceptable to post, while our Advertising Policies are separate and detail what guidelines content must adhere to in order to be eligible for monetization.
“Since February 2020, we’ve removed over 800,000 videos related to dangerous or misleading coronavirus information. We remove things like:
“Content that disputes the existence or transmission of COVID-19, as described by the local health authorities and WHO.
“Content that discourages someone from seeking medical treatment or content that promotes medically unsubstantiated methods to prevent serious illnesses in place of seeking medical treatment is policy violative.
“Content that explicitly disputes the efficacy of WHO / local health authority recommended guidance on social distancing and self isolation that may lead people to act against that guidance.
“YouTube expanded our COVID-19 Medical Misinformation policy to remove claims about COVID-19 vaccinations that contradict expert consensus from local health authorities (such as the NHS) or the World Health Organization (WHO).
“The expanded COVID-19 Medical Misinformation policy includes specific information stating that any content that includes claims about COVID-19 vaccinations that contradict expert consensus from local health authorities or the World Health Organization (WHO) will be removed from YouTube. E.g. claims that the vaccine will kill people or cause infertility.
“In addition to our policies removing content, we’re committed to providing timely and helpful information at this critical time, including raising authoritative content, reducing the spread of harmful misinformation and showing information panels, using data from authoritative sources, to help combat misinformation.
“In October, we updated our hate-speech and harassment policies to prohibit content that targets an individual or group with conspiracy theories that have been used to justify real-world violence. The policy prohibits content promoting QAnon.
“Since June 2019, we’ve removed tens of thousands of Q-related videos, and terminated hundreds of Q-related channels for violating our Community Guidelines. Additionally, when users come to YouTube and search for topics prone to misinformation, like Q-related conspiracy theories, we provide additional context by linking to third-party sources at the top of the search page, and we prominently surface videos from experts or news organisations.”
“In 2018, we established stricter criteria for monetization on YouTube. Previously, channels had to reach 10,000 total views to be eligible for the YouTube Partner Program (YPP). Now channels need to have 1,000 subscribers and 4,000 hours of watch time within the past 12 months to be eligible to earn money from their content, including by ads placement. We have also been actively engaged with the Global Alliance for Responsible Marketing since its inception to help develop industry-wide standards for how to commonly address content that is not suitable for advertising.
“We also built better tools to help advertisers select suitable placements for their brands:
“We worked closely with our advertising partners on simpler suitability controls that make it easier for advertisers to opt out of content that is deemed unsuitable for their brand, while understanding potential reach trade-offs.
“We were recently accredited by the MRC.
“We have several long-standing publisher policies in place to prevent ads from running alongside dangerous or harmful content.
“We have long prohibited harmful health claims, including anti-vaxx content, from monetizing, and last summer we expanded our policies to disallow ads on content about a health emergency that contradicts scientific consensus.
“Additionally, we consider content promoting QAnon conspiracy theories to be a violation of our monetization policy, which prohibits inciting violence.
“Our policies are put in place for many reasons, including ensuring a good experience for people viewing our ads, preventing user harm, and helping make sure that ads follow applicable laws in the countries where they appear.
“They are also there to instill trust in our advertiser partners that their ads are running against content that is appropriate.”
We can only wish that Twitter would take the same action.