New Analysis from @SMLabTo Shows How a Meme is Being Used as a Weapon Against Expertise
Post by: Anatoliy Gruzd and Philip Mai
In the midst of a global pandemic, conspiracy theorists have found yet another way to spread dangerous disinformation and misinformation about COVID-19, sowing seeds of doubt about the pandemic's severity and denying its very existence.
Since March 28, conspiracy theorists, a.k.a. “coronavirus deniers,” have been using the hashtag #FilmYourHospital to encourage people to visit local hospitals and take pictures and videos to “prove” that the COVID-19 pandemic is an elaborate hoax.
The premise for this conspiracy theory rests on the baseless assumption that if hospital parking lots and waiting rooms are empty, then the pandemic must not be real or is not as severe as reported by health authorities and the media.
Of course, in reality, there is a simple explanation for why some hospital parking lots and waiting rooms might have been empty. As part of pandemic planning, many hospitals have banned visitors and doctors have had to postpone or cancel elective and non-urgent procedures to free up medical staff and resources. This is in keeping with expert advice from the CDC and other health authorities.
In addition, to slow the spread of the virus and prevent cross-infection of non-COVID-19 patients, the CDC also recommended that healthcare facilities create separate intake and waiting areas for coronavirus patients and reserve emergency areas for emergencies such as heart attacks and broken arms. Furthermore, with the lockdown, fewer people are exerting and hurting themselves, which has resulted in fewer heart attack and stroke visits to the emergency department.
This empty-hospital conspiracy theory joins a parade of false, unproven and misleading claims about the virus that have been making the rounds on social media, including allegations that 5G wireless technology somehow plays a role in the spread of the COVID-19 virus, or that taking cocaine or drinking bleach might prevent or cure the disease.
The Birth of a Meme
At the Ryerson University Social Media Lab, some of our research investigates how misinformation propagates across different social media platforms. One of the first steps when examining trending topics on social media is to look for signs of “social bots” — social media accounts designed to act on Twitter and other platforms with some level of autonomy — and “coordinated inauthentic behaviour” which may include coordinated activities that attempt to artificially manipulate conversations to make them appear more popular than they are.
These two forms of social manipulation, when left unchecked, can skew the conversation, manufacture anger where there is none, suppress opposition or dampen debate. These tactics may undermine our ability as citizens to make decisions and reach consensus as a society.
This new conspiracy campaign against the media and public health officials, with hospitals and medical staff caught in the middle, started on March 28 with a simple tweet by Twitter user @22CenturyAssets posing a question: “#FilmYourHospital Can this become a thing?”
For our analysis, we collected a sample dataset consisting of nearly 100,000 public #FilmYourHospital tweets and retweets posted by 43k public accounts on Twitter from March 28, the beginning of the campaign, until April 9.
Our analysis suggests that while the #FilmYourHospital campaign on Twitter is full of misleading and false COVID-19 claims, most of the active and influential accounts behind it don’t appear to be automated. However, we did find signs of ad hoc coordination among conservative internet personalities and far right groups attempting to take a baseless conspiracy theory and turn it into a weapon against their political opponents.
Importantly, we found that while much of the content came from users with limited reach, the oxygen that fueled this conspiracy in its early days came from just a handful of prominent conservative politicians and far right political activists like @DeAnna4Congress, @realcandaceo and @DonnaWR8. These power users used this hashtag to build awareness about the campaign and to encourage their followers to film what’s happening in their local hospitals. After the initial boost by a few prominent accounts, the campaign was mostly sustained by pro-Trump supporters, followed by a secondary wave of propagation outside the U.S.
You can learn more about how we arrived at these conclusions, including how and what data we collected in the Method and Analysis section below.
In normal times, outlandish conspiracies like this might make us shake our heads, but as COVID-19 cases continue to stalk the hallways of nursing homes in Canada and fill beds in New York hospitals, it is harder to ignore such upsetting conspiracies from the dark recesses of the internet. The rise of this conspiracy from a single tweet reminds us that while the spread of misinformation can be mitigated by fact-checking and directing people to credible sources of information from public health agencies, false and misleading claims that are driven by politics and supported by strong convictions rather than science are much harder to root out.
Method and Analysis
Research Tools and The Dataset
We used Netlytic to collect and analyze data, Gephi to visualize the resulting communication network over time, a Python library called Twarc to check whether an account had been deleted or suspended by Twitter, and the Botometer API to assess whether an account is automated (i.e., exhibits bot-like behaviour).
In total, we collected 99,039 posts contributed by 43,461 unique users during the studied period. As seen in Figure 1, while the campaign is still active on Twitter at the time of writing this post (April 21), the hashtag got the most attention on March 31, with a gradual decrease in engagement since then. As with most Twitter-driven, clicktivism-style campaigns, the majority of interactions are retweets (83%).
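As a rough illustration of how such a breakdown can be computed, here is a minimal Python sketch that classifies tweets as retweets, replies, or original posts. The field names and sample tweets are hypothetical placeholders, not the actual schema of our dataset:

```python
from collections import Counter

def interaction_breakdown(tweets):
    """Classify each tweet as a retweet, reply, or original post.

    `tweets` is a list of dicts with illustrative keys loosely mirroring
    the Twitter API: 'text' and 'in_reply_to' (None if not a reply).
    """
    counts = Counter()
    for t in tweets:
        if t["text"].startswith("RT @"):
            counts["retweet"] += 1
        elif t.get("in_reply_to"):
            counts["reply"] += 1
        else:
            counts["original"] += 1
    total = sum(counts.values())
    return {kind: n / total for kind, n in counts.items()}

# Invented sample data for illustration only.
sample = [
    {"text": "RT @someone: #FilmYourHospital", "in_reply_to": None},
    {"text": "RT @other: empty parking lot?", "in_reply_to": None},
    {"text": "Saw this myself today", "in_reply_to": None},
    {"text": "@user no, hospitals are full", "in_reply_to": "user"},
]
shares = interaction_breakdown(sample)
# shares["retweet"] is 0.5 for this toy sample
```

On real Twitter API data one would check the `retweeted_status` field rather than the "RT @" text prefix, but the counting logic is the same.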
Method: Social Network Analysis
To study how information and misinformation propagates through different online networks, we use a method called Social Network Analysis (SNA). SNA is a powerful technique to visualize and examine user interactions at scale. To analyze Twitter data using SNA, we first need to represent it as a network, where nodes represent Twitter users (individuals or organizations) and connections between nodes (called ties) represent interactions among users (reply, retweet or mention). For instance, a single tweet posted by Donald Trump to the @JustinTrudeau and @WhiteHouse accounts can be visualized as a simple network with three nodes, and two ties going from Trump’s account to the two accounts because both are mentioned in the tweet (Figure 2).
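The single-tweet example above can be sketched in a few lines of Python. This is an illustrative toy rather than the actual Netlytic pipeline: it extracts @-mentions from raw tweet text and records a directed tie from the author to each mentioned account.

```python
import re
from collections import defaultdict

def build_mention_network(tweets):
    """Build a directed network: author -> each @mentioned account.

    `tweets` is a list of (author, text) pairs; returns an adjacency
    map from each author node to the set of nodes it has ties to.
    """
    ties = defaultdict(set)
    for author, text in tweets:
        for mention in re.findall(r"@(\w+)", text):
            ties[author].add(mention)
    return ties

# The example from the text: one tweet mentioning two accounts yields
# three nodes and two outgoing ties from the author.
network = build_mention_network([
    ("realDonaldTrump", "Meeting with @JustinTrudeau at the @WhiteHouse today"),
])
# network["realDonaldTrump"] == {"JustinTrudeau", "WhiteHouse"}
```

Replies and retweets can be folded into the same structure by adding a tie from the replier/retweeter to the original author.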
Figure 2: Discovering a Network Structure, One Tweet at a Time
Figure 3 below depicts how 79k interactions among ~42k Twitter users who posted or retweeted using #FilmYourHospital appear as a network. This network excludes “isolated” accounts that did not interact with any other accounts.
Different colors are assigned automatically to highlight densely-connected groups of nodes (clusters) that are more likely to interact with each other than with the other accounts in the network. [An interactive version of this network visualization is available here]
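Gephi typically assigns such colors via modularity-based community detection (e.g., the Louvain algorithm). As a rough stand-in for that idea, and not the method used to produce the figure, here is a minimal label-propagation sketch in which each node repeatedly adopts the most common label among its neighbours until groups stabilise:

```python
def label_propagation(adj, rounds=10):
    """Toy community detection: each node adopts the most frequent label
    among its neighbours (ties broken by smallest label) until no label
    changes. `adj` maps each node to a set of its neighbours.
    """
    labels = {node: node for node in adj}
    for _ in range(rounds):
        changed = False
        for node in sorted(adj):  # fixed order keeps the toy deterministic
            if not adj[node]:
                continue
            freq = {}
            for nb in adj[node]:
                lbl = labels[nb]
                freq[lbl] = freq.get(lbl, 0) + 1
            best = min(l for l in freq if freq[l] == max(freq.values()))
            if labels[node] != best:
                labels[node] = best
                changed = True
        if not changed:
            break
    return labels

# Two disconnected triangles settle into two separate clusters.
adj = {
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"},
    "x": {"y", "z"}, "y": {"x", "z"}, "z": {"x", "y"},
}
clusters = label_propagation(adj)
```

Real modularity-based methods also account for edge density relative to a random baseline, which is why they scale to networks like the one in Figure 3.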
The Evolution of #FilmYourHospital Network over Time
To understand how this particular campaign manifested on Twitter, let’s examine the formation of this network over time. Figure 4a displays #FilmYourHospital interactions posted during the first three days of the campaign. Notably, one of the most influential users who triggered the viral spread of this misinformation campaign was @DeAnna4Congress, a verified account for DeAnna Lorraine, a former Republican Congressional candidate who recently ran against Nancy Pelosi for the U.S. House seat in California’s 12th District. Unlike the anonymous poster who started the hashtag (attracting only around 30 retweets), Ms. Lorraine lent legitimacy to the campaign and directly asked her 150k+ followers to “get #FilmYourHospital trending” and “[p]ost pics of ur hospital here!”.
During the subsequent two days (Figures 4b and 4c), the network lit up with mostly retweets of tweets posted by prominent conservative commentators and political activists like @realcandaceo (with 2M followers) and @DonnaWR8 (with 100K followers), an account with a fan at the White House.
Below is a video showing changes in the network over time:
#MAGA All the Way – Accounts Behind the Retweet Storm
While influential conservative politicians and activists are behind some of the earliest and most shared content in this network, the question remains: who is behind the retweets that formed the majority of interactions and sustained the network past the initiation stage? To answer this question, we examined the most active accounts in this network and the most frequently used keywords in their account bios (see Figure 5). The majority of users who posted a tweet using this hashtag described themselves as Trump supporters, using words and hashtags such as #MAGA (Make America Great Again), #KAG (Keep America Great), and #trump2020.
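A frequency count like the one behind Figure 5 can be approximated with a few lines of Python; the sample bios below are invented for illustration:

```python
import re
from collections import Counter

def top_bio_terms(bios, n=5):
    """Count hashtags and words across account bios (lowercased) --
    the rough approach behind a 'most frequent bio keywords' chart.
    """
    counts = Counter()
    for bio in bios:
        # match hashtags (with '#') and plain words alike
        counts.update(re.findall(r"#?\w+", bio.lower()))
    return counts.most_common(n)

# Invented example bios, not real accounts.
bios = [
    "Patriot. #MAGA #KAG",
    "Proud American #MAGA #Trump2020",
    "#MAGA all the way",
]
top = top_bio_terms(bios)
# '#maga' appears in all three sample bios, so it tops the count
```

In practice one would also strip stop words ("all", "the", etc.) before ranking, but the Counter-based core is the same.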
And while the majority of users appear to be Trump supporters from the U.S., seven days after the initial #filmyourhospital tweet, on April 3rd, the hashtag went international when a new cluster of users from Brazil emerged. It is shown as a green grouping of nodes in the top corner of the network visualization (Figure 6). Most of the tweets in this cluster centered on popular conservative commentators and activists from Brazil such as @allantercalivre (Allan dos Santos), a self-described businessman, journalist, blogger and prominent pro-Bolsonaro supporter. While interesting, the spread of this campaign in Brazil is not surprising. U.S.-style politics has been creeping into Brazilian society over the past few years, culminating in the election of Jair Bolsonaro, a far-right candidate, in the 2018 presidential election (more on this here and here).
Aside from ~15k tweets in Portuguese (presumably from Brazil) and ~73K tweets in English, there were tweets in 33 other languages, suggesting that the campaign received some international attention. The second and third largest non-English clusters were users tweeting in Arabic (1,616 tweets) and Japanese (515 tweets), but both of these clusters were much smaller and less interconnected than the Brazilian one.
Suspended Accounts and “Inauthentic” Behavior
Considering the controversial nature of this campaign, some have called into question whether it is an organic campaign or whether it is being driven by coordinated action, or what Twitter calls “inauthentic” behavior. To examine this further, we automatically checked each account and tweet in our dataset to see which of them were deleted or protected by the user, or suspended by the platform.
Since Twitter has much more information about its users than what’s available via its public API (such as users’ phone numbers and IP addresses), knowing what accounts were suspended by Twitter a few days after they joined the campaign gives us an estimate of how many suspicious accounts might be trying to amplify the campaign. An account can be suspended or temporarily restricted if it’s in violation of the Twitter community rules and standards. Some of the reasons for suspension include:
- artificially amplifying or suppressing information;
- interfering in elections;
- sharing synthetic/manipulated media which may cause harm; or
- promoting violence against, threatening, or harassing an individual or a group of people.
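To illustrate how such checks can be automated, here is a sketch that buckets tweets which failed to rehydrate (e.g., via the Twarc library) by the error code returned by the Twitter v1.1 API. The specific code numbers reflect Twitter's public documentation at the time and should be treated as assumptions:

```python
# Error codes as documented for the Twitter v1.1 API at the time of
# writing; treat the exact numbers as assumptions.
SUSPENDED = 63   # "User has been suspended"
DELETED = 144    # "No status found with that ID"
PROTECTED = 179  # "Sorry, you are not authorized to see this status"

def classify_missing(hydration_errors):
    """Given {tweet_id: error_code} for tweets that failed to rehydrate,
    bucket them by the likely reason they disappeared.
    """
    buckets = {"suspended": [], "deleted": [], "protected": [], "other": []}
    for tweet_id, code in hydration_errors.items():
        if code == SUSPENDED:
            buckets["suspended"].append(tweet_id)
        elif code == DELETED:
            buckets["deleted"].append(tweet_id)
        elif code == PROTECTED:
            buckets["protected"].append(tweet_id)
        else:
            buckets["other"].append(tweet_id)
    return buckets

# Made-up tweet IDs and error codes for illustration.
buckets = classify_missing({101: 63, 102: 144, 103: 144, 104: 179})
```

Rehydrating the same set of tweet IDs at two points in time and diffing the buckets gives the kind of before/after comparison described above.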
In total, we found that about 5% of tweets were either deleted by users or disappeared because they were flagged by the platform (or other users). While there are a number of reasons why individuals may choose to delete their own “regrettable” tweets (such as the presence of negative sentiment, cursing, or content related to sex, alcohol, drugs, violence, race or religion), we were more interested in those tweets that disappeared because their posters were suspended by Twitter, which would be an indication of an account violating the community norms as established by the platform.
From our analysis, as of April 17, we identified 1,059 accounts in the network that were either suspended or temporarily restricted by Twitter. We then used this information to highlight the corresponding nodes and interactions in the network visualization in red (see Figure 7). Based on the visualization, while temporarily or permanently suspended accounts appear to be spread out across the network, they are especially concentrated in the top right corner. This is interesting because this is the part of the network that was especially active during the second period of the campaign.
Bot or Not?
The identification of suspended accounts helped us understand the prevalence of potentially inauthentic behaviour or coordinated efforts to amplify a campaign. To check whether the accounts that contributed to this hashtag campaign are automated (a.k.a. bots), we used a machine learning tool to classify accounts as likely bots or not. While there are multiple approaches to detecting social bots on Twitter, none of them is perfect. Bot detection is a game of cat and mouse, as bot makers are constantly evolving, becoming more sophisticated and finding new ways to bypass detection. As a result, what’s presented in this section is still very preliminary.
For this exploratory analysis, we relied on a third-party tool called Botometer to check a sample of 1,213 accounts (~2.79%) that posted at least 10 tweets in the dataset, including 28 accounts that posted over 100 tweets. We focused on this sample because these accounts were the most active relative to the other accounts in the dataset.
Botometer assigns each account a Complete Automation Probability (CAP) score between 0 and 1, indicating how likely it is that the account is automated (0 = not likely; 1 = highly likely). We used the language-independent version of the CAP score since some accounts did not tweet in English. These scores are assigned based on a number of factors, such as when the account was created, how often it posts, and who else it is connected to on Twitter. Figure 8 shows the overall distribution of Botometer scores. Of the 1,213 accounts tested, we couldn’t retrieve scores for 26: 10 were suspended by Twitter, 10 were deleted by their users, and 6 became “protected” accounts.
We consider any account with a CAP score of 0.3 or above as potentially automated. The threshold of 0.3 was chosen based on empirical observations from our BotsWatch dashboard, which uses the same Botometer API to identify potential bots in public COVID-19 related tweets in general. Zhang et al. recommended a similar threshold of 0.25 (2019, Appendix X).
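Applying the threshold is straightforward; the sketch below (with made-up account names and scores) splits a set of CAP scores at 0.3:

```python
def flag_potential_bots(cap_scores, threshold=0.3):
    """Split accounts by a Botometer-style CAP score: scores at or above
    `threshold` are flagged as potentially automated. The 0.3 cutoff is
    a judgement call, not a hard rule.
    """
    flagged = {a: s for a, s in cap_scores.items() if s >= threshold}
    human_like = {a: s for a, s in cap_scores.items() if s < threshold}
    return flagged, human_like

# Invented account names and scores for illustration.
scores = {"acct_a": 0.05, "acct_b": 0.31, "acct_c": 0.12, "acct_d": 0.80}
flagged, human_like = flag_potential_bots(scores)
# flagged contains acct_b and acct_d; the other two look human-like
```

Because CAP is a probability rather than a binary label, shifting the threshold trades false positives against false negatives, which is why the cutoff is worth stating explicitly.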
In total, the vast majority of accounts (1,157, or 98%) had a Botometer score below 0.3; only 30 accounts scored 0.3 or above. This suggests that most of the most active accounts involved in this campaign appear to be human-like in their behaviour, with only a small number exhibiting bot-like behaviour.
One of the most terrifying aspects of COVID-19 for some of the sickest patients is the prospect of dying alone, without a loved one nearby, due to the stringent safety and isolation protocols that had to be put in place to slow its spread. Tragically, these prudent measures are being decontextualized and exploited by bad actors and coronavirus deniers to peddle dangerous disinformation and to encourage people to behave in ways that undermine the work of public health officials and put the public and front-line medical workers at risk.