
On 15 February 2020, the Director-General of the World Health Organization (WHO) said that “we’re not just fighting an epidemic; we’re fighting an infodemic” in relation to coronavirus. An infodemic, according to the WHO, is an “over-abundance of information” that makes it difficult for people to assess what is trustworthy and what is fake.
Since lockdown measures were announced on 23 March 2020, Ofcom has carried out a weekly analysis (now published fortnightly) of how people in the UK are accessing news and information about coronavirus. In its most recent analysis, covering 9 to 16 May 2020, Ofcom found that 43% of respondents said they had come across false or misleading information about coronavirus in the past week. In week one of the study, 20 to 27 March 2020, 46% of respondents said the same. The analysis also found that the same proportion of respondents (40%) in both the most recent week and the first week said they find it hard to know what is true and what is false about coronavirus.
NewsGuard, which rates websites by their trustworthiness, has identified 233 websites (as of 28 May 2020) from the US, UK, France, Italy and Germany that are promoting false information about the virus.
What is disinformation?
The Government considered disinformation as part of its online harms white paper, published in April 2019. In the white paper, the Government defined disinformation as “information which is created or disseminated with the deliberate intent to mislead” whilst misinformation is defined as “the inadvertent sharing of false information”.
The spread of disinformation is a global challenge and is not limited to the current health crisis. In January 2020, the WHO listed earning public trust as an urgent global healthcare challenge to tackle in the upcoming year. It said public trust could be undermined by the rapid spread of false information on social media. In the years before the coronavirus outbreak, the WHO had been working with technology companies such as Pinterest, Google and Facebook to promote reliable, accurate health information.
What are examples of coronavirus disinformation?
Since March 2020, the European External Action Service (EEAS), the EU’s diplomatic service, has been publishing reports detailing instances of disinformation about coronavirus that are being spread across Europe. In its report on 19 March 2020, the EEAS gave an overview of current false narratives. These included:
- coronavirus is a biological weapon deployed by China, the US, the UK or Russia;
- coronavirus is linked to 5G technology;
- coronavirus is a hoax; and
- natural remedies can be used to cure the virus.
Ofcom’s analysis found that the most common false theories circulating in the UK since lockdown measures were introduced were those linking coronavirus to 5G technology. It has been reported that there have been over 90 attacks on 5G phone masts in the UK since lockdown began.
What is the Government doing?
On 30 March 2020, the Government said that its Rapid Response Unit would be used to tackle false information related to coronavirus. The Rapid Response Unit was set up in early 2018 and works across government departments to counter all types of false information found online. The Government confirmed that the Rapid Response Unit would respond to instances of false information identified by a newly established Counter Disinformation Cell, overseen by the Department for Digital, Culture, Media and Sport (DCMS).
As part of its March announcement, the Government stated that when false coronavirus information is identified, action from the Rapid Response Unit could include:
- a direct rebuttal on social media;
- working with platforms to remove harmful content; or
- ensuring public health campaigns are promoted through reliable sources.
In addition, the Government relaunched its ‘Don’t Feed the Beast’ campaign in April 2020 which encourages people to interrogate information to make sure it is true before they share it.
What are social media companies doing?
On 17 March 2020, technology companies including Facebook, YouTube, Twitter and Google issued a joint statement saying that they were working together to combat misinformation. Strategies used by these companies to tackle false information include: using fact checking services and de-prioritising content that is flagged as false; removing content that poses an imminent threat of harm, for example posts encouraging the use of bleach as a cure for coronavirus; and alerting people when they have interacted with a post (commented or liked) that has since been removed.
On 25 March 2020, the DCMS sub-committee on online harms and disinformation opened an inquiry into fake news and coronavirus on social media sites. On 30 April 2020, the committee took evidence from senior figures from Facebook, Twitter and Google. The chair of the committee, Julian Knight, said that the committee “were very disappointed by the standard of evidence given” by the social media companies at the session. The chair has since recalled the three companies to Parliament and has requested that the companies send “senior executives who have knowledge of their policies” to a future meeting on the topic.
What next?
Experts have expressed concerns that false information promoted by the anti-vaccination movement may undermine efforts to immunise populations against coronavirus, should a vaccine become available. In 2020, the WHO linked the anti-vaccination movement to a rise in deaths from preventable diseases across the world. In the UK, a survey conducted by ORB International in March 2020 suggested that 80% of respondents would be willing to have a vaccination against coronavirus.
Image by Thomas Ulrich from Pixabay