Open Letter — Responses from Social Media platforms

7 min read · Aug 19, 2021


Firstly, we would like to thank the 400 brands, agencies, civil society groups and individuals that signed the open letter CAN wrote in response to the dreadful racist abuse directed at Marcus Rashford, Jadon Sancho and Bukayo Saka.

The letter was one of solidarity. The work done over the years to confront racism in football by bodies such as Kick It Out, the PFA and Show Racism the Red Card, and by individual footballers and campaigners, has been incredible. These are the people who have done the hard work over decades to kick racism out of football and society. Our letter aimed to support those efforts and to shine a light on areas where we felt the social media platforms needed to do more, and more quickly.

We’ve talked to all the platforms mentioned in the letter, and here are their responses to our four asks:

1. We asked: Publish updated hate speech policies that include the use of emojis, to support your zero-tolerance approach.

Snap told us that their Community Guidelines have always applied to all content on Snapchat, including emojis, but that they are updating the Guidelines to make explicit that they cover the use of emojis.

“We have listened to feedback that we have received over the last couple of weeks, including from the CAN, and will be making a change to our Guidelines, just to make extra clear that these apply to all content on Snapchat, and explicitly including emojis.”

We welcome this action from Snap.

Facebook have said “Facebook doesn’t allow attacks on people based on their protected characteristics, which includes race, religion, nationality or sexual orientation. This includes hate speech in written or visual form, which includes emojis. Emojis fall under the visual form of hate speech, which is named in the policy here.”

However, it does not appear that they are going to update their Facebook Community Standards or the Instagram Community Guidelines to include specific reference to emojis.

Twitter told us that they appreciate the reasons behind this ask and are having internal conversations about how best to develop their policies.

Conclusion: We remain convinced that adding a specific reference to emojis to each platform’s policies is an easy fix, and we will continue to engage with the platforms on this.

2. We asked: Advertise your zero-tolerance approach directly to users.

It is worth elaborating on this ask. Chief Constable Mark Roberts, the National Police Chiefs’ Council’s football policing lead, said on 4th August:

“There are people out there who believe they can hide behind a social media profile and get away with posting such abhorrent comments.”

Social media users do not read through all the policies and community guidelines before posting on social media, never mind before posting something racist. CAN’s ask of the platforms was to bring the community standards and policies out from where people have to search for them, and to build an advertising campaign around them. Increased awareness of the policies, and of the possible consequences of breaking them, would act as a deterrent and communicate clear boundaries of acceptable behaviour.

Snap believes there is merit to the idea of a joint campaign with other platforms and perhaps an organisation like Kick It Out.

Facebook & Instagram said they are ‘determined to do campaigns’ in this area and that they are in ‘close discussions with other platforms and the football bodies about what further campaigns we can run as part of the Online Hate Working Group’. They highlighted existing promotions with Kick It Out on Take a Stand, as well as with BT Sport, Arsenal and UEFA, to raise awareness of the tools available to help combat online abuse.

They also shared recent blog posts announcing tougher action when people break the rules in Instagram direct messages; their Hidden Words feature, launched in the UK in April, which allows people to filter out specific words, phrases or emojis; and a commitment to “show stronger, more prominent warnings when someone tries to post a potentially offensive comment, even for the first time”.

Twitter told us that they appreciate the reasons behind this ask and are having internal conversations about how best to develop campaigns.

Conclusion: The most powerful action would be for all the platforms to come together with the football community to run a high-profile advertising campaign highlighting their zero tolerance of hate speech, and the consequences of being found to post or share it. While each platform is different and each policy is different, CAN believes there is enough common ground to create a successful campaign that challenges the perceived lack of accountability described by the Chief Constable. We understand that such a campaign is complex to create, but it is disappointing that no commitment was made before the start of the football season.

3. We asked: Enforce your policies and report racist abuse to the police, employers and relevant football clubs as a crime.

We believe that racist abuse directed online should be subject to the same laws as racist abuse offline. We also believe that people who racially abuse others online should be subject to the same bans from football grounds as if they had shouted racist abuse inside them.

Having consulted with the platforms and the industry, we believe we were wrong to ask the platforms to report all possible racist abuse to the police. We have always recognised that it is not for the platforms to decide what is and what is not a crime; that is for the police. However, we understand that the police may not currently have the resources to deal with the volume of posts if everything containing racist content or a possible hate crime were automatically passed to them. Our intention was not to overwhelm police capacity but to call for accountability where a hate crime has been committed.

Conclusion: We acknowledge the progress made since the final, and the arrests that have followed. We note and support the comments made by Chief Constable Mark Roberts, who said on 4th August that the investigation was proceeding “at pace” and that a “vast amount of work” went into identifying the 11 people arrested so far. He has also said the complexities of investigating social media abuse “cannot be underestimated” and thanked Facebook, Instagram and Twitter for responding quickly to police inquiries.

The Online Harms Bill, once made law, will support the efforts of law enforcement to counter online hate.

4. We asked: Add an interstitial to disrupt potentially racist remarks, and ensure human checking on all posts flagged in this way.

Interstitials have been used during the Covid pandemic to try to halt the spread of dangerous disinformation and misinformation. Our recommendation was to expand this technology to disrupt racist posts and messages. We were already aware that certain terms and phrases can also be used to reclaim them, and to call out racist content on the platforms. It was never our intention to interrupt this in any way, which is why we did not call for certain words and emojis to be banned.

Snap explained that they “curate and moderate public spaces on Snapchat, which prevents the opportunity for harmful content such as racist abuse to be surfaced publicly on the app, and meets the aim of stopping racist remarks from being broadcast.”

“In private spaces like chats, privacy is important to our community, and Snapchatters have a justifiable expectation that their private 1:1 and group communications are not being monitored or scanned (just as is the case with private phone calls and text messages, for example). An interstitial-style intervention is not something we would be comfortable deploying in private spaces on Snapchat. It is important to be clear, though, that just because chats are private does not mean there is nothing we can do about illegal or harmful content. Snapchatters can quickly and easily report content or individual accounts in chats, which enables our Trust & Safety team to review and take appropriate action. We believe this approach strikes the right balance between ensuring the privacy and safety of our community.”

Facebook & Instagram said they are ‘determined to do campaigns’ in this area. On 10th August 2021 they introduced a set of new features to help protect people from abuse on Instagram, including:

  • The ability for people to limit comments and DM requests during spikes of increased attention;
  • Stronger warnings when people try to post potentially offensive comments;
  • The global rollout of the Hidden Words feature, which allows people to filter abusive DM requests.

Twitter told us that in May 2021 they launched prompts on iOS and Android encouraging people to pause and reconsider potentially harmful or offensive replies before they hit send, starting with accounts that have English-language settings enabled.

Conclusion: While we support the rollout of tools that help users, including footballers, control what they do and don’t see, we feel the emphasis must be on the perpetrators of racist abuse. The platforms have talked about opportunities for ‘reframing and education’ before an offending account is suspended or disabled. We support this, but that education happens after the abuse has been posted. We believe an interstitial creates an opportunity for reframing and education before the abuse is posted. The interstitial could either ask the simple question ‘Are you sure you want to post this?’ or be more detailed, recognising the very clear differences between abuse, reclaiming and calling out.




The Conscious Advertising Network is a voluntary coalition of over 70 organisations on a mission to stop advertising abuse.