
Taliban-related content banned on Facebook, Instagram and WhatsApp


Taliban fighters with a vehicle on a highway in Afghanistan. (Photo: Saibal Das | The India Today Group | Getty Images)

Facebook has banned the Taliban and any content that promotes it from the main Facebook platform, Instagram and WhatsApp.

The social media giant told CNBC Tuesday that it considers the Afghan group, which has used social media platforms to project its messages for years, to be a terrorist organization.

Facebook said it has a dedicated team of content moderators that is monitoring and removing posts, images, videos and other content related to the Taliban.

Afghanistan fell to the Islamic militant group over the weekend as it seized the capital, Kabul, and the Presidential Palace. After President Joe Biden’s April decision to withdraw U.S. troops, the Taliban made stunning battlefield advances, and nearly the whole country is now under the insurgents’ control.

A Facebook spokesperson told CNBC: “The Taliban is sanctioned as a terrorist organization under U.S. law and we have banned them from our services under our Dangerous Organization policies.”

Facebook said this means it will remove accounts that are maintained by or on behalf of the Taliban, as well as those that praise, support or represent them.

“We also have a dedicated team of Afghanistan experts, who are native Dari and Pashto speakers and have knowledge of local context, helping to identify and alert us to emerging issues on the platform,” the Facebook spokesperson said. 

Facebook said it does not decide whether it should recognize national governments. Instead, it follows the “authority of the international community.”

WhatsApp dilemma?

Reports suggest that the Taliban is still using WhatsApp to communicate. The chat platform is end-to-end encrypted, meaning Facebook cannot see what people are sharing on it.
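WhatsApp’s end-to-end encryption is built on the Signal protocol, which is far more elaborate than anything shown here, but a minimal sketch using the PyNaCl library illustrates the basic point the article makes: a relay server that only ever handles ciphertext cannot read message contents. The parties and message below are invented for illustration.

```python
from nacl.public import PrivateKey, Box

# Each party generates a key pair; only the public halves are shared.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_sk, bob_sk.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# A relay server only ever sees `ciphertext`; without a private key
# it cannot recover the plaintext.
receiving_box = Box(bob_sk, alice_sk.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```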

“As a private messaging service, we do not have access to the contents of people’s personal chats; however, if we become aware that a sanctioned individual or organization may have a presence on WhatsApp, we take action,” a WhatsApp spokesperson reportedly told Vice on Monday.

A Facebook spokesperson told CNBC that WhatsApp uses AI software to evaluate non-encrypted group information, including names, profile photos and group descriptions, to meet legal obligations.
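The systems WhatsApp actually uses are not public. As a purely hypothetical illustration of screening unencrypted metadata rather than message contents, a moderation pipeline might match group names and descriptions against a list of sanctioned-entity terms; the term list, function name and logic below are invented for this sketch.

```python
import re

# Hypothetical term list for illustration only; real enforcement
# systems are far more sophisticated than keyword matching.
BLOCKED_TERMS = [r"\btaliban\b"]

def flag_group(name: str, description: str) -> bool:
    """Return True if a group's public metadata matches a blocked term."""
    text = f"{name} {description}".lower()
    return any(re.search(pattern, text) for pattern in BLOCKED_TERMS)

print(flag_group("Neighborhood watch", "Local safety updates"))  # False
```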

Alphabet-owned YouTube said its Community Guidelines apply equally to everyone, and that it enforces its policies based on both the content itself and the context in which it is presented. The company said it allows content that provides sufficient educational, documentary, scientific and artistic context.

A Twitter spokesperson told CNBC: “The situation in Afghanistan is rapidly evolving. We’re also witnessing people in the country using Twitter to seek help and assistance. Twitter’s top priority is keeping people safe, and we remain vigilant.”

“We will continue to proactively enforce our rules and review content that may violate Twitter Rules, specifically policies against glorification of violence, platform manipulation and spam,” they added.

Rasmus Nielsen, a professor of political communication at the University of Oxford, told CNBC it is important that social media companies act consistently in crisis situations.

“Every time someone is banned there is a risk they were only using the platform for legitimate purposes,” he said.

“Given the disagreement over terms like ‘terrorism’ and who gets to designate individuals and groups as such, civil society groups and activists will want clarity about the nature and extent of collaboration with governments in making these decisions,” Nielsen added. “And many users will seek reassurances that any technologies used for enforcement preserve their privacy.”