Tech Against Terrorism

Can Tech Platforms Rely on Algorithms to Remove Terrorist Content?

April 27, 2022 · Season 2, Episode 11
Show Notes

On this week’s episode, we explore the role of automated solutions and Artificial Intelligence (AI) in tackling terrorist and violent extremist content and activity online. With the help of our expert guests, we delve into the historical use of machine-learning algorithms for content moderation, look at how they have developed over the past decade or so, and discuss their potential going forward.

We examine some of the potential biases and ethical considerations around automated removal systems, such as the mistaken removal of war crime evidence or of Arabic-language political speech. Our guests explore how algorithms can best be used to tackle terrorist content, highlighting their potential for understanding patterns of terrorist behaviour online.

This week, Anne Craanen speaks to Adam Hadley, Founder and Executive Director of Tech Against Terrorism. We also hear from Dia Kayyali, Director for Advocacy at Mnemonic, where they focus on the real-life impact of policy decisions made by lawmakers and technology companies about content moderation and related topics, and from Chris Meserole, a Fellow in Foreign Policy at the Brookings Institution and Director of Research for the Brookings Artificial Intelligence and Emerging Technology Initiative. Chris is also an adjunct professor at Georgetown University.

To find out more about Tech Against Terrorism and our work, visit techagainstterrorism.org or follow us on Twitter @techvsterrorism, where you can find further resources on this topic.