What are deepfakes, and how do they differ from other forms of manipulated video? What steps are platforms taking to reduce content they consider harmful?
This week, we’ve been reading about strategies for identifying AI-powered disinformation, as well as recent moves by platforms to curb harmful behavior by banning groups and cutting off ad revenue from creators.
We hope you enjoy these articles – and please feel welcome to share ideas for other readings with us here or on Twitter @DisinfoIndex.
How do we work together to detect AI-manipulated media? (WITNESS Media Lab, 21 June 2019)
How Kamala Harris conspiracies festered online before making it to Trump Jr. (CNN, 1 July 2019)
New deepfake tech turns a single photo and audio file into a singing video portrait (The Verge, 20 June 2019)
Ravelry, The Knitting Website, Bans Trump Talk And Patterns (NPR, 24 June 2019)
Reddit ‘quarantines’ its biggest pro-Trump message board (The Guardian, 26 June 2019)
Seeing Isn’t Believing: The Fact Checker’s guide to manipulated media (Washington Post, 25 June 2019)
Top Takes: Suspected Russian Intelligence Operation (DFRLab, 22 June 2019)
What Big Tech Is (and Isn’t) Doing to Fight Anti-Vaccine Misinformation (The Wall Street Journal, 2 July 2019)
YouTube looks to demonetization as punishment for major creators, but it doesn’t work (The Verge, 25 June 2019)