The Dangers of Manipulated Media in the Midst of a Crisis
from Digital and Cyberspace Policy Program and Net Politics

While social media companies have made some efforts to address manipulated media—photos, video, or audio that have been edited, distorted, or misappropriated—more needs to be done. Social media companies, researchers, and lawmakers should collaborate to create coherent policies for regulating manipulated media that safeguard civil liberties.
Speaker of the House Nancy Pelosi (D-CA) rips up U.S. President Donald Trump's speech alongside Vice President Mike Pence following the State of the Union address. REUTERS/Jonathan Ernst

Megan Lamberth is a research assistant for the Technology and National Security program at the Center for a New American Security.

In the immediate aftermath of the U.S. drone strike that killed Iranian General Qasem Soleimani, the internet was flooded with purportedly real-time information about the circumstances surrounding the attack and possible Iranian retaliation. The event played out on social media as one would expect. On Twitter, thousands of users claimed access to facts on the ground. In one instance, a Twitter user tweeted out a photo, alleging that an Iranian missile struck Iraq’s Ain al-Asad air base. The tweet spread rapidly. Eventually, reputable news sources stepped in and debunked the story—revealing that the tweeted image was from an entirely separate incident months earlier.

It is in these moments of crisis and uncertainty that disinformation can gain a remarkable foothold, and a single piece of manipulated media can spark mass panic. The escalating tension between the United States and Iran, showcased in real time on social media, is a stark reminder that manipulated media—photos, video, or audio that have been edited, distorted, or misappropriated—have the capacity to incite violence, disrupt elections, and harm diplomatic relations.

Lawmakers, social media companies, research labs, and technology experts are working to address the proliferation of manipulated media, but these efforts have so far been insufficient. For real progress to be made, social media companies must play a greater role in limiting the harmful effects of manipulated media. Policymakers, tech companies, and researchers also need to cooperate to ensure that today’s digital environment is safeguarded against malicious actors looking to sow disinformation, while still protecting the rights of Americans online.

Social media companies feel growing pressure to respond to the threat of manipulated media, particularly as the U.S. presidential election intensifies. Last week, Twitter and YouTube announced new policies for combating manipulated media. YouTube’s new policies focus on how it plans to police its platform during the election season—vowing to remove content that attempts to mislead users about the election or voting process. Twitter’s new policies are more thorough and provide guidelines for when the platform will remove manipulated content that is “likely to impact public safety or cause serious harm.” In cases where a piece of content is manipulated but does not pose a threat to public safety, Twitter will label a tweet to warn users of the distortion.

Both Twitter’s and YouTube’s new policies are broader than the one Facebook announced earlier this year. Facebook’s policy focuses specifically on videos identified as deepfakes—media that has been “edited or synthesized” using artificial intelligence or machine learning—but permits content created for parody or satire, a rather ambiguous and subjective qualification.

Facebook and Twitter’s new policies were put to the test after President Trump posted a doctored video of Speaker Nancy Pelosi (D-CA) appearing to rip up a copy of his State of the Union speech as he acknowledged military heroes and other guests. In reality, Pelosi ripped up her copy only after the president finished his speech. Democratic lawmakers called on Facebook and Twitter to remove the edited video, but neither platform acquiesced. Twitter released a statement clarifying that its new policy does not take effect until early March, while Facebook claimed the manipulated video did not meet its standards for removal.

While still a nascent technology, deepfakes have spawned numerous efforts to address them. Facebook, Amazon, and Microsoft are teaming up to support deepfake detection research, with the goal of creating tools that can detect and flag manipulated content. Government agencies like DARPA and start-ups such as Amber and Deeptrace are racing to develop similar technology that can verify the authenticity of a video or audio clip. Last week, Jigsaw, a unit of Google’s parent company Alphabet Inc., unveiled an experimental new platform—the Assembler—which uses machine learning to detect image manipulation, including deepfake images. While Jigsaw does not plan to offer the tool publicly, it aims to help “fact-checkers and journalists identify manipulated media.”

These new policies from Facebook, Twitter, and YouTube are a step in the right direction, but tech companies should increase collaboration with one another—sharing information and best practices—to combat manipulated media on their respective platforms. Tech companies also need to be transparent with users, creating clear and consistent guidelines for how they police their platforms.

Congressional leaders from both sides of the aisle are wrestling with how to address manipulated content. Last October, Senators Mark Warner (D-VA) and Marco Rubio (R-FL) sent letters to eleven social media companies, urging the platforms to develop industry standards for “removing, archiving, and confronting the sharing of synthetic content.”

Lawmakers have also proposed a number of bills addressing manipulated media. One proposal, the Identifying Outputs of Generative Adversarial Networks Act (IOGAN Act), would require the National Science Foundation (NSF) and National Institute of Standards and Technology (NIST) to work with technologists in academia and the private sector to develop tools that can detect manipulated media. The proposed Deepfake Report Act of 2019 would direct the Department of Homeland Security to publish an annual report assessing the current state of “digital content forgery” technology. Lawmakers should work closely with tech companies to ensure that social media companies’ manipulated content policies are consistent with First Amendment liberties and are robust enough to protect against the proliferation of manipulated media.

The digital information environment is infused with a near-constant stream of disinformation. Advances in artificial intelligence and machine learning are simultaneously making manipulated content more convincing and easier to create. Social media companies should work in tandem with one another, as well as with lawmakers and research labs, to reduce the malicious effects of manipulated media. If the United States does not properly prepare for the growing deluge of manipulated media, we may find ourselves living in a society where truth and facts are simply in the eye of the beholder.

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.