
US Senators push Facebook, Twitter and YouTube to clamp down on deepfakes

Facebook, YouTube, Twitter, Snap and other tech giants have been asked by US senators to “develop industry standards” to tackle deepfakes before they have a “corrosive impact on democracy”.

Deepfakes – video, audio and imagery convincingly manipulated through machine learning or CGI (or both) – have gained traction in recent years, with many open-source methods of generating such clips coming to the fore.

Now, two members of the US Senate Intelligence Committee are calling on digital platforms to come up with a blueprint to remove and archive content that has been manipulated this way.

Democratic Senator Mark Warner of Virginia and Republican Senator Marco Rubio of Florida have penned letters to 11 companies, including Twitch, Reddit and LinkedIn, asking them to “put plans in place to address the attempted use of these technologies.”

The pair wrote: “As concerning as deepfakes and other multimedia manipulation techniques are for the subjects whose actions are falsely portrayed, deepfakes pose an especially grave threat to the public’s trust in the information it consumes; particularly images and video and audio recordings posted online.”

“If the public can no longer trust recorded events or images, it will have a corrosive impact on our democracy,” they added.

President Trump, Barack Obama, the late John F Kennedy and even Mark Zuckerberg have fallen victim to deepfake creators over the past 18 months, alarming observers and fuelling concerns that the tech could be used to spread fake news, influence elections and further undermine trust in digital media.

The technology could also pose fresh brand safety challenges for marketers, who could see their brand messages manipulated in the media.

Pointing to research from Pew, which found that two-thirds of Americans get their news on social networks, Warner and Rubio's letter poses several questions about how the companies currently police deepfakes, the technology they use to do so and how users are notified when something is removed or replaced. They also ask how companies notify potential victims of the practice.

How are tech giants tackling deepfakes?

Just last week, Google released a database of 3,000 deepfake videos featuring actors, created with a variety of tools to alter their faces. The company hopes the move will help researchers build features to remove the videos – which can be used to spread propaganda and conspiracy theories.

"Since the field is moving quickly, we'll add to this dataset as deepfake technology evolves over time and we'll continue to work with partners in this space,” said the company in a blog post.

"We firmly believe in supporting a thriving research community around mitigating potential harms from misuses of synthetic media. While many are likely intended to be humorous, others could be harmful to individuals and society."

Facebook’s policy communications manager Andy Stone said in a statement that deepfake video development and the potential for use by bad actors required “a whole-of-society approach”.

"We are committed to working with others in industry and academia to come up with solutions,” he added.

Twitter said tackling the issue was being covered by its wider efforts to battle the spread of misinformation, while LinkedIn said the company removes "confirmed fake content" and was investing in systems that give it the ability to monitor, detect and remove inappropriate content.
