Tech Giants Pledge to Curb AI-Made Election Misinformation
AI's Speed and Scale of Deception Is 'Unprecedented,' Says US Senator

Twenty technology giants including Google and Meta pledged Friday to combat artificially generated deepfake content meant to deceive voters, as more than 4 billion people in more than 70 countries prepare for elections this year.
The companies said they will "work collaboratively on tools to detect and address online distribution of such AI content, drive educational campaigns, and provide transparency." Signatories also include Amazon, IBM and Microsoft as well as OpenAI, Anthropic and Stability AI.
The calculation that election misinformation could reach more than 4 billion people in more than 70 countries comes from a study by The Economist. Misinformation is not a new threat, but government officials and academics warn that this year's AI-driven deluge could be large enough to sway election outcomes.
The accord, unveiled by the companies during the annual Munich Security Conference gathering of world leaders in Germany, targets AI-generated audio, video and image deepfakes that can mimic election participants or provide false voting information.
"We think about the techniques that were used in 2016 and 2018 and 2020 - they were literally child's play compared to the threats and challenges we face across the board," said U.S. Sen. Mark Warner, chair of the Senate Intelligence Committee, at the conference. A 2019 bipartisan committee report says that the Russian government had authorized and directed a disinformation campaign to influence the outcome of the 2016 presidential contest.
"The scale and speed with which AI tools can cause deception, misrepresentation, out-and-out lying, is also unprecedented," Warner said.
Social media platforms in the United States are not typically liable for the third-party content - including disinformation - that appears on their sites, due to a 1996 law known as Section 230 that shields online intermediaries from lawsuits that involve user-generated content.
Section 230 has so far withstood attempts to undo it, and the U.S. Congress is far from approving a comprehensive regulation that would place restrictions on generative AI models. The Biden administration, as a result, has relied on voluntary measures, many from the same companies that joined Friday's Munich Security Conference pledge (see: 7 Tech Firms Pledge to White House to Make AI Safe, Secure).
Regulatory agencies have used existing authorities to enact restrictions on deepfakes within their jurisdictions, but critics say these measures can only go so far in stopping the propagation of deepfakes. At the state level, lawmakers in 27 states have introduced bills to regulate deepfakes in elections.
A major part of tech companies' voluntary pledges is watermarking - the practice of embedding subtle "noise" into content produced using generative AI algorithms in order to make the output identifiable as artificial. Experts told Information Security Media Group late last year that the method is as likely to fail as succeed, as some of the techniques can easily be broken (see: Watermarking Generative AI: Hype or Cure-All?).
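Watermarking schemes vary widely, but the basic idea can be illustrated with a small sketch: embed a key-derived pseudorandom noise pattern into an image, then check for that pattern later by correlating against the same key. The code below is a hypothetical, simplified illustration - the function names, strength value and detection threshold are assumptions for this sketch, not any signatory's actual scheme.

```python
# Minimal sketch of noise-based watermarking with NumPy (illustrative only;
# production schemes are more sophisticated, but face similar attacks).
import numpy as np

def embed_watermark(image: np.ndarray, key: int, strength: float = 4.0) -> np.ndarray:
    """Add a key-derived pseudorandom +/-1 noise pattern to a grayscale image."""
    rng = np.random.default_rng(key)                    # the key seeds the pattern
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    marked = image.astype(np.float64) + strength * pattern
    return np.clip(marked, 0, 255).astype(np.uint8)

def detect_watermark(image: np.ndarray, key: int, threshold: float = 2.0) -> bool:
    """Correlate the image with the key's pattern; a high score means it is marked."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    centered = image.astype(np.float64) - image.mean()  # remove the average brightness
    score = float(np.mean(centered * pattern))          # ~strength if marked, ~0 if not
    return score > threshold

if __name__ == "__main__":
    original = np.random.default_rng(0).integers(0, 256, (256, 256), dtype=np.uint8)
    marked = embed_watermark(original, key=42)
    print(detect_watermark(marked, key=42))    # True: the key's pattern is present
    print(detect_watermark(original, key=42))  # False: no pattern, score near zero
```

A mark like this is exactly the kind of fragile signal the experts cited above warn about: rescaling, recompressing or simply regenerating the content can wash the pattern out.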
"This work is bigger than any one company and will require a huge effort across industry, government and civil society," said Meta Global Affairs President Nick Clegg. "Hopefully, this accord can serve as a meaningful step from industry in meeting that challenge."