Most of the press regarding generative AI (artificial intelligence that can create new — a word we use loosely — art and writing) has centered on the Hollywood writers’ and actors’ strikes or how schools will handle AI-written papers. Missing from the conversation is how generative AI can be used to create images, sound clips and videos that appear very real but are, in fact, manufactured.
“Deepfakes” go beyond the Photoshop alterations and air-brushing we’re familiar with from magazines and commercials. As Merriam-Webster defines it, a deepfake is “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.”
Think of the way late-night comedians might piece together video or audio to make it seem like a public figure said something they didn’t: Envision the different backgrounds and outfits flickering across the screen in quick succession, the flow of the sentences disjointed and the background audio constantly changing. It’s funny in part because it’s so obviously fake.
Now take that concept and apply it to a single video clip. Smooth out the audio so it sounds like each word was spoken intentionally in that order. Match the movement of the lips to the words and other motions to the context. Imagine just how real that can look.
That is a deepfake. And it can be very hard to spot.
Because of this, Google announced that political ads appearing on YouTube and other Google platforms that use generative AI to create audio or visuals must contain a prominent disclosure. This is a voluntary step by the internet giant, and one we very much appreciate. But rather than hoping Google follows through and that other platforms follow suit, such disclosure should be a written and enforceable policy across the board.
A bipartisan group of senators has introduced the Protect Elections from Deceptive AI Act, which would amend the Federal Election Campaign Act of 1971 to ban the distribution of “materially deceptive” AI-generated political ads that relate to federal candidates or certain issues and that seek to influence a federal election or raise money.
The senators have the right idea, but a total ban is likely to face legal challenges, and limiting the measure to federal candidates ignores how cheap and ubiquitous generative AI has become: It could just as easily be used to malign state-level or local candidates as federal ones.
Instead, Congress should take its cue from Google and mandate clear, prominent disclosures on any political ad that contains AI-generated material, whether that material is deceptive or not. Such legislation should apply to ads in all media: physical and online still images; streaming, TV and even radio or podcast commercials; and social media or internet pop-ups.
AI-created images, videos and audio are becoming more and more realistic, and it will become harder and harder to distinguish what is real from what has been manipulated or fabricated. Regardless of the creator’s intent, we should be warned when what we’re seeing or hearing is AI-generated.