An FAQ from the future — how we struggled and defeated deepfakes

The Dawn of the Deepfake Era

Cast your mind forward. It’s Nov. 8, 2028, the day after another presidential election. This one went smoothly — no claims of rampant rigging, no significant taint of skulduggery — due in large part to the defeat of deepfakes, democracy’s newest enemy.

Is such a future possible? So far, neither government nor the tech industry has agreed on effective guardrails against deepfakes. But this FAQ (from five years in the future) shows that the events of 2024 may well force the issue — and that a solution is possible.

The Advent of Deepfake Technology

Late in 2022, sophisticated low-cost AI software appeared that made it easy to create realistic audio, video, and photographs — so-called deepfakes. As these generative AI programs rapidly improved, it grew clear that deepfake content would be a danger to democracy.

Political deepfakes — both audio and video — soon emerged: President Biden announcing that Americans would be drafted to fight in Ukraine. A photo of Donald Trump hugging and kissing Dr. Anthony Fauci. Sen. Elizabeth Warren (D-Mass.) telling MSNBC that Republicans shouldn’t be allowed to vote in 2024. Eric Adams, the monolingual mayor of New York, speaking Spanish, Yiddish, and Mandarin in AI-produced robocalls.

The Quest for Regulation

Very quickly, the White House, the European Union, and major technology companies all launched wide-ranging AI regulation proposals that included “watermarking” AI content: inserting an ID label, a permanent bit of computer code, into the digital file of any AI-generated content to identify its artificial origin.
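
To make the idea concrete, here is a minimal sketch of metadata-style labeling, assuming a PNG text chunk stands in for the “watermark” (real proposals lean on cryptographic signing and marks that are far harder to strip); the tag names and helper functions are illustrative, not any standard:

```python
# Toy illustration of labeling AI-generated content via image metadata.
# Requires Pillow: pip install pillow
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Embed a hypothetical 'ai-generated' tag in a PNG's text metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")     # the label itself
    metadata.add_text("ai-generator", generator)  # which model produced the image
    image.save(dst_path, pnginfo=metadata)

def read_labels(path: str) -> dict:
    """Return whatever text metadata the PNG carries (empty dict if none)."""
    return dict(Image.open(path).text)

# Usage: label_as_ai_generated("render.png", "render_labeled.png", "image-model-x")
#        print(read_labels("render_labeled.png"))
```

The fragility is visible even in this toy version: re-encoding, screenshotting, or simply resaving the file without its metadata strips the tag, which is part of why enforcement looked so hard.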

But AI rule-setting proved complex, and labeling exemplified the quandaries: Would AI watermarking be legally required? How would it be enforced? And where would the line be drawn? As early as 2023, some cellphone cameras were already using AI in their image processing; would those photos need labels too?

A Wave of Deepfake Chaos

The largest coordinated deepfake attack in history took place the day after the November 2024 election. Every U.S. social media channel was flooded with phony audio, video, and still images depicting election fraud in a dozen battleground states; the highly realistic content was viewed by millions within hours.

Debunking efforts by media and government were hindered by a steady flow of new deepfakes, mostly manufactured in Russia, North Korea, China, and Iran. The attack generated legal and civil chaos that lasted well into the spring of 2025.

A Breakthrough Emerges

The breakthrough actually came in early 2026 from a working group of digital journalists from U.S. and international news organizations. Their goal was to find a way to keep deepfakes out of news reports, so they could protect what credibility the mainstream media still retained.

It was a logical assignment: Journalists are historically ruthless about punishing their peers for misbehavior, breaking out the tar and feathers for even minor departures from factual rigor. Journalism organizations formed the FAC Alliance — “Fact Authenticated Content” — based on a simple insight: There was already far too much AI fakery loose in the world to try to enforce a watermarking system for dis- and misinformation.

The Rise of FACStamp

Instead, it would be possible to watermark the pieces of content that weren’t deepfakes. And so, on May 1, 2026, the voluntary FACStamp was born. For consumers, FACStamped content displays a small “FAC” icon in one corner of the screen or carries an audible FAC notice.

The newest phones, tablets, cameras, recorders, and desktop computers all include software that automatically inserts the FACStamp code into every piece of visual or audio content as it’s captured, before any AI modification can be applied. This proves that the image, sound, or video was not generated by AI.
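
The article doesn’t spell out the mechanism, but one plausible sketch, assuming each device carries a private signing key whose public half is registered with the (hypothetical) FAC Alliance, is to sign a hash of the raw capture the moment it is recorded, so that any later AI edit breaks the signature:

```python
# Speculative sketch of capture-time authentication. Assumes a per-device Ed25519 key
# provisioned at manufacture; names like stamp_at_capture are illustrative only.
# Requires: pip install cryptography
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def stamp_at_capture(media_bytes: bytes, device_key: Ed25519PrivateKey) -> bytes:
    """Sign a hash of the raw capture; the signature travels with the file."""
    digest = hashlib.sha256(media_bytes).digest()
    return device_key.sign(digest)

def verify_stamp(media_bytes: bytes, signature: bytes, device_pub: Ed25519PublicKey) -> bool:
    """True only if the bytes are exactly what the device originally captured."""
    digest = hashlib.sha256(media_bytes).digest()
    try:
        device_pub.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

# Usage sketch:
device_key = Ed25519PrivateKey.generate()   # stands in for a key burned into the camera
photo = b"...raw sensor bytes..."
stamp = stamp_at_capture(photo, device_key)

assert verify_stamp(photo, stamp, device_key.public_key())                    # genuine capture
assert not verify_stamp(photo + b" ai edit", stamp, device_key.public_key())  # any alteration fails
```

In a full system, editing tools that make only permitted changes would re-sign the result, which is roughly how existing content-provenance efforts such as C2PA chain signatures through an edit history.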

Expansion of FACStamps

It turned out that plenty of people could use the FACStamp. Internet retailers embraced FACStamps for videos and images of their products. Individuals soon followed, using FACStamps to sell goods online — when potential buyers are judging a used pickup truck or secondhand sofa, it’s reassuring to know that the image wasn’t spun out or scrubbed up by AI.

In 2027, the stamp began to appear in social media. Any parent can artificially generate a perfectly realistic image of their happy family standing in front of the Eiffel Tower and post it or email it to envious friends. A FACStamp proves the family has actually been there.

The Right to Reality Act

A bipartisan group of senators and House members plans to introduce the Right to Reality Act when the next Congress opens in January 2029. It will mandate the use of FACStamps in multiple sectors, including local government, shopping sites, and investment and real estate offerings.

Polling indicates widespread public support for the act, and the FAC Alliance has already begun a branding campaign. The tagline: “Is that a FAC?”

Conclusion

The struggle against deepfakes has been arduous, but the emergence of the FACStamp and the prospect of the Right to Reality Act offer hope for a future in which AI-generated misinformation and disinformation lose much of their power. With the collaborative efforts of government, the tech industry, the media, and the public, it is indeed possible to overcome this threat to democracy.

Michael Rogers, an author and futurist, envisions a world where the authenticity of digital content is preserved and safeguarded, bringing about a future where a digital reality aligns more closely with the truth we seek and expect.

Source: Los Angeles Times
