Bolton Law - June 2026

Deepfakes Are Here, and the Law Is Racing to Catch Up

The TAKE IT DOWN Act

One of the most significant federal measures to address deepfakes arrived in 2025 with the passage of the TAKE IT DOWN Act. This legislation directly targets non-consensual intimate imagery, including AI-generated images or videos of an individual designed to appear real. Under the law, distributing or threatening to distribute manipulated intimate media without permission is a criminal offense. The law also requires online platforms to respond quickly when victims report this type of content: once notified, a platform must remove the material within 48 hours and take steps to prevent further circulation. Individuals who violate the law may face substantial penalties, including potential prison sentences.

Consumer Fraud and the FTC Act

The Federal Trade Commission Act (FTC Act) prohibits unfair or deceptive business practices. If a company exaggerates what its AI technology can do or uses synthetic media to mislead consumers, the Federal Trade Commission may step in. Deepfake scams are already becoming more sophisticated: in some reported cases, criminals have used AI-generated audio or video imitating a trusted colleague or executive to convince someone to transfer large sums of money. When synthetic media is used to deceive people for financial gain, it may fall under the purview of fraud enforcement.

Why These Laws Matter

The rapid rise of AI has opened the door to remarkable innovation, but it has also created new avenues for harm. Deepfakes can be used to manipulate public opinion, commit fraud, or damage someone’s personal or professional reputation. As lawmakers establish clearer rules around consent, deception, and digital manipulation, they are beginning to define how AI-generated media should be handled in the legal system.

Next Steps

Additional laws at both the federal and state levels are expected to address rising challenges such as political deepfakes, identity theft, and intellectual property concerns.
For individuals and businesses alike, staying informed about these changes is becoming increasingly important. Synthetic media may be new territory, but the legal system is quickly catching up and working to ensure that powerful technology is used responsibly and never at the expense of someone’s well-being.

Is it real, or is it AI?

Years ago, the idea of artificial intelligence (AI) creating convincing videos of real people sounded like science fiction. Today, it’s reality. With just a few clicks, sophisticated AI tools can generate images, voices, and videos so authentic they can fool even careful viewers. While this technology can be used for harmless fun or creative projects, it also raises serious concerns. Deepfakes can be used to impersonate people, spread misinformation, damage reputations, or create explicit images without consent. As the technology advances, lawmakers across the United States are moving quickly to put guardrails in place.

Federal Laws Addressing Deepfakes

Deepfake legislation is evolving rapidly to keep pace with advances in AI. The primary goal is to prevent misuse while still allowing legitimate innovation. The risks, however, are significant, including:

• Financial scams

• Defamation and misinformation

• Election interference

• Non-consensual explicit imagery

As lawmakers define when synthetic media crosses the line from creative expression into deception or harm, we get closer to protecting both people and institutions.

“With just a few clicks, AI can create videos so realistic they can fool even careful viewers.”

(281) 351-7897
