
New York’s Synthetic Performer Law: What Disclosure Does (and Doesn’t) Protect You From

Published 30 January 2026

If you work in advertising or marketing, you’ve probably heard the buzz. Starting in June 2026, the state of New York will require disclosure when any commercial imagery or video includes so-called “synthetic performers,” human-like figures created using generative AI.

This applies to any advertisement broadcast, displayed or distributed within the state of New York. Obviously, given New York’s status as a global media hub, this effectively creates a national standard for any brand that doesn’t want to run geo-fenced ad campaigns. 

At first glance, the rule sounds straightforward: If the “person” isn’t real, say so.

But in practice, the law raises a much bigger (and often overlooked) question: Even if you disclose that an image is synthetic, is it actually safe to use? 

That’s where many teams are discovering that disclosure alone does not eliminate risk. 


A Quick Recap: What the Synthetic Performer Law Does 

New York’s Synthetic Performer law is designed to promote transparency in advertising. In simple terms, it requires creators to clearly disclose when a human-like figure in a commercial image or video was generated or materially altered using AI, rather than being a real person. 

The law focuses on consumer understanding, not creative technique. It doesn’t ban AI imagery. It doesn’t require special technology. And it doesn’t dictate how images must be created. It simply mandates that consumers are not misled about whether they’re looking at a real human being. 

Importantly, the law may not address other legal issues that could arise from using synthetic imagery, which leads to the part that many people miss. 


Expect to see disclosure messages on AI-generated “humans”.

Disclosure Is Not a Legal Shield 

A disclosure can help satisfy a transparency obligation. It does not automatically protect you from:

  • Copyright infringement claims. 
  • Right-of-publicity disputes. 
  • False endorsement allegations. 
  • Stock image licensing violations. 

In other words, you can fully comply with the Synthetic Performer law and still face serious legal exposure if the image itself is problematic. This distinction matters, especially as AI-generated visuals become more realistic and more imitative. 

Why Now? 

This law was born out of two brewing crises in the creative industry: 

  • Talent Replacement: Some brands began using “AI models” to avoid paying residuals to human actors and, in some cases, to bypass the cost of meeting diversity requirements by simply “generating” diverse faces. 
  • Copyright & Likeness Piracy: There is a growing “grey market” where AI is trained on the likenesses of real influencers or celebrities without their consent. By requiring disclosure, the law makes it harder for brands to pass off a “lookalike” synthetic human as the real thing. 

The law also protects the likeness of deceased individuals from being resurrected as “Synthetic Performers” without the estate’s consent. Additionally, if a brand uses a “Digital Replica” (an AI version of a real person), they must have a written contract specifically authorizing the use of that digital double. 

The law is particularly harsh on synthetic performers used to deliver “testimonials.” If an AI-generated person says, “This cream cured my eczema,” and they aren’t a real person with a real experience, the advertiser could face both New York state violations and Federal Trade Commission “Deceptive Advertising” charges. 

Deceptive advertising means you can’t trust everything you see…

The “Look-Alike” Problem in AI Imagery 

Generative AI systems don’t create images in a vacuum. They produce new images by learning from vast datasets of existing photographs, illustrations, artwork and visual styles. As a result, AI-generated “people” sometimes: 

  • Closely resemble real individuals. 
  • Echo well-known stock photos. 
  • Replicate distinctive poses, lighting or compositions. 
  • Look uncomfortably similar to working models, celebrities or influencers. 

If all the models were fake, how believable would the ads be?

Even if no real person was intentionally copied, substantial similarity can still trigger legal claims.

And here’s the key point: A label that says “this person is synthetic” does not prevent or cure infringement or impersonation risk. Courts and rights-holders care about similarity, not intent or disclosures. 

Where ImageVerifier Fits (and Where It Doesn’t) 

This is where it’s important to be precise. ImageVerifier is not an AI-detection tool. It does not determine whether an image was created by AI. It does not classify images as “synthetic performers.” It does not provide legal advice or compliance determinations. 

What ImageVerifier does do is something different, necessary and complementary. 

ImageVerifier helps identify high-risk visual similarity by flagging website images that closely resemble copyrighted or protected works. If an image (AI-generated or otherwise) appears too close to existing imagery, ImageVerifier warns users that using the image may carry elevated risk. That distinction is intentional. 

Discover visual risks that aren’t always obvious to the human eye. 

The question ImageVerifier helps answer is not: “Is this image AI-generated?” 

It’s: “Is this image too similar, too close for comfort, to something I shouldn’t be using on my site?”
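To make “too similar” concrete: one classic family of techniques for catching near-duplicate images is perceptual hashing. The toy sketch below (a minimal average-hash on an 8×8 grayscale grid, written from scratch; it does not reflect how ImageVerifier actually works internally) shows why an image can still match a protected original even when no pixel is identical:

```python
# Illustrative sketch only: a toy perceptual "average hash" similarity check.
# Real similarity services use far more sophisticated matching; this just
# shows why near-duplicates are detectable even when the pixels differ.

def average_hash(pixels):
    """Hash an 8x8 grayscale grid: one bit per pixel, set if above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def similarity(a, b):
    """Fraction of matching hash bits (1.0 = identical structure)."""
    ha, hb = average_hash(a), average_hash(b)
    matches = sum(1 for x, y in zip(ha, hb) if x == y)
    return matches / len(ha)

original = [[10 * r + c for c in range(8)] for r in range(8)]
brightened = [[p + 40 for p in row] for row in original]            # same scene, lighter
noise = [[(r * 37 + c * 91) % 256 for c in range(8)] for r in range(8)]  # unrelated content

print(similarity(original, brightened))  # 1.0 — brightening preserves the structure
print(similarity(original, noise))       # much lower — different content
```

Note how uniformly brightening the image changes every pixel but leaves the hash, and therefore the similarity score, untouched. That is the kind of resemblance a disclosure label does nothing to cure.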

Why This Matters Under the Synthetic Performer Law 

The new disclosure requirement may encourage more businesses to experiment with synthetic people. After all, if the image is labeled as such, per the new law, users may feel “safe.” 

But synthetic imagery can actually increase similarity risk, because of the “look-alike” phenomenon outlined above. 

Before publishing any image, synthetic or not, business owners and advertisers still need to ask whether it could expose them to claims from photographers, individuals, agencies or stock image libraries. 

That’s where ImageVerifier adds value: as a website image risk-checker, not a disclosure mechanism. 

A Practical, Responsible Workflow 

For teams preparing for June 2026 and beyond, a layered approach may work best. 

  • Creative review: Determine whether an image includes a human-like figure and whether disclosure may be required under applicable laws. 
  • Disclosure decision: Decide how and where to inform consumers if a synthetic performer is used. 
  • ImageVerifier check: Evaluate whether a website image (AI-generated or traditional) closely resembles copyrighted or protected works.
  • Risk assessment: If ImageVerifier flags elevated risk, it gives you an opportunity to reconsider use, modify or delete the image from your site, or consult legal counsel. 
  • Documentation: Retain records showing good-faith review and diligence.
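For teams that automate parts of their publishing pipeline, the layered review above can be sketched as a simple gate. Every name below is hypothetical (none of it is a real ImageVerifier API), and the similarity threshold is purely illustrative:

```python
# Hypothetical sketch of the layered review workflow described above.
# Function names, inputs, and the 0.9 threshold are illustrative only --
# they do not correspond to any real ImageVerifier API.

def review_image(image_name, similarity_score, has_humanlike_figure, is_synthetic):
    """Return the list of actions this image requires before publishing."""
    actions = []
    # Steps 1-2: creative review and disclosure decision.
    if has_humanlike_figure and is_synthetic:
        actions.append("add synthetic-performer disclosure")
    # Steps 3-4: similarity check and risk assessment.
    if similarity_score > 0.9:
        actions.append("elevated similarity risk: reconsider, modify, or consult counsel")
    # Step 5: documentation, regardless of outcome.
    actions.append("log review for audit trail")
    return actions

# A synthetic human image that also scored high on visual similarity:
print(review_image("hero.png", 0.95, True, True))
```

The point of the sketch is the ordering: disclosure and similarity risk are evaluated independently, and the documentation step runs every time, so a clean result still leaves an audit trail.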


This approach recognizes a simple truth: Compliance and safety are related, but not identical. 

What Disclosure Laws Don’t Change 

It’s worth being clear about what the Synthetic Performer law does not do. 

  • It does not redefine copyright standards. 
  • It does not override right-of-publicity laws. 
  • It does not guarantee immunity for AI-generated images. 
  • It does not eliminate the need for image review. 

Disclosure helps consumers understand what they’re seeing. It does not absolve business and website owners or advertisers of responsibility for what they publish.

Transparency Is the Floor, Not the Ceiling 

Transparency doesn’t equal safety.

New York’s law is part of a broader shift toward transparency in digital media. Similar discussions are unfolding at the federal level and in other jurisdictions worldwide. 

As that landscape evolves, one thing is becoming clear: “We disclosed it” is no longer the end of the conversation. Businesses, advertisers, agencies, websites and brands are increasingly expected to show that they took reasonable steps to avoid harm, whether that harm is consumer deception, copyright infringement or reputational damage. 

One of the many things ImageVerifier does is support that effort by highlighting visual risks on your website: risks that aren’t always obvious to the human eye. 

The Bottom Line 

New disclosure laws answer an important question: Are consumers being told the truth about what they’re seeing? 

But they leave another question unanswered: Is this image safe to use at all? 

The Synthetic Performer Law is designed to protect human jobs and consumer trust. For marketers and advertisers, the era of “free” human-like talent is effectively over. 

ImageVerifier doesn’t replace legal judgment or determine regulatory obligations. What it offers is an additional layer of insight: helping teams spot high-risk resemblance before the image attracts unwanted (and potentially expensive) attention.

In an era where images are easier than ever to create — and easier than ever to challenge — that kind of foresight can make all the difference. 

Disclaimer: The information on this website is provided for general information purposes only and does not constitute legal advice. Nothing on this site creates an attorney–client relationship. Copyright laws vary by situation, and you should consult a licensed copyright attorney for advice regarding your specific circumstances.