What the EU AI Act actually requires from content creators
Fifteen weeks. That’s how long you have before the EU’s AI Act transparency obligations become enforceable, and I keep seeing the same thing: content creators assume this law is aimed at tech companies, not at them.
It isn’t.
If you write a newsletter, run a blog, sell on Etsy, or post anything to a public audience that includes EU readers, some version of this law probably applies to you. The question is how much.
This article covers what your obligations actually are, who they apply to (including people outside the EU), where the editorial review exception gives you a legitimate way out, and what you need to do differently for images compared to text. At the end, there’s a draft editorial review policy you can adapt and keep on file as supporting documentation for that exception.
What the law actually says
The relevant part is Article 50 of the EU AI Act, which becomes applicable on 2 August 2026. It covers “deployers” of AI systems, the law’s term for anyone using an AI system in the course of a professional activity, which includes publishing AI-generated content for commercial or reputational purposes.
The core obligations come down to two things. If you publish AI-generated images, audio, or video realistic enough to be mistaken for the real thing (what the law calls a “deepfake,” though the definition is broader than you might think), you have to label it. And if you publish AI-generated text on matters of public interest, you have to label that too, unless a real person reviewed it and takes editorial responsibility for it.
The law also has reach beyond EU borders. If your content is directed at users in the EU, it applies to you, regardless of where you’re based or what domain extension your site uses. A newsletter with German subscribers is covered. An Etsy shop with French customers is covered.
Who this hits hardest
The confusing part of this law is that it draws a line between “professional” and “personal” use. Private individuals generating AI content for purely personal purposes are exempt, with responsibility falling on the platform instead. The moment you’re operating with any commercial or reputational purpose, though, you’re in scope.
In practice, that means:
Substack and newsletter writers. If you cover any topic that could be called “public interest” (news, finance, politics, health, even industry commentary counts), AI-generated text you publish without substantive human review needs a label. The law is deliberate about what “human review” means: it’s not a quick skim. You need to be able to show that a named person reviewed the content and takes responsibility for it. A disclosure buried in your “about” page probably doesn’t cut it.
Medium writers and bloggers. Same situation. The “public interest” question is genuinely ambiguous, and the Commission hasn’t given a clean definition yet. My read is that anything opinion-forming or informational lands in scope, which covers most serious blogging.
Etsy sellers and product creators. This one catches people off guard. If you’re generating product images with AI, those images need to be labeled when shown to EU buyers. The rule on images is actually stricter than the text rule because there’s no editorial review exception for visuals. If AI made it, you disclose it.
The editorial review exception (and what it actually requires)
There’s one major exit ramp for text content: if the AI-generated content “has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication,” labeling isn’t required.
This is good news for most serious writers who use AI as a tool rather than a ghostwriter. If you’re using Claude or ChatGPT to generate a first draft and then genuinely rewriting, editing, and taking responsibility for what goes out, you probably don’t need a label.
I say “probably” because the second draft of the EU’s Code of Practice on this (published in March 2026, with a final version due in June) makes clear that “editorial responsibility” is more than just clicking publish. You need documented procedures showing who reviewed the content, what the review involved, and who is accountable. For a solo newsletter writer, that might just be a written policy in your own records. For a larger publication using AI at scale, it’s a more formal paper trail.
The important thing is that a light touch doesn’t count. The law says “genuine” human review. Correcting a typo and changing a headline is not the same thing as editing.
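If you want a lightweight way to keep that paper trail, here’s a minimal sketch of a review log you could run before hitting publish. To be clear, nothing about this format comes from the Act or the Code of Practice; the field names and the file name are my own invention. The point is only to end up with a dated record of who reviewed what and what the review involved.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Illustrative file name, not anything the Act prescribes.
LOG_FILE = Path("editorial_review_log.jsonl")  # one JSON record per line


def record_review(title: str, reviewer: str, summary_of_changes: str) -> dict:
    """Append a dated editorial-review record for one published piece.

    The fields are illustrative: the goal is simply a timestamped note of
    who reviewed the content, what it was, and what the review involved.
    """
    entry = {
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
        "title": title,
        "reviewer": reviewer,  # the named person taking responsibility
        "summary_of_changes": summary_of_changes,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry


# Example: log a review before publishing (names are placeholders)
record_review(
    title="This week's newsletter issue",
    reviewer="Jane Doe",
    summary_of_changes="Rewrote the intro, fact-checked all dates, cut two AI-drafted sections.",
)
```

A dated, append-only file like this is crude, but it answers exactly the questions the Code of Practice draft cares about: who reviewed the content, what the review involved, and who is accountable.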
The image situation is simpler and also more demanding
For images, audio, and video, the standard is less ambiguous. If AI generated or substantially manipulated it, and it could plausibly be mistaken for real, label it. There’s no editorial review exception for visuals.
The law does carve out space for “artistic, satirical, or fictional” works, where only minimal disclosure is required and labeling shouldn’t “unreasonably impair” the creative work. A clearly stylized AI illustration is different from an AI-generated photo designed to look like a real photograph of a real person.
For Etsy sellers using Midjourney or similar tools for product mockups or listing images: the question is whether those images could be mistaken for photographs of real products. If the answer is yes, you’re likely in scope.
What the labels actually look like
The final Code of Practice, due in June 2026, will specify a uniform “AI” visual icon for the EU market, along with short text like “Generated with AI” or “Manipulated with AI.” The label has to appear at the moment of first exposure, not buried in a footer or terms page.
For text on a web page or newsletter, that probably means something at the top of the article or in the byline. For images, it means on or immediately adjacent to the image. For longer audio or video, repeated disclosure during the content.
The exact design standards aren’t finalized yet. Given that the Code is due in June and enforcement starts in August, there’s not a lot of runway to iterate.
What about the enforcement timeline?
The EU’s Digital Omnibus proposal (adopted in November 2025) does propose simplifications and timeline adjustments for high-risk AI rules, but the transparency obligations in Article 50 are not in that category. As far as the official sources show, 2 August 2026 is the date.
That could change. The Commission has been willing to adjust timelines before. But counting on a deferral that hasn’t been announced is a gamble I wouldn’t take, especially since the penalties for non-compliance can reach €15 million or 3% of global annual turnover (for individuals and small operators, the lower of the two figures applies).
What to actually do before August
If you write about news, politics, health, finance, or anything informational and you use AI for text: decide now whether you’re going to label, or document your editorial review process well enough to rely on the exception. Both are legitimate. Winging it is not.
If you use AI to generate images for a public audience that includes EU readers: plan for a label. The artistic exception is narrow and situation-dependent.
If you’re running an Etsy shop with AI-generated product photography: this one is genuinely underreported in the creator community, and August is closer than it feels.
The Commission’s final guidelines on Article 50 are supposed to arrive in Q2 2026, which means they might land with only weeks to spare before enforcement. I’d rather have a process in place before those guidelines drop than scramble after.
One last thing: the law applies to content “directed at” EU users. That’s intentionally broad language. If you have any meaningful EU readership or customer base, the safer assumption is that you’re in scope.
Draft Policy
If you’ve read this far and your honest answer is “I review and edit everything I publish, AI-assisted or not,” you’re probably in reasonable shape. What you may be missing is the paper trail.
The downloadable policy below is a working template you can adapt to your own workflow. Fill in the sections that apply, delete the ones that don’t, and keep a dated copy somewhere you can find it. It’s not a legal filing, and you don’t need to publish it. Its job is to show, if anyone ever asks, that a real person reviewed the content and takes responsibility for it.
That’s what the law requires. A document that says so, signed by you, stored somewhere with a date on it, is the foundation of your compliance position.
Note that this template is provided for informational purposes only and does not constitute legal advice. It is intended to help independent content creators build a documented editorial process in preparation for the EU AI Act’s Article 50 transparency obligations. The legal position for individual creators may vary depending on their specific activities, audience, and jurisdiction. If you have questions about your compliance obligations, consult a qualified lawyer with expertise in EU technology law.
Policy Download
Click below to download the draft policy as MS Word or PDF.
I’m not a lawyer, and this isn’t legal advice. For anything involving real compliance decisions, talk to someone who practices EU technology law. The official Article 50 text, the Commission’s Code of Practice drafts, and the EU AI Office’s guidance pages are all publicly available and worth reading directly.
