My honest AI manifesto
Where I draw the line, and why I wrote it down.
People who write about AI regulation for a living get asked a particular question more than most: do you actually use this stuff?
The honest answer is yes, every day. I use AI to get through research faster, to test arguments before I publish them, to turn a rough outline into something worth editing. It would be strange to write seriously about AI governance while keeping the tools at arm’s length out of some vague sense of principle.
But there are decisions I won’t hand to AI, and I’ve been thinking for a while that I should say what they are in plain terms rather than leaving readers to guess.
The governance conversation tends to focus on what companies and regulators should do. Less gets written about what individual writers and creators should decide for themselves, quietly, before anyone asks. I think that gap matters. If you’re arguing that AI systems need clear rules and human accountability, it’s worth being able to point to your own.
So I wrote mine down. It covers what I will do with AI, what I won’t, and the reasoning behind each position. The positions aren’t complicated, but having them written down means I can be held to them, which is the whole point of writing anything down.
You can read the full manifesto here:
I’ll update it when my thinking changes. I expect it will.