When writing well becomes suspicious: my AI authenticity problem
My formal writing style, dyslexia, and grammar tools mean I now 'write like AI'. Here's why authenticity policing based on style is an ad hominem fallacy.
I wrote a LinkedIn post last week. Drafted it myself. Reviewed it myself. Edited it myself. Posted it myself.
It got flagged as AI-generated.
This is becoming a recurring problem. And I find it genuinely frustrating - not because I'm precious about my writing, but because the accusation reveals something broken about how we now evaluate authenticity.
The irony of a lifetime's habits
Here's my situation. I have a formal writing style that I developed at university. Structured arguments. Clear topic sentences. Logical flow from paragraph to paragraph. I cannot help it. Twenty years of academic and professional writing have made this my default mode.
I'm also dyslexic, which means I've relied on tools like Grammarly for years - not to rewrite my content, but to catch the spelling and grammatical errors my brain doesn't flag. For someone whose natural writing process involves transposing letters and missing obvious mistakes, a grammar checker isn't a luxury. It's a necessity.
The combination of these two factors - formal training and assistive tooling - means my writing has always been polished. Clean sentences. Correct punctuation. Proper structure.
And now, apparently, that's exactly the tone of AI-generated content.
The thing I spent years developing? The discipline I cultivated through countless essays and technical documents? It now reads as "assembled, not written" - to quote Andrew Pettifer's recent observation about AI-polished LinkedIn posts.
The irony is not lost on me.
The "authenticity policing" problem
What frustrates me most isn't the personal inconvenience. It's the emergence of what I've seen described as "authenticity policing" - dismissing content based purely on how it appears to have been created, without engaging with what it actually says.
Think about this for a moment. Someone reads a post. The grammar is correct. The structure is logical. The arguments flow clearly. And their conclusion is: "This sounds like AI, therefore it's invalid."
That's an ad hominem fallacy wearing a new disguise. Instead of attacking the person making the argument, we're attacking the perceived method of production. The content itself - the ideas, the evidence, the conclusions - becomes irrelevant. Only the stylistic fingerprint matters.
This is intellectually lazy. And it penalises exactly the wrong people.
Who gets caught in this trap?
When we equate "polished" with "inauthentic", we're making a statement about what human writing should look like. Apparently, it should have quirks. Awkward phrases. Grammatical imperfections. Human fingerprints, as the critics say.
But whose fingerprints count as human?
If you're neurotypical and naturally write in a conversational, colloquial style, congratulations. Your authenticity is secure.
If you're dyslexic and need grammar tools to produce clean copy, you're now suspect.
If you learned formal writing through academic training and can't easily switch it off, you're under suspicion.
If English isn't your first language and you use tools to correct errors that native speakers wouldn't make, you're flagged.
The people most likely to be falsely accused of AI usage are often those who've worked hardest to overcome barriers to clear communication. That's not just unfair. It's perverse.
The deeper absurdity
Let me be direct about what I find absurd here.
We've spent decades telling people to write clearly. Use correct grammar. Structure your arguments. Make your point effectively. These were professional virtues.
Now they're evidence of machine assistance.
The logical conclusion of authenticity policing is that deliberately writing badly becomes proof of humanity. We should leave in our typos. Keep our rambling sentences. Preserve our structural chaos. Because apparently, competence is now robotic.
I refuse to participate in this race to the bottom.
What I actually use AI for
Let me be transparent, since transparency seems to be what's demanded.
Yes, I use AI tools. I used one to write this post.
But here's the thing: I fed it my opinions from a LinkedIn comment thread. I reviewed the output. I iterated on it. I'm posting it because it accurately expresses what I think - not because I needed to fill a content calendar.
This is the same process I've always followed: research, draft, review, refine, publish. The difference is that the drafting step now involves AI. The opinions, the arguments, the frustrations - those are mine. The AI helped me articulate them more quickly than I could have typed them out myself.
When I express frustration about being falsely flagged, that's genuine. When I argue that authenticity policing is intellectually lazy, I believe that. The AI didn't invent these positions. It helped me express them.
This is different from AI slop - content generated for the sake of content, with no review, no genuine opinion behind it, no human deciding "yes, this is what I actually think."
The question isn't whether AI was involved. It's whether the result represents something real.
The right question to ask
Instead of asking "Does this sound like AI?", here's a more useful question: "Does this content provide value?"
Does it make a coherent argument? Does it offer useful insight? Does it help me understand something I didn't before? Does it take a stance and defend it?
If yes, who cares how it was produced?
If no, the production method is irrelevant anyway.
The fixation on stylistic authenticity is a distraction from what actually matters: the quality of the thinking, not the texture of the prose.
My awkward position
I don't have a tidy solution here. I'm not going to deliberately introduce errors into my writing to seem more human. That feels like a betrayal of everything I've learned about clear communication.
I'm also not going to stop using grammar tools. They're not optional for me - they're how I produce readable text despite a brain that genuinely struggles with written language.
What I will do is continue writing in my natural voice - which happens to be formal, structured, and clean. If that triggers AI detectors, so be it. The alternative is pretending to be less capable than I am, and I'm not willing to do that.
The real conversation we should be having
The emergence of AI writing tools is genuinely changing how we create and consume content. That's worth discussing seriously.
What's not worth discussing is whether someone's prose style is sufficiently imperfect to qualify as authentically human.
The valuable questions are about disclosure, attribution, and whether AI assistance was appropriate for a given context. The unhelpful questions are about whether someone's writing is "too good" to be real.
We can do better than this.
And for what it's worth, an AI helped me write this post. I gave it my opinions from a LinkedIn thread, reviewed what it produced, and decided it accurately represents what I think.
If that makes the content less valid in your eyes, you've rather proved my point.