**Algorithmic Law - Project**

**You said (Apr 20):** Why does AI writing give that uncanny valley feeling

**Claude responded:**

*Excavated the cognitive gap between human and artificial writing patterns*

The uncanny feeling comes from a specific mismatch: AI writing has the surface features of thought without the underlying trail of thinking.

Good human writing is the visible record of a mind working — noticing, deciding, being surprised, rejecting one framing and landing on another. You can feel the pressure of those choices in the prose. AI reproduces the artifacts of that process (the transitions, the qualifications, the rhetorical turns) but there's nothing behind them. The form of thought without the fact of it.

A few specific symptoms this produces:

**Hollow specificity.** The writing includes details that sound particular but aren't. "Studies have shown" with no study. Descriptions that could apply to anywhere. Compare to how a human writer anchors a scene with some weird, irreducible detail — the exact model of the broken toaster, the way someone's left eye squints more than the right. AI produces specificity-shaped content without the specifics.

**Tonal evenness.** Humans modulate. They get bored mid-paragraph, or excited, or a little hostile. AI maintains an unnaturally even pleasantness, every sentence carrying roughly the same emotional weight. It reads like someone who is never quite tired and never quite thrilled.

**Compulsive tidiness.** Every loop closes. Every tension resolves. Every paragraph restates its topic sentence. Real writing has loose ends, things that don't pay off, ideas that don't quite fit. AI writing is too neat for the rough shape of actual thought.

**The balancing tic.**
"While X, it's also important to consider Y." Reflexive hedging becomes a rhythm rather than a considered move. A human with an opinion leaks it; AI flattens itself across positions, terrified of landing anywhere. Vocabulary that signals intelligence without doing the work. Tapestry, realm, multifaceted, crucial, nuanced, robust. These words are the verbal furniture of thinking — rearrangeable, reusable, slightly generic. The conclusion instinct. AI wants to summarize, restate, wrap up, gesture at broader implications. Good writing often just stops, or lands somewhere oblique. The compulsion to tie things off comes from pattern-matching to the shape of an essay rather than from having somewhere to arrive. Underneath all of this is a writer-shaped hole. Real writing carries evidence of a particular mind — obsessions, grudges, a few recurring moves the writer can't help making. AI writing is written from nowhere, by no one, in the averaged voice of everyone who has ever written online. That absence is what you feel. No one is home, but the house is beautifully staged. You said: Find sources Find sourcesApr 20 **Claude responded: Here's what backs up the claims in that response, grouped by theme.**Organized sources by thematic claims about AI writingOrganized sources by thematic claims about AI writing Here's what backs up the claims in that response, grouped by theme. The "uncanny valley" concept applied to text. The idea originates with roboticist Masahiro Mori in 1970 and has been extended to writing by multiple recent commentators. TextPolish argues that humans are erratic — varying sentence length, using odd metaphors, carrying a specific inner cadence — while AI is hyper-consistent, making it "the literary equivalent of a face that is perfectly symmetrical" TextPolish. 
Robert Thompson's Medium essay describes reading a grammatically perfect, professionally toned email and feeling "like talking to a very polite ghost" (Medium) — the same writer-shaped-hole observation.

**"Form of thought without the fact of it."** This is essentially the stochastic-parrot thesis. Bender, Gebru, McMillan-Major, and Mitchell's 2021 FAccT paper describes a language model as a system that haphazardly stitches together sequences of linguistic forms observed in training data based on probabilistic information about how they combine, without reference to meaning (Pcdn). Stack Overflow's Ryan Donovan captured the same intuition with his colleague's phrase "just product, no struggle," arguing that AI output is an entirely statistical representation of training data and that this is why readers feel disappointed or betrayed when they recognize it (Stack Overflow).

**Tonal evenness / the "retail voice."** TextPolish specifically names this: AI defaults to a customer-service tone that is overly helpful, excessively neutral, and devoid of sharp edges, which triggers the reader's fake detector (TextPolish). Questie AI's piece adds a related point — emotional flatness masked by emotional words, where the same "excited" enthusiasm appears regardless of context, like a waiter who calls everything "awesome" (Questie).

**Vocabulary that signals intelligence without doing work.** This is the best-documented claim empirically. Kobak et al. analyzed over 15 million PubMed abstracts from 2010–2024 and found that LLMs caused an abrupt, unprecedented increase in the frequency of certain style words, estimating that at least 13.5% of 2024 abstracts were processed with LLMs — reaching 40% in some subcorpora (arXiv). Juzek and Ward's 2025 COLING paper identifies 21 "focal words" — including delve, intricate, and underscore — whose increased occurrence in scientific abstracts is likely the result of LLM usage (ACL Anthology).
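At its core, the frequency analysis behind this line of research reduces to counting marker words per token across corpora from different periods. A minimal sketch, assuming a hypothetical three-word focal list (Juzek and Ward's actual list has 21 words; the toy abstracts below are invented, not their data):

```python
import re

# Illustrative subset of focal words; the cited papers use larger,
# empirically derived lists.
FOCAL_WORDS = {"delve", "intricate", "underscore"}

def focal_rate(abstracts):
    """Occurrences of focal words per 1,000 tokens in a list of abstracts."""
    tokens = [t for a in abstracts for t in re.findall(r"[a-z]+", a.lower())]
    hits = sum(1 for t in tokens if t in FOCAL_WORDS)
    return 1000 * hits / max(len(tokens), 1)

# Invented stand-ins for pre- and post-LLM abstracts.
pre_llm = ["We examine the effect of X on Y using survey data."]
post_llm = ["We delve into the intricate dynamics that underscore "
            "the effect of X on Y."]

print(focal_rate(pre_llm), focal_rate(post_llm))
```

The real studies compare such rates year over year against the pre-2023 baseline; an abrupt jump far beyond historical drift is what lets them attribute the excess to LLM processing rather than ordinary vocabulary change.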
In an experiment emulating RLHF procedures, Juzek and Ward found participants reacted much more negatively to "delve" than to other buzzwords (Florida State University).

**Hollow specificity and predictability.** Compare the Cloud's piece lists several factors contributing to the uncanny effect: predictability from clichéd patterns, a mechanical or insincere feeling from the lack of genuine emotional depth, and inconsistencies in tone that feel unnatural (Compare the Cloud).

**Pattern matching rather than reader modeling.** TextPolish makes this point directly: humans constantly model the reader's mind — wondering whether a joke will land or a phrase will offend — while AI models the statistical probability of the next word instead (TextPolish).

**The "mismatch" explanation for why it feels visceral.** A recent LocalScribe piece frames it as a repeated-exposure problem: natural language invites expectations of understanding, warm tone invites expectations of care, and AI keeps presenting these cues and then failing to satisfy them, so the repeated breaks in expectation make the aversion cumulative and visceral (LocalScribe).

A few caveats worth noting: the marker-word research (delve, intricate, etc.) is about detection, not about why AI writing feels hollow — those are related but distinct claims. The "stochastic parrot" framing has been contested; Yoav Goldberg and others have pushed back on parts of the original paper. And the uncanny-writing literature outside the Kobak and Juzek–Ward papers is mostly essays and blog posts rather than empirical studies — the feeling is well described but less well measured than the vocabulary fingerprint.