**Algorithmic Law - Project**

**You said** (Apr 20)**:** How should this project go? Other than the quiz, I want it to talk about AI and the signs, why it comes across as weird, etc. What'd you think? Let's brainstorm.

**Claude responded:** *Prepared conversational brainstorm on AI and cultural perceptions*

Happy to brainstorm. Let me throw out how I'd actually think about this, because "signs of AI writing" is a richer topic than most people give it credit for.

**The core thesis worth leading with**

Most "AI writing sounds weird" takes focus on surface tells (em dashes, "delve," "tapestry"). But the deeper reason it sounds weird is structural: language models are trained to produce text that's plausible to the widest possible audience, which means they default to the average of how something might be written. Real writing has a voice because a specific person made specific choices. AI writing often reads like it was written by a committee of cautious editors — technically fine, oddly hollow.

If you lead with that frame, all the specific tells become examples of the same underlying thing, rather than a random list of pet peeves.

**A structure that could work beyond the quiz**

You could break the project into three layers that each answer a different question:

**The "what"** — a taxonomy of tells.
I'd group them rather than list them flat, because that's more memorable:

• Vocabulary tells (the overused words, the weirdly formal register)
• Structural tells ("It's not just X — it's Y," the obligatory three-part list, the tidy wrap-up paragraph, the "In today's fast-paced world" opener)
• Rhetorical tells (hedging everything, refusing to have opinions, false balance, compulsive positivity)
• Formatting tells (bolded keywords sprinkled at random, headers on things that don't need headers, bullet points for ideas that should flow as sentences)

**The "why"** — this is where most treatments of the topic are weakest and where yours could stand out. Why do these patterns emerge? A few threads worth pulling:

• Training optimizes for being broadly acceptable, not interesting. Specificity is risky; vagueness is safe.
• RLHF (training on human preference) pushes models toward a kind of customer-service register — enthusiastic, helpful, never confrontational.
• Models don't have lived experience, so metaphors feel sourced from a library rather than noticed in the world. Compare "the negotiation was tense" (AI) to a real writer describing the specific silence after someone said the number.
• No real stakes. A human writer has to commit to an argument because they're the one staking their reputation. An AI has nothing to lose, so it hedges.

**The "so what"** — the interesting implications. A few angles:

• The tells shift over time. "Delve" was a dead giveaway two years ago; now the labs have trained it down and new tells emerge. Any list is a snapshot.
• Some "AI tells" are just "bad writing tells." A lot of what makes AI text sound AI is what would make a bored intern's text sound bad. The overlap is worth examining.
• There's a meta-loop: as humans learn the tells, some humans start avoiding em dashes and tricolons to seem less AI-like, which is kind of insane if you think about it.
• Detection is getting harder, not easier. The best AI writing now passes most humans.
The reliable tells are fading.

**Things that would make it feel non-generic**

Side-by-side examples, not descriptions. "AI uses vague language" is forgettable. Showing a paragraph of AI-generated writing about grief next to Joan Didion is devastating.

A section on when AI writing actually works well — summaries, instructions, boilerplate. The weirdness is most pronounced when the task calls for voice, and least pronounced when it calls for neutrality. That nuance will make the whole project more credible.

An honest note about the arms race. Anything you write today will be partly wrong in a year. Naming that upfront is disarming.

**One thing to decide early**

Is your audience "people who want to detect AI writing" or "writers who want to not sound like AI"? The material overlaps but the emphasis differs — detectors care about reliable signals, writers care about what to do instead. You can do both, but picking a primary lens will sharpen everything.

Want me to go deeper on any of these? I could draft specific examples for the taxonomy, or dig into the training dynamics, or help you think through the quiz so it ties into the rest.

**You said** (Apr 20)**:** For my purposes, I would like to do both.

**Claude responded:** *Architected integrated framework balancing detection and writing guidance*

Good — and doing both is actually easier than it looks if you stop treating them as separate sections and start treating each tell as a two-sided coin.

Here's what I mean. Instead of "Part 1: Detection / Part 2: Writing advice," each sign of AI writing gets treated the same way:
The signs are `<article class="sign">` blocks — copy one, paste it below, and change the number and text. To change quiz questions, find the `quizData` array in the `<script>` section near the bottom — each question is an object with `passage`, `answer` (either "AI" or "Human"), and `explanation`. Add or remove entries freely.
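Based on that description, a `quizData` entry might look like the sketch below. This is a minimal illustration only — the passage and explanation text are placeholders, not content from the actual site:

```javascript
// Hypothetical quizData entries matching the described shape: a passage
// string, an answer that is exactly "AI" or "Human", and an explanation
// shown after the reader guesses. All prose here is placeholder text.
const quizData = [
  {
    passage: "In today's fast-paced world, effective communication is more important than ever.",
    answer: "AI",
    explanation: "Stock opener plus a vague, hedged claim no specific person would commit to."
  },
  {
    passage: "She said the number and nobody spoke for a long time.",
    answer: "Human",
    explanation: "A specific, observed detail with no tidy wrap-up."
  }
];
```

Adding a question is just appending another object with those three keys; removing one is deleting its object.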
**Getting it online**
For free, no-signup hosting: rename the file to index.html, then drop it onto Netlify Drop — it'll give you a live URL in about ten seconds. If you want something more permanent, free options in order of how much setup they need: GitHub Pages (free, needs a GitHub account, good if you want version control), Cloudflare Pages (free, slightly more technical), Vercel (free, polished).
If this is for a school project and just needs to be viewable, Netlify Drop is the fastest path. If it's for something ongoing, I'd use GitHub Pages so you can update it easily.
Want me to add anything before you ship it — more quiz questions, a seventh sign, a meta note about how this site was made, anything else?

*Artifact: Signs of AI writing (Code · HTML)*
**You said** (Apr 20)**:** the quiz should be this quiz. *Attached: ai_writing_quiz.jsx*
**Claude responded:** *Architected unified site redesign around existing quiz aesthetic*

Got it — your quiz is better than mine (the categorized tells with hover tooltips and color-coded dots are really nice). Let me port it into the site and unify the whole thing around the aesthetic you already established in the quiz.

*Created 2 files, ran a command, read a file*
Done. Your quiz is now fully integrated — all six passages, the color-coded categories, hover tooltips on the highlighted segments, the tells-section breakdown, the Gorrie quote at the end, everything.
**What changed from the previous version**
I rewrote your React component as vanilla JavaScript so it runs in a single HTML file with no build step. The behavior is identical to the JSX: alternating passages, highlights appearing after you guess, the four color-coded categories, the green/red verdict panel, the quiz-again button.
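As a rough illustration of that no-build-step pattern (the variable and function names here are assumptions, not the file's actual code): plain mutable state replaces React's `setState`, and each guess scores the answer and advances to the next passage, wrapping around:

```javascript
// Sketch of the ported pattern with hypothetical names: quiz state lives in
// plain variables, and guess() both scores the choice and advances the index,
// wrapping at the end to give the alternating-passages behavior.
let current = 0;
let score = 0;

function guess(passages, choice) {
  const correct = choice === passages[current].answer;
  if (correct) score += 1;
  current = (current + 1) % passages.length; // advance, wrap at the end
  return correct;
}
```

In the real file a render step would then redraw the passage, highlights, and verdict panel from that state.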
I also pulled the whole site's aesthetic into alignment with what you'd already designed in the quiz — same palette (#F2EAD8 paper, #8B2A1F oxblood, #7a5a42 muted brown), same font stack (Fraunces for display, Source Serif 4 for body, JetBrains Mono for metadata). The Signs and Sources tabs now match the quiz instead of drifting into a slightly different design.
**One small addition I made — tell me if you want it removed**
On the Signs tab, the first four signs now have a small colored dot next to them showing which quiz category they map to (vocabulary → yellow, rhetoric → pink, puffery → blue, stock phrases → green). It ties the two tabs together so readers see the same schema in both places. If that feels like too much cross-referencing, I can pull the dots out.
**Editing the quiz later**
The quiz data lives in the `PASSAGES` array in the `<script>` section at the bottom of the file. To add a passage, copy one of the existing objects and paste it in. For an AI passage, break the text into segments where tell-containing chunks get a `c` (category) and `x` (explanation). For a human passage, just use `text` plus a `humanNote`. The source placeholders are there so you can swap in real citations once you have them — they'll render automatically under the verdict.
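A sketch of what those entries might look like, based only on the keys named above. The `segments` container name and the per-segment `t` text key are my assumptions; `c`, `x`, `text`, and `humanNote` come from the description, and all the prose is placeholder:

```javascript
// Hypothetical PASSAGES entries following the described schema. "c" is the
// tell category, "x" the tooltip explanation; human passages carry plain
// text plus a humanNote. Segment structure and key names are assumed.
const PASSAGES = [
  {
    answer: "AI",
    segments: [
      { t: "The results were " },
      { t: "truly remarkable", c: "puffery", x: "Empty intensifier that commits to nothing specific." },
      { t: ", offering " },
      { t: "a rich tapestry of insights", c: "vocabulary", x: "Classic overused AI metaphor." },
      { t: "." }
    ]
  },
  {
    answer: "Human",
    text: "The printer jammed twice before the meeting, which felt about right.",
    humanNote: "A small, specific, slightly bitter observation with no tidy conclusion."
  }
];
```

Segments without a `c` render as plain text; segments with one get the color-coded highlight and hover tooltip after the guess.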
**Hosting**
Same as before — for the fastest path, rename the file to index.html and drop it onto Netlify Drop. You'll have a live URL in under a minute.
Want me to help with anything else — writing real passages to replace the placeholders, drafting source citations for the bibliography, or adding a seventh sign?

*Artifact: Signs of AI writing (Code · HTML)*
You said: you see that A, it should be an AI in that font
you see that A, it should be an AI in that fontApr 21
**Claude responded:** *Deliberated drop cap consistency across multiple interface sections*
Fixed. The drop cap now renders "AI" together in Fraunces italic oxblood — so it reads "AI writing reads like AI writing..." instead of the "A I" split you were seeing.
Technical note for your own future edits: the old version used the `::first-letter` CSS pseudo-element, which can only style a single character. To do multi-letter drop caps you have to wrap the letters in an actual span. The new CSS is a `.dropcap` class you can use anywhere — I already applied it to both the Signs and Sources tab ledes.

*Artifact: Signs of AI writing (Code · HTML)*
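A minimal sketch of that span-based approach. The `.dropcap` class name is from the note above; the exact property values (size, float, padding) are illustrative guesses, with the font and color taken from the palette described earlier:

```html
<style>
  /* Multi-letter drop cap: ::first-letter only styles one character,
     so wrap the letters in a span and style the class instead. */
  .dropcap {
    font-family: "Fraunces", serif;
    font-style: italic;
    color: #8B2A1F;      /* the oxblood from the site palette */
    font-size: 3.2em;    /* illustrative sizing, not the file's values */
    line-height: 0.8;
    float: left;
    padding-right: 0.08em;
  }
</style>
<p><span class="dropcap">AI</span> writing reads like AI writing…</p>
```

The same class works for any lede, which is why it could be applied to both the Signs and Sources tabs.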