**Algorithmic Law - Project**

**You said:** How should this project go? Other than the quiz, I want it to talk about AI and the signs of it, why it comes across as weird, etc. What do you think? Let's brainstorm. *(Apr 20)*

**Claude responded: Happy to brainstorm.**

*Prepared conversational brainstorm on AI and cultural perceptions*

Let me throw out how I'd actually think about this, because "signs of AI writing" is a richer topic than most people give it credit for.

**The core thesis worth leading with**

Most "AI writing sounds weird" takes focus on surface tells (em dashes, "delve," "tapestry"). But the deeper reason it sounds weird is structural: language models are trained to produce text that's plausible to the widest possible audience, which means they default to the average of how something might be written. Real writing has a voice because a specific person made specific choices. AI writing often reads like it was written by a committee of cautious editors — technically fine, oddly hollow.

If you lead with that frame, all the specific tells become examples of the same underlying thing, rather than a random list of pet peeves.

**A structure that could work beyond the quiz**

You could break the project into three layers that each answer a different question:

**The "what" — a taxonomy of tells.**
I'd group them rather than list them flat, because that's more memorable:

• Vocabulary tells (the overused words, the weirdly formal register)
• Structural tells ("It's not just X — it's Y," the obligatory three-part list, the tidy wrap-up paragraph, the "In today's fast-paced world" opener)
• Rhetorical tells (hedging everything, refusing to have opinions, false balance, compulsive positivity)
• Formatting tells (bolded keywords sprinkled at random, headers on things that don't need headers, bullet points for ideas that should flow as sentences)

**The "why"** — this is where most treatments of the topic are weakest and where yours could stand out. Why do these patterns emerge? A few threads worth pulling:

• Training optimizes for being broadly acceptable, not interesting. Specificity is risky; vagueness is safe.
• RLHF (training on human preference) pushes models toward a kind of customer-service register — enthusiastic, helpful, never confrontational.
• Models don't have lived experience, so metaphors feel sourced from a library rather than noticed in the world. Compare "the negotiation was tense" (AI) to a real writer describing the specific silence after someone said the number.
• No real stakes. A human writer has to commit to an argument because they're the one staking their reputation. An AI has nothing to lose, so it hedges.

**The "so what"** — the interesting implications. A few angles:

• The tells shift over time. "Delve" was a dead giveaway two years ago; now the labs have trained it down and new tells emerge. Any list is a snapshot.
• Some "AI tells" are just "bad writing tells." A lot of what makes AI text sound AI is what would make a bored intern's text sound bad. The overlap is worth examining.
• There's a meta-loop: as humans learn the tells, some humans start avoiding em dashes and tricolons to seem less AI-like, which is kind of insane if you think about it.
• Detection is getting harder, not easier. The best AI writing now passes most humans.
The reliable tells are fading.

**Things that would make it feel non-generic**

• Side-by-side examples, not descriptions. "AI uses vague language" is forgettable. Showing a paragraph of AI-generated writing about grief next to Joan Didion is devastating.
• A section on when AI writing actually works well — summaries, instructions, boilerplate. The weirdness is most pronounced when the task calls for voice, and least pronounced when it calls for neutrality. That nuance will make the whole project more credible.
• An honest note about the arms race. Anything you write today will be partly wrong in a year. Naming that upfront is disarming.

**One thing to decide early**

Is your audience "people who want to detect AI writing" or "writers who want to not sound like AI"? The material overlaps but the emphasis differs — detectors care about reliable signals, writers care about what to do instead. You can do both, but picking a primary lens will sharpen everything.

Want me to go deeper on any of these? I could draft specific examples for the taxonomy, or dig into the training dynamics, or help you think through the quiz so it ties into the rest.

**You said:** For my purposes, I would like to do both. *(Apr 20)*

**Claude responded: Good — and doing both is actually easier than it looks if you stop treating them as separate sections and start treating each tell as a two-sided coin.**

*Architected integrated framework balancing detection and writing guidance*

Here's what I mean. Instead of "Part 1: Detection / Part 2: Writing advice," each sign of AI writing gets treated the same way:

  1. Name the tell
  2. Show it in the wild (example)
  3. Explain why it happens (the training/incentive reason)
  4. How to spot it (for detectors)
  5. What the fix actually is (for writers)

The structure is the same every time, so it's easy to follow, and it naturally serves both audiences without making one feel like an afterthought.

**Where this gets genuinely useful is in the "fix" column**

Most writing advice on this topic is surface-level — "don't use em dashes, don't start with 'In today's world.'" That advice is close to useless because it treats symptoms. A human writer who carefully avoids em dashes still sounds like AI if they're hedging every claim and summarizing at the end of every paragraph. The real fixes are deeper, and they're interesting to write about:

• Vagueness → specificity. AI writes "the negotiation was tense." A human writes "nobody spoke for eleven seconds after he said the number." The fix isn't a vocabulary swap; it's a commitment to noticing.
• Hedging → opinion. AI won't tell you whether a book is good. It'll tell you the book "has been praised by some while others have raised concerns." The fix is being willing to be wrong in public.
• Uniform rhythm → varied rhythm. AI tends to produce sentences of similar length and structure. Real prose has short punchy sentences next to long winding ones. Read any paragraph aloud — if it sounds metronomic, it reads as AI.
• Wrap-up compulsion → trust the reader. AI ends almost every response with a summary or gesture toward broader significance. Human writers let paragraphs end where the thought ends.
• Safety → friction. AI avoids anything awkward, ugly, or too personal. Real writing has texture — a weird word choice, an unflattering admission, a joke that doesn't quite land. The fix is tolerating your own roughness.

That last one is the big one, and it's worth its own section. A lot of what makes AI writing feel AI is that it's been sanded. Every surface is smooth. Real writing has burrs.

**Where the quiz fits**

I'd place it as a pivot between the two modes. You spend the first half teaching people to recognize tells. The quiz tests that.
Then you turn the camera around: "Okay, now that you can see them, here's what to do about them in your own writing." That way the quiz is load-bearing — it's the moment of competence that earns the transition, not just a fun activity tacked on.

**One thing worth including that most people skip**

A section on how detection and avoidance are in tension. The more people learn the tells, the more some writers over-correct and strip their prose of anything that could look AI — which often means stripping out em dashes, tricolons, and semicolons they'd have used anyway. You end up with humans writing worse to prove they're human. That's a genuinely weird cultural moment worth naming.

Want me to start drafting any of these sections, or sketch what the taxonomy looks like once you flesh it out into the full two-sided structure?

**You said:** So how do I put this all onto a website? I want the website to have three tabs: the sources, the quiz, and the signs with how it sounds weird. *(Apr 20)*

**Claude responded: I'll build you a working starter site you can customize, then give you the hosting options.**

*Architected editorial website with interactive quiz and typography-driven design*

Let me get it laid out first.

*Devised accessible hosting guidance for non-technical user*

Download it and open it in any browser — everything works out of the box. Here's what's in it and how to get it live.

**What's in the file**

One self-contained HTML file with all three tabs working.
I went for an editorial/print-magazine aesthetic on purpose — it would have been ironic to build a site about AI writing in the same sanded-smooth style the content critiques. So: cream paper background, oxblood accent, Fraunces serif italics for display, a subtle paper-grain texture.

The Signs tab has the six tells we discussed, each with a Machine/Human side-by-side example and the three-part breakdown (why it happens, how to spot it, how to fix it). The Quiz tab has six working questions — each shows you a passage, you guess AI or Human, and the feedback explains exactly which tells are present. It scores you at the end and lets you retake. The Sources tab has real starter citations plus placeholder slots you can fill in.

**Editing the content**

Open the file in any text editor (even Notepad works). To change a sign or add new ones, find the `<article class="sign">` blocks — copy one, paste it below, change the number and text. To change quiz questions, find the `quizData` array in the `<script>` section near the bottom — each question is an object with `passage`, `answer` (either "AI" or "Human"), and `explanation`. Add or remove entries freely.

**Getting it online**

For free, no-signup hosting: rename the file to `index.html`, then drop it onto Netlify Drop — it'll give you a live URL in about ten seconds. If you want something more permanent, free options in order of how much setup they need: GitHub Pages (free, needs a GitHub account, good if you want version control), Cloudflare Pages (free, slightly more technical), Vercel (free, polished).

If this is for a school project and just needs to be viewable, Netlify Drop is the fastest path. If it's for something ongoing, I'd use GitHub Pages so you can update it easily.
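For reference, the `quizData` shape described above can be sketched like this. This is a minimal, hypothetical pair of entries: the field names (`passage`, `answer`, `explanation`) come from the description, but the passage text and the validation helper are illustrative, not taken from the actual site file.

```javascript
// Minimal sketch of the quizData shape described above.
// The passages here are illustrative examples, not the site's real content.
const quizData = [
  {
    passage: "In today's fast-paced world, effective communication is more important than ever.",
    answer: "AI",
    explanation: "Stock opener plus a claim so broad it commits to nothing."
  },
  {
    passage: "Nobody spoke for eleven seconds after he said the number.",
    answer: "Human",
    explanation: "One specific, observed detail; no hedge, no wrap-up."
  }
];

// A small sanity check worth running before shipping edits:
// every entry needs all three fields, and answer must be "AI" or "Human".
function isValidQuestion(q) {
  return typeof q.passage === "string" &&
         typeof q.explanation === "string" &&
         (q.answer === "AI" || q.answer === "Human");
}

console.log(quizData.every(isValidQuestion)); // → true
```

A check like this catches the most common editing mistake (a typo in `answer`, which would make the quiz mark every guess wrong for that question).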
Want me to add anything before you ship it — more quiz questions, a seventh sign, a meta note about how this site was made, anything else?

**You said:** the quiz should be this quiz *(attached: ai_writing_quiz.jsx)* *(Apr 20)*

**Claude responded: Got it — your quiz is better than mine (the categorized tells with hover tooltips and color-coded dots are really nice).**

*Architected unified site redesign around existing quiz aesthetic*

Let me port it into the site and unify the whole thing around the aesthetic you already established in the quiz.

*Created 2 files, ran a command, read a file*

Done. Your quiz is now fully integrated — all six passages, the color-coded categories, hover tooltips on the highlighted segments, the tells-section breakdown, the Gorrie quote at the end, everything.

**What changed from the previous version**

I rewrote your React component as vanilla JavaScript so it runs in a single HTML file with no build step. The behavior is identical to the JSX: alternating passages, highlights appearing after you guess, the four color-coded categories, the green/red verdict panel, the quiz-again button.

I also pulled the whole site's aesthetic into alignment with what you'd already designed in the quiz — same palette (`#F2EAD8` paper, `#8B2A1F` oxblood, `#7a5a42` muted brown), same font stack (Fraunces for display, Source Serif 4 for body, JetBrains Mono for metadata). The Signs and Sources tabs now match the quiz instead of drifting into a slightly different design.
**One small addition I made — tell me if you want it removed**

On the Signs tab, the first four signs now have a small colored dot next to them showing which quiz category they map to (vocabulary → yellow, rhetoric → pink, puffery → blue, stock phrases → green). It ties the two tabs together so readers see the same schema in both places. If that feels like too much cross-referencing, I can pull the dots out.

**Editing the quiz later**

The quiz data lives in the `PASSAGES` array in the `<script>` section at the bottom of the file. To add a passage, copy one of the existing objects and paste it in. For an AI passage, break the text into segments where tell-containing chunks get a `c` (category) and `x` (explanation). For a human passage, just use `text` plus a `humanNote`. The source placeholders are there so you can swap in real citations once you have them — they'll render automatically under the verdict.

**Hosting**

Same as before — for the fastest path, rename the file to `index.html` and drop it onto Netlify Drop. You'll have a live URL in under a minute.

Want me to help with anything else — writing real passages to replace the placeholders, drafting source citations for the bibliography, or adding a seventh sign?

**You said:** you see that A, it should be an AI in that font *(Apr 21)*

**Claude responded: Fixed.**

*Deliberated drop cap consistency across multiple interface sections*

The drop cap now renders "AI" together in Fraunces italic oxblood — so it reads "AI writing reads like AI writing..." instead of the "A I" split you were seeing.

Technical note for your own future edits: the old version used the `::first-letter` CSS pseudo-element, which can only style a single character. To do multi-letter drop caps you have to wrap the letters in an actual span.
The new CSS is a `.dropcap` class you can use anywhere — I already applied it to both the Signs and Sources tab ledes.
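For reference, the `PASSAGES` structure described earlier can be sketched like this. Only `c`, `x`, `text`, and `humanNote` are named in the conversation; the other field names here (`answer`, `segments`) and the category keys are illustrative guesses, and the passages are invented examples, not the site's real content.

```javascript
// Hypothetical sketch of the PASSAGES shape described in the conversation.
// Only c, x, text, and humanNote are confirmed field names; "answer",
// "segments", and the category keys are guesses for illustration.
const PASSAGES = [
  {
    answer: "AI",
    segments: [
      { text: "The findings offer " },
      { text: "a rich tapestry of insights", c: "vocabulary", x: "\"Tapestry\" is a classic vocabulary tell." },
      { text: " that are " },
      { text: "truly transformative", c: "puffery", x: "Empty intensifier welded to a buzzword." },
      { text: "." }
    ]
  },
  {
    answer: "Human",
    text: "The findings were fine. Two of the twelve participants quit halfway through.",
    humanNote: "Commits to a flat judgment, then an unflattering specific."
  }
];

// An AI passage's plain text can be reassembled from its segments,
// which is handy when checking a new passage reads naturally end to end:
const fullText = PASSAGES[0].segments.map(s => s.text).join("");
console.log(fullText);
// → "The findings offer a rich tapestry of insights that are truly transformative."
```

Splitting the passage into segments this way is what lets the quiz highlight only the tell-bearing chunks and attach a tooltip (`x`) to each one after the reader guesses.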