**Algorithmic Law - Project**

**You said:** what are signs of AI writing
https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
https://www.deadlanguagesociety.com/p/rhetorical-analysis-ai
how can i expand and make this project cooler? (Apr 20)

**Claude responded:** *Brainstormed eight concrete project expansions spanning empirical, rhetorical, and ethical angles*

Two solid starting sources. The Wikipedia page gives you a field guide to surface tells (vocabulary, punctuation, stock phrases), and Gorrie reframes those same tells as classical rhetorical techniques overused without taste. That's a good skeleton already. Here are directions to thicken it up:

**Build your own little detector.** A short Python script that scores text for AI-likeness: count em dashes per 1,000 words, flag phrases from a watchlist ("delve," "tapestry," "rich history," "it's worth noting"), measure sentence-length variance (AI tends toward uniform lengths), tally tricolons. Then test it on known-human and known-AI samples. The interesting part is where it fails: what it flags that's actually fine, and what it misses.

**Run the same prompt through multiple models.** Ask GPT-5, Claude, Gemini, and Llama to write the same cover letter or product description. The differences between AI "voices" are real and underdiscussed in the main sources. Gemini, GPT, and Claude each have different tics. You'd essentially be doing stylometry on robots.

**Track how the tells are evolving.** The Wikipedia page mentions GPT-5.1 actively suppressing em dashes now that they're infamous. This is an arms race worth charting as a timeline: 2023 = weird stiff formality, 2024 = peak em-dash era, 2025 = corporate-helpful register, 2026 = ? Show dated screenshots or outputs for each phase.

**The false-positive problem.** This gives the project real stakes.
Students get accused of using AI, especially non-native English speakers whose natural writing patterns resemble AI output. There's published research on this (Liang et al., Stanford, showed GPT detectors falsely flag non-native writing at very high rates). Turnitin and similar tools have well-documented disasters. This adds a social dimension rather than pure stylistics.

**Extend Gorrie's classical-rhetoric mapping.** He covers parallelism, antithesis, and tricolon. There's a much longer list available: anaphora (repeating sentence openings), isocolon (equal-length phrases), polysyndeton (overuse of "and"), chiasmus, epistrophe. Build a chart: AI tell → Greek name → famous human example → AI example. That turns your project into a bridge between a rhetoric class and a computer science class, which is a rare combination.

**Take the "taste" question seriously as its own section.** Gorrie's real punchline is that LLMs have the techniques but lack taste. What is taste? Can it be learned from more data, or does it require something models structurally can't have (judgment about situation, audience, restraint)? This is a genuine open question and worth a short essay.

**Make an interactive "AI or Human?" quiz.** Ten to twenty short paragraphs, some human and some AI, including tricky cases: very formal human writing, AI writing that's been human-edited, non-native speakers writing naturally. Have friends take it. Record scores. The confusion matrix is the interesting output.

**Build a museum of canonical blunders.** There are famous AI-in-the-wild incidents worth collecting: the published paper with "Certainly, here is a rewrite of your introduction" left in the text, the Amazon products named "I'm sorry, I cannot fulfill that request," journalists caught fabricating sources, academic papers with hallucinated citations. A little gallery of these is genuinely fun and teaches the tells concretely.

You probably don't want to do all of these.
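For a concrete sense of the detector idea above, here is a minimal JavaScript sketch of the four heuristics described (em dash rate, watchlist hits, sentence-length variance, tricolons). The watchlist phrases come from the conversation; the regexes, function name, and raw-count output shape are illustrative assumptions, not a settled implementation:

```js
// Sketch of an AI-likeness scorer. Watchlist phrases are from the
// conversation; regexes and output shape are illustrative assumptions.
const WATCHLIST = ["delve", "tapestry", "rich history", "it's worth noting"];

function score(text) {
  const words = text.split(/\s+/).filter(Boolean);
  const sentences = text.split(/[.!?]+/).map(s => s.trim()).filter(Boolean);

  // Em dashes per 1,000 words
  const emDashes = (text.match(/—/g) || []).length;
  const emDashRate = words.length ? (emDashes / words.length) * 1000 : 0;

  // Watchlist phrase hits (case-insensitive substring match)
  const lower = text.toLowerCase();
  const watchlistHits = WATCHLIST.filter(p => lower.includes(p)).length;

  // Sentence-length variance (uniform lengths => low variance, an AI tell)
  const lengths = sentences.map(s => s.split(/\s+/).length);
  const mean = lengths.reduce((a, b) => a + b, 0) / (lengths.length || 1);
  const variance =
    lengths.reduce((a, b) => a + (b - mean) ** 2, 0) / (lengths.length || 1);

  // Crude tricolon heuristic: "x, y, and z" patterns
  const tricolons = (text.match(/\w+, \w+, and \w+/g) || []).length;

  return { emDashRate, watchlistHits, sentenceLengthVariance: variance, tricolons };
}
```

Note this returns raw counts and rates rather than a single verdict; combining them into one score (and picking thresholds) is exactly the part worth experimenting with, and where the interesting failures show up.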
A strong combination would be the detector plus the cross-model comparison plus the false-positive angle (that's a sciencey project), or the classical-rhetoric mapping plus the taste essay plus the canonical blunders (that's a humanities project). What's the context for this: class project, personal site, something else? And how much time do you have? I can help you actually build any of these: the detector script, the quiz page, the timeline visual, or the rhetoric-mapping chart.

**You said:** i have 6 days (Apr 20)

**Claude responded:** *Weighed scope constraints and formulated targeted clarification questions*

Six days is enough to do something genuinely cool, but not enough to do everything. Let me pin down two things so I can give you a real day-by-day plan instead of generic advice. Tap your answers above and I'll draft a day-by-day plan for the six days.

**You said:**
Q: What's this project for? A: Class assignment
Q: How comfortable are you with code? A: I can follow along
Q: Which flavor excites you most? A: A mix of all three
(Apr 20)

**Claude responded:** *Architected six-day project plan combining code, essay, and interactive elements*

Good mix. Here's a realistic six-day plan that lands on a single HTML page you can show in class. Bundling the essay, detector, quiz, and museum into one thing is more interesting than any of them alone, and demo-able beats turned-in.

The target shape: a scrollable single-page site called something like *Signs of AI Writing*, with six sections: (1) a short essay drawing on Wikipedia + Gorrie, (2) a rhetoric map with side-by-side human/AI examples, (3) an interactive "paste text, get an AI-likeness score" tool, (4) a "Human or AI?"
quiz with 8–10 paragraphs, (5) a small museum of AI blunders in the wild, (6) a closing reflection on "taste." That hits all three flavors in one artifact.

**Day 1: gather raw material.** Re-read both sources and take notes. Collect 8–10 paragraph samples for the quiz: some clearly AI, some human, some tricky (formal human writing, AI that's been edited by a human). Find 5–7 famous AI-in-the-wild blunders with sources (the Amazon products named "I cannot fulfill this request," academic papers with "Certainly, here is a rewrite…" left in, etc.). Run one identical prompt through 2–3 models you can access and save the outputs for the cross-model angle.

**Day 2: write the essay and rhetoric map.** A 400–600 word intro: what the tells are, why they exist, and Gorrie's reframing. Then build a table of ~6 rhetorical techniques (parallelism, antithesis, tricolon, anaphora, polysyndeton, isocolon), each with the Greek name, a famous human example, and an AI example. Draft the closing reflection on taste (200–300 words).

**Day 3: build the detector.** I can write you a JavaScript snippet that takes pasted text and scores it on em dash density, watchlist phrase hits, sentence-length variance, and a tricolon heuristic. You run your samples through it, record scores, and note the interesting failures.

**Day 4: build the quiz.** Simple HTML/JS: show a paragraph, the user guesses Human or AI, reveal the answer with a one-line explanation pointing to the tell. I'll scaffold it; you fill in the 10 items from Day 1.

**Day 5: combine and polish.** Merge everything into one HTML page with decent styling. Test that the detector and quiz work without bugs. Write short captions for the museum items.

**Day 6: demo-test, fix, prepare.** Have a friend take the quiz and watch where they get confused. Fix whatever breaks. Prep a 3–5 minute walkthrough if you need to present.

One thing worth confirming before Day 1: does your class require a specific format (paper, slides, website)?
If it has to be a written paper, we'd swap the website for a document with screenshots of the detector/quiz as figures, and the rhetoric map + taste essay become the core. Same ingredients, different plating.

Want me to kick off Day 1 right now by pulling together the AI-blunder museum (I can find 6–7 documented incidents with sources), or jump ahead and scaffold the detector code so you can see the shape of it?

**You said:** well actually I want a website as the medium to present, with more or less a short quiz that gives you a text and you click if it's AI or not, and what it does is it tells you if you are right or wrong and highlights, if AI, what about the text indicates that it's AI (Apr 20)

**Claude responded:** *Architected interactive quiz site with AI detection highlighting*

Perfect concept: tight, focused, demo-able. Let me build a working version now that you can see running, then we iterate on passages and polish.

Here's what I'm making: a React single-page quiz where you see a passage, click "AI" or "Human," and get immediate feedback. When a passage is AI-generated, the problematic phrases get highlighted in different colors by category (vocabulary tells, rhetorical overuse, puffery, etc.) with hover or inline explanations. Running score at the top, summary at the end. Let me check the design skill first so it doesn't come out looking like, well, AI slop.

*Architected editorial-styled AI detection quiz with highlights and explanations*

Committing to an "editor's desk" aesthetic: cream paper, serif typography, highlighter-marker style for the tells, deep red accents.
That's the opposite of AI slop visually, which is a nice meta-touch.

*Created a file, read a file*

Working V1 is above. Six passages (three AI, three human), alternating. Click a button, get the verdict, and if it's AI the tell phrases light up in four colors by category, with explanations below:

• Yellow: AI vocabulary (delve, testament, crucial, etc.)
• Pink: rhetorical overuse (antithesis, tricolon, parallelism)
• Blue: vague grandiosity (truly remarkable, unique value proposition)
• Green: stock openers and transitions

For human passages I included a note explaining why each reads as human, partly to handle the "but it has an em dash!" false-positive case; it teaches the same lesson from the other direction. Ends with a score screen and the Gorrie quote on taste.

A few things to check on your end:
• Each passage's `source` is an object of the shape `{ label: "...", url: "..." }`. When a URL is present it renders as a clickable red link; when `url` is null it shows as italicized placeholder text. All six current passages are honestly marked as placeholders.
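That conditional could look roughly like the following. This is a hypothetical sketch: the actual component's markup, class names, and function name are not shown in the conversation.

```js
// Hypothetical sketch of the source line: clickable link when url is set,
// italicized placeholder otherwise. Markup and class names are assumptions.
function renderSource(source) {
  if (source.url) {
    return `<a class="source-link" href="${source.url}">${source.label}</a>`;
  }
  return `<em class="source-placeholder">${source.label}</em>`;
}
```

In the real React component this would presumably return JSX elements rather than an HTML string; the string form just makes the branch explicit.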
To swap in real content, edit each passage's `source` to something like:

```js
source: { label: "ChatGPT, April 2026", url: "https://chat.openai.com/share/abc123..." }
```

or, for a human passage:

```js
source: { label: "The New York Times, Jan 12 2026", url: "https://www.nytimes.com/..." }
```
Quickest path to real content from here: