**Algorithmic Law - Project**

**You said:** This is my project as it currently stands; any critiques? Should I take this to Cowork? (Apr 21)

Attached: Signs of AI Writing (standalone src).html, Signs of AI Writing.html, styles.css, demos.js (216 lines), quiz-data.js (94 lines), quiz.js (174 lines)
**Claude responded:** *Evaluated polished project, identified specific critiques, sought clarification on intent*
I read through everything — the HTML, the three JS files, the CSS, and cross-checked against your sources (Gorrie and the Wikipedia field guide). Real response below. I'll save the Cowork question for the end since I want to make sure I understand what you mean.
The honest take first: this is genuinely good. It's clearly above the bar for a class project. The four-tab flow (taxonomy → quiz → mechanism → speculation) is the right shape for a piece like this — teach the tells, test that they landed, explain why they exist, then ask what happens next. The interactivity is load-bearing rather than decorative: the rhythm bars illustrate a point you've just made in prose, the morph demo shows transformation in the same spot on the page, and the quiz's commit-before-reveal pattern is pedagogically correct. The typography is elevated without being precious, and — notably for a piece about AI prose — your own writing mostly resists the register it's critiquing. "The first Tuesday after" as the anti-example to "navigating the complex landscape of grief" is a very human sentence to choose.
Now the critiques, roughly in order of how much I think they'd improve the piece:
1. The quiz risks making readers overconfident. This is the big one. The Wikipedia source you're drawing from is explicit: casual LLM users score barely above chance on real AI detection, and false accusations have a real cost (driving off new writers, poisoning class discussions). Your quiz, by contrast, uses hand-picked obvious cases — 3 AI passages that pile "stands as a testament to" onto "rich cultural heritage" onto "seamlessly blend" onto "truly unforgettable" in a single paragraph. Readers will walk out of the quiz with 5/6 or 6/6 and a confidence level the research says is unwarranted. A short framing line before the quiz ("These are cherry-picked — real detection is harder, and accusing someone wrongly costs something") would fix most of this, and so would one or two "edge" passages: AI that's been edited, or human writing that happens to hit two tells. The coda about tells shifting is good but reads as a disclaimer rather than a caution.
2. The "delve" chart is your weakest empirical moment. You say in the comment it's illustrative/fabricated, and it is labeled as such in the chart-sub, but you're making a piece whose whole argument depends on empirical claims about machine prose. Using made-up data for the one quantitative moment is a tonal mismatch. Real numbers exist — the Geng & Trotta arXiv paper (2024), the Washington Post November 2025 analysis, Rudnicka's Scientific American piece. Even rounded approximations of actual findings would be stronger than a plausible-shaped curve. If you keep the fabricated data, I'd make the caveat louder, not quieter.
3. The caveats section is thin. The Wikipedia source has a whole "ineffective indicators" section — perfect grammar, formal tone, em-dashes in isolation, transition words in isolation. These are the things readers will incorrectly use as tells after reading your piece. Naming them as red herrings would protect the piece from producing exactly the hair-trigger AI-accusers the Wikipedia source is warning about. The em-dash omission in particular is odd given how culturally famous that tell has become (and how it's been partially trained out in 2025).
4. Structural asymmetries. Signs #1–4 have "used in quiz" pills with colored dots; Signs #5–6 just have numbers. A reader will notice the asymmetry and assume #5–6 are somehow lesser. Either add "not in quiz" pills (ugly) or restructure into two groups with a header ("Four quizzable tells" / "Two ambient ones"). Also: your demos.js comment says the morph demo is "on the Intro tab" but it's on the Why tab — suggests you reshuffled the IA late and there may be other stale references. The "Interactive throughout" promise in the sub-meta is slightly overclaimed for three demos + one quiz across four tabs.
5. Replay value. The quiz resets to the same six passages in the same order. For a piece people might share and retake, even a simple shuffle would help — and a larger bank (10–12 passages, sample 6) would let the quiz teach different lessons on different runs. A sketch of both follows this list.
6. Small things. The title-attribute tooltips on highlighted phrases don't work on mobile (no hover) and aren't great on keyboard — a click-to-expand or always-visible legend would be better (second sketch after this list). Color contrast on the yellow/pink highlights over the cream background is borderline for WCAG AA and probably fails in the contrast theme. The tweaks panel is a lovely dev tool, but its activation path (postMessage from parent) is opaque if it's meant to be user-facing.
7. One meta-note on voice. "It's nobody's actual voice. It's the voice humans asked for, in aggregate" — that's the "not X; it's Y" antithesis you're critiquing on the same page. I actually think it's fine (possibly even winking), but a close reader will catch it. If it's deliberate, maybe lean in and flag it. If it's not, rewriting one or two of those would tighten the piece.
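On the replay point, the mechanics are small enough to sketch here. A minimal version in plain JS, assuming quiz-data.js exports an array of passage objects; the names `QUIZ_PASSAGES` and `drawQuiz` are stand-ins, not your actual identifiers:

```js
// Fisher–Yates shuffle: unbiased, operates on a copy so the bank stays intact.
function shuffled(items) {
  const copy = [...items];
  for (let i = copy.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [copy[i], copy[j]] = [copy[j], copy[i]];
  }
  return copy;
}

// Draw 6 passages from a larger bank (10–12), so a retake can surface
// different passages in a different order.
function drawQuiz(bank, count = 6) {
  return shuffled(bank).slice(0, count);
}

// e.g. const passages = drawQuiz(QUIZ_PASSAGES);
```

Wherever quiz.js currently reads the static array at startup, it would call `drawQuiz(...)` instead, and the reset handler would draw again rather than reusing the first draw.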
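And for the tooltip issue: `title` attributes never fire on touch, so one low-effort fix is to mirror each one into a tap- and keyboard-toggleable note. A rough sketch, assuming the highlighted phrases are inline elements carrying `title` attributes; the `.tell` selector and `tell-note` class are guesses at your markup, not your actual class names:

```js
// Replace hover-only title tooltips with tap/keyboard-friendly notes.
document.querySelectorAll('.tell[title]').forEach((el) => {
  const note = document.createElement('span');
  note.className = 'tell-note';
  note.hidden = true;
  note.textContent = el.getAttribute('title');
  el.removeAttribute('title');       // avoid a duplicate tooltip on desktop
  el.setAttribute('role', 'button'); // announce as interactive
  el.setAttribute('tabindex', '0');  // reachable by keyboard
  el.after(note);

  const toggle = () => { note.hidden = !note.hidden; };
  el.addEventListener('click', toggle);
  el.addEventListener('keydown', (e) => {
    if (e.key === 'Enter' || e.key === ' ') { e.preventDefault(); toggle(); }
  });
});
```

The always-visible legend is even less code, but this keeps each note anchored to the phrase it explains.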
On Cowork — I want to make sure I answer the right question. There's an Anthropic beta product called Cowork, but it's a desktop tool for non-developers to automate file and task management, which doesn't map cleanly onto a web essay like this. Do you mean that product, or something else?
Once I know which you mean I can give you a straight answer on whether it's the right tool/venue and what I'd prepare before taking it there.
**You said:**
Q: What do you mean by "take this to Cowork"?
A: Anthropic's Cowork product (desktop file/task automation)
(Apr 21)
**Claude responded:** *Evaluated Cowork's fit for web project iteration needs*
Now I have the actual picture (I was working from an outdated mental model). Cowork is Claude working directly with your files, folders, and apps — reading, editing, and producing real outputs on your machine. It's built on the same agentic architecture as Claude Code but packaged for non-coding work; it connects to local files and applications and completes multi-step tasks from start to finish. It also has a Chrome integration for browser automation.
So the honest answer: it would work, but it's not the most natural fit for where your project is. A few specific thoughts:
Where Cowork would genuinely help you here:
• Cross-file editing. You have HTML + CSS + three JS files + two source documents (the Wikipedia PDF and the Gorrie piece). Iterating across that spread is exactly the kind of thing that's painful in a chat window and native to Cowork.
• Tackling the "fake chart data" critique. Cowork could fetch the Geng & Trotta arXiv paper, the WaPo analysis, and the Rudnicka Scientific American piece, extract real numbers, and update the DELVE_DATA array in demos.js in one session (see the sketch after this list). That single fix would meaningfully strengthen the piece.
• Expanding the quiz bank. Drafting 6–8 new passages — including the harder edge cases I flagged — and injecting them into quiz-data.js is a well-shaped Cowork task.
• Browser testing via Chrome integration for the mobile/accessibility fixes.
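One preparation note on that chart task: it's worth pinning down the target shape before the session. A hypothetical structure for DELVE_DATA (field names are guesses at your demos.js schema), with values deliberately left null so they can only come from the actual papers:

```js
// demos.js: hypothetical target shape, not your current schema.
// Keeping the numbers null until they're pulled from the cited sources
// means the fabricated-data problem can't quietly reappear.
const DELVE_DATA = [
  { year: 2021, frequency: null, source: 'Geng & Trotta (arXiv, 2024)' },
  { year: 2022, frequency: null, source: 'Geng & Trotta (arXiv, 2024)' },
  { year: 2023, frequency: null, source: 'Geng & Trotta (arXiv, 2024)' },
];
```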
Where it's overkill or mismatched:
• The core remaining work on this piece is editorial — rewriting the intro, adding caveats, tightening voice. That's just chat work and you can do it right here. Cowork doesn't add much when the bottleneck is "what should this paragraph say."
• Cowork's canonical sweet spot is repetitive, multi-step knowledge work — synthesize this pile of PDFs, organize this folder, run this weekly task. Your project is a one-off creative piece. You'd be using Cowork for its file-access capabilities, not its agentic loop.
• If you're comfortable with a terminal, Claude Code is actually more native to coding iteration than Cowork is. Cowork is Claude Code's capabilities wrapped in a non-developer UI.
My actual recommendation: If you already have Cowork access, take it there for one focused session aimed at the two concrete critiques that benefit most from file-level work — replacing the fabricated chart data with real numbers, and doubling the quiz bank with harder cases. For the editorial fixes (caveats, intro, the overconfidence framing), stay in chat. Don't migrate the whole project over just because you can; migrate the tasks that actually need filesystem access.
One note: Cowork is in research preview and, by Anthropic's own acknowledgment, still has rough edges — so give yourself a little buffer if you're on a deadline.