Eric Walker · 13 July 2025
When Anthropic quietly unveiled Claude Code this spring, few expected it to snowball into a phenomenon. Yet in just four months the tool has pulled in 115,000 developers and is now touching an eye-popping 195 million lines of code every single week. By back-of-the-envelope math, that usage translates into an annual run-rate of roughly $130 million, or more than $1,000 in revenue per developer per year.
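For anyone who wants to check the arithmetic, here is a quick back-of-the-envelope sketch using the rounded figures above (all of them estimates quoted in this piece, not official Anthropic numbers):

```python
# Rough sanity check of the per-developer revenue claim.
# Both inputs are the article's rounded estimates, not Anthropic data.
annual_run_rate_usd = 130_000_000   # ~$130M annual run-rate
developers = 115_000                # ~115,000 developers

revenue_per_dev = annual_run_rate_usd / developers
print(f"~${revenue_per_dev:,.0f} per developer per year")  # ≈ $1,130
```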
Developer-turned-investor Deedy captured the mood on X: “Claude Code Opus is a junior software engineer.” It’s not hyperbole. While most AI coding aids still focus on autocompleting snippets, Claude Code already handles the day-to-day grunt work a real human junior would tackle—reading unfamiliar repositories, refactoring stubborn modules, and drafting entire pull requests.
Early adopters love the speed boost but bristle at the occasional overconfidence. One engineer confessed that last week’s 195 million-line figure needs “deduplication” because Claude sometimes fixes its own earlier fixes—gobbling extra context tokens and time along the way. With the Opus tier capped at about $200 per month, the cash pain stays tolerable, yet the wasted review cycles still sting.
That ambivalence shows up in countless online anecdotes.
Claude isn’t just accelerating work; it’s changing it. Teams now debate whether to pair-program with an AI they don’t fully trust or to let it loose and manually audit the flood of proposed changes afterward.
On the opposite end of the hype curve sits Cursor, the venture-backed editor once favored for its slick developer experience and generous API allowances. Cursor advertised that a $20 subscription effectively unlocked $100 worth of Anthropic API credits. Then the economics snapped.
Three weeks of unexpected overages forced the company to issue a mea culpa, refund surprise charges, and throttle usage. Developers labeled the move a betrayal of trust. Screenshots of Cursor’s hastily edited blog post—original version promising “unlimited” access, updated version dotted with caveats—made the rounds on social feeds, fanning the outrage.
Cursor’s plight exposes a structural problem: middlemen who pay list price for model tokens can’t out-discount the companies that own the models. Anthropic can afford to subsidize heavy users on its Max plan because its marginal cost is electricity and silicon. Cursor, by contrast, hands over real dollars for every prompt. As one commentator quipped, “You can’t beat Claude by selling Claude.”
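The arithmetic behind that quip is brutal. A rough illustration, using only the subscription and credit figures already quoted above (actual per-user consumption varies widely):

```python
# Illustrative only: the reseller math, using the figures quoted in this article.
# A $20 subscription that unlocks up to $100 of list-price API usage means
# every heavy user is served at a loss; real per-user costs will differ.
subscription_usd = 20           # what a Cursor user pays per month
api_cost_at_list_usd = 100      # upstream token spend a heavy user can rack up

monthly_loss = api_cost_at_list_usd - subscription_usd
print(f"Worst-case loss per heavy user: ${monthly_loss}/month")  # $80/month
```

Anthropic never sees that loss, because it only pays its own cost of compute; the reseller pays retail.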