Death by a Thousand AI Pull Requests
From Tailwind's 80% revenue drop to tldraw closing external PRs, the open source model is breaking in real time
The collision between LLMs and open source is creating an existential crisis for projects that once thrived on community participation. Three developments from the past week illustrate the shift: Tailwind CSS cutting 75% of its team after an 80% revenue drop as LLMs bypass the documentation funnel that fed its paid products, tldraw closing external contributions under a flood of AI-generated pull requests, and Bun's creator experimenting with AI agents for issue management. Together they pose a fundamental question: can open source survive when the very tools meant to democratize development end up undermining its economic and collaborative models?
The Revenue Crisis: Tailwind’s Cautionary Tale
I am not a regular listener to Adam Wathan's podcast, but I am a regular user of Tailwind. At first, I was not a fan of styling inline through long strings of class names, but it grew on me once I started using VS Code and tab completion. Fast forward to today and I don't even think about the syntax, because I know my AI agent will handle the minutiae of remembering it.
But while scrolling X I saw the audio note Adam shared explaining the crisis the company had run into, one that forced him to lay off three engineers. Speaking from experience, running a company is not easy, and it's worse when you have to make hard decisions that affect the livelihoods of humans.
AI is disrupting the economic foundation that made Tailwind possible. The numbers tell a brutal story: a 75% team reduction, an 80% revenue collapse, and the sudden realization that the freemium model built on documentation traffic no longer works. Large language models scraped Tailwind's freely available docs, memorized the patterns, and now generate Tailwind code directly inside developers' editors, bypassing the carefully designed conversion funnel that turned documentation readers into Tailwind UI customers. The irony is sharp: Tailwind built tools to make developers more productive, and now AI tools are making those same developers productive enough to avoid paying for Tailwind's products. Wathan's pivot to exploring AI-friendly ads in markdown files and rallying community sponsors like Vercel represents an attempt to adapt, but it raises a question harder than the one he faced when laying off his team: is sponsorship a sustainable business model, or just life support for a project whose economics AI consumption patterns have fundamentally broken?
https://www.reddit.com/r/webdev/comments/1q6n1za/tailwind_just_laid_off_75_of_the_people_on_their/
The Contribution Paradox: tldraw's Retreat
While Tailwind grappled with revenue collapse, tldraw confronted a different crisis entirely—one that struck at the heart of open source collaboration itself.
On January 15th, Steve Ruiz announced that tldraw would begin “automatically closing pull requests from external contributors,” a policy shift that sent shockwaves through the developer community. The reason was brutally simple: AI-generated contributions had overwhelmed the project’s capacity to maintain meaningful code review.
“Like many other open-source projects on GitHub, we’ve recently seen a significant increase in contributions generated entirely by AI tools,” Ruiz explained. “While some of these pull requests are formally correct, most suffer from incomplete or misleading context, misunderstanding of the codebase, and little to no follow-up engagement from their authors.”
The announcement’s tone was apologetic yet unwavering—Ruiz emphasized that “an open pull request represents a commitment from maintainers” that must remain meaningful. His closing words captured the uncertainty facing the entire ecosystem: “This is going to be a weird year for programmers and open source especially.”
Two days later, Ruiz expanded on his thinking in a blog post titled “Stay away from my trash!” The response to his GitHub announcement had been “surprisingly positive”—the problem was real, and other maintainers were considering similar policies. But Ruiz wanted to clarify something crucial: tldraw already accepts code written with AI. He uses AI tools himself. The issue isn’t AI usage—it’s something more fundamental.
“In a world of AI coding assistants, is code from external contributors actually valuable at all?” Ruiz asked. “If writing the code is the easy part, why would I want someone else to write it?”
The Value of Context Over Code
Ruiz illustrated this with his own experience contributing arrowheads to Excalidraw years ago. The maintainers initially closed his PR, pointing him toward their issues-first policy. But he stuck with it because he cared—he wanted those little dots on his arrows. What followed wasn’t primarily a coding exercise but a design discussion: How do users pick arrowheads? Which components need adaptation? Do we need new icons?
“Once we had the context we needed and the alignment on what we would do, the final implementation would have been almost ceremonial,” Ruiz wrote. “Who wants to push the button?”
Eternal Sloptember
The AI-generated PRs flooding tldraw weren’t obviously bad—that was precisely the problem. They looked good. They were formally correct. Tests passed. The team had even started landing some before noticing the patterns: authors ignoring the PR template, large PRs abandoned because authors neglected to sign the CLA, commits spaced with suspiciously brief gaps, and authors with dozens of PRs across dozens of repositories.
But the twist in Ruiz’s story was unexpected: his own AI scripts were part of the problem. He’d created a Claude Code command to turn quick notes like “fix bug in sidebar” into well-formed issues. When it worked, the issues were ready to be solved. When it didn’t—when his input was too vague or Claude drew an unlucky seed—the AI would head off in the wrong direction, producing issues with imagined bugs or junk solutions.
“My poor Claude had produced a nonsense issue causing the contributor’s poor Claude to produce a nonsense solution,” Ruiz admitted. His low-effort issues were providing value as capture mechanisms. The contributor’s low-effort solutions were not. The missing piece was the human judgment to read the issue and decide whether it made sense.
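Ruiz doesn't share the command itself, but Claude Code's custom slash commands are plain markdown files under `.claude/commands/`, so a note-to-issue capture tool of this kind is easy to sketch. Everything below (the file name, wording, and label) is a hypothetical reconstruction, not Ruiz's actual command:

```markdown
<!-- .claude/commands/issue.md: hypothetical note-to-issue command -->
Turn this quick note into a well-formed GitHub issue for this repository:

$ARGUMENTS

Before writing anything, search the codebase and confirm that the behavior
the note describes actually exists. Draft a title, a context or reproduction
section, and a proposed direction. If you cannot confirm the behavior, stop
and say so instead of inventing a bug.

Create the issue with `gh issue create` and apply a `triage` label.
```

Invoked as `/issue fix bug in sidebar`, a command like this either produces a ready-to-solve issue or, as Ruiz found, confidently formalizes a misreading. The template can enforce structure, but only a human reading the output can decide whether the issue makes sense.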
The Devaluation of Someone Else’s Code
Ruiz’s conclusion cuts to the heart of the matter: “The bigger threat to GitHub’s model comes from the rapid devaluation of someone else’s code. When code was hard to write and low-effort work was easy to identify, it was worth the cost to review the good stuff. If code is easy to write and bad work is virtually indistinguishable from good, then the value of external contribution is probably less than zero.”
His prescription: limit community contribution to where it still matters—“reporting, discussion, perspective, and care. Don't worry about the code, I can push the button myself.”
Read Steve’s post on GitHub | Read the full blog post
The Maintenance Shift: Jarred Sumner’s Prediction
While tldraw closed the door on external contributors and Tailwind scrambled for new revenue models, Jarred Sumner was experimenting with something radically different. Shortly after Bun’s acquisition by Anthropic, the creator of one of JavaScript’s fastest runtimes started using Claude to tackle Bun’s GitHub backlog. His conclusion was both bold and divisive: “I think open source repos almost entirely maintained by LLMs will be a thing this year.”
The simplicity of his follow-up tweet captured the entire paradigm shift in four words: “github issues are prompts.”
This wasn't theoretical musing. Sumner shared a video showing Claude-generated pull requests flooding Bun's repository—fixing timers, patching memory leaks, adding OpenSync.
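Sumner hasn't published his harness, but the shape of the idea is easy to sketch. Below is a hypothetical TypeScript loop (runnable with Bun or Node 18+ as an ES module) that treats issues literally as prompts; the repository name, the `agent-ok` label, and the `runAgent` hand-off are stand-ins, not Bun's actual pipeline:

```typescript
// Hypothetical sketch of "GitHub issues are prompts".
// Repo, label, and agent hand-off are placeholders, not Bun's real setup.
const REPO = "owner/project";

interface Issue {
  number: number;
  title: string;
  body: string | null;
  pull_request?: unknown; // present when the "issue" is actually a PR
}

// Stand-in for whatever agent harness you use (Claude Code, a custom loop, ...).
async function runAgent(prompt: string): Promise<void> {
  console.log(`--- agent prompt ---\n${prompt}`);
}

// Fetch only the open issues a human has explicitly labeled safe to automate.
const res = await fetch(
  `https://api.github.com/repos/${REPO}/issues?state=open&labels=agent-ok`,
  { headers: { Accept: "application/vnd.github+json" } },
);
const issues = ((await res.json()) as Issue[]).filter((i) => !i.pull_request);

for (const issue of issues) {
  // The issue body is, literally, the prompt.
  const prompt =
    `Fix the following GitHub issue, run the test suite, ` +
    `and open a pull request that references it.\n\n` +
    `#${issue.number}: ${issue.title}\n\n${issue.body ?? ""}`;
  await runAgent(prompt);
}
```

The human-applied label is the quiet load-bearing piece here: it is the same judgment step Ruiz found missing from low-effort contributions, just moved upstream of the agent.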
But not everyone shares Sumner’s optimism.
When I raised Sumner's post on a recently recorded podcast, Glauber Costa, CEO of Turso, offered a more nuanced take. “The interesting debate is the semantic debate on the meaning of words,” he explained. “What do you mean by almost entirely?” Rather than rejecting or accepting the claim wholesale, Glauber examined where AI excels and where it fails in actual production use.
At Turso—a systems-level database company where code quality is paramount—Glauber's team is already using AI strategically. They opened almost 30 issues in their unwrap experiment, though some were closed after human review showed the flagged assertions were in fact necessary. His key insight: “Things that you can easily verify are things that LLMs are going to become really good at.” Bug fixes with clear contracts work perfectly for AI assistance. Implementing straightforward features like the RETURNING clause for DELETE statements in their SQLite compatibility layer—“not a lot of creativity, not a lot of taste” required—is ideal LLM territory.
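For a sense of how small and verifiable that feature is: stock SQLite has supported RETURNING since version 3.35, so the expected behavior is fully specified by an existing implementation. The schema and data below are illustrative, not from Turso's codebase:

```sql
-- Illustrative schema and data, not from the Turso codebase.
CREATE TABLE sessions (id INTEGER PRIMARY KEY, expires_at INTEGER);
INSERT INTO sessions VALUES (1, 100), (2, 200), (3, 300);

-- DELETE ... RETURNING removes rows and reports them in one statement.
DELETE FROM sessions WHERE expires_at < 250 RETURNING id, expires_at;
-- Yields (1, 100) and (2, 200); only id 3 remains in the table.
```

A feature like this ships with its own oracle: run the same statements against stock SQLite and diff the results. That is precisely the “easily verify” property Glauber describes.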
The critical distinction Glauber draws is between tasks that require verification versus those requiring taste. Bug fixes are ideal for AI because “you already made the decision that the feature is in, you already made the decision that the feature has to have a specific shape.” But API design and feature decisions? Those still need humans.
The Psychology of Disposable Code
Perhaps the most underappreciated shift is psychological. “You throw away things with a lot more ease than you did before,” Glauber observes. “You don’t have emotional attachment to code” generated by AI. This changes fundamental software economics. Product managers want features in a day, not two weeks. Engineers traditionally defended against quick prototypes because “if you do it in a day and that works, you’re setting yourself for a lifetime of failures.” But AI-generated code is easy to discard, eliminating the emotional barrier to experimentation. “Because it was so easy to generate, throwing away is easy.”
“We’re changing the cost of variables that we do not even know were variables up until perhaps six months ago,” Glauber argues. “Things that we thought were perhaps constant and given and part of the structure of the world of software—and now you saw that it was just a cost function that had a very high cost.” This reframes the entire debate. AI isn’t replacing human developers—it’s revealing that many constraints we accepted as fundamental laws of software were actually just expensive operations. Lower the cost, and the entire equation changes.
The Human Element
Yet Glauber's optimism has limits. When asked about Sumner's claim on my podcast, John McBride pushed back hard: “I think Jarred is wildly out of touch, unfortunately.” His reasoning? “The things that are really hard in open source usually are not the technical bits. It's usually managing expectations from all the people, all the contributors, all the people who use your project.”
McBride’s concern centers on the deeply personal nature of open source maintenance—handling difficult conversations, managing community expectations, navigating front-page Hacker News controversies. “I don’t know of an AI system that can take that off of my shoulders,” he admits.
This tension between Glauber’s technical optimism and McBride’s social realism defines the current moment. AI can file issues, fix bugs with clear contracts, and handle verifiable tasks. But can it manage the emotional labor of community stewardship? As Glauber himself notes: “I would love to just have an AI system that goes read Hacker News on my behalf.”
What's Next for Open Source: The Year Ahead
The convergence of these three stories—Tailwind’s revenue collapse, tldraw’s contributor lockdown, and the debate over AI maintenance—reveals open source at an inflection point. The traditional bargains that sustained the ecosystem are breaking down simultaneously: the documentation-to-revenue pipeline, the volunteer contribution model, and the assumption that maintainer time is the primary constraint.
What emerges isn’t a simple narrative of AI destroying open source, but rather a fundamental repricing of what was valuable and what was simply expensive. Glauber Costa’s insight cuts deepest here: we’re discovering that many constraints we accepted as immutable laws were actually just high-cost operations. When AI slashes those costs, the entire system must recalibrate.
The survivors will likely be projects that can answer three questions: How do you monetize when LLMs bypass your funnel? How do you filter signal from AI-generated noise without closing the doors entirely? And perhaps most critically, how do you automate the verifiable while preserving the irreplaceable human judgment that McBride identifies—the taste, the empathy, the community stewardship that no model can replicate?
2026 isn’t the year open source dies. It’s the year it discovers which parts were always human, and which parts were just waiting for the cost to drop.
---
Want to hear the full debate? Glauber’s perspective is a preview from the upcoming episode 32 of Open Source Ready, where we dig into the messy reality of building and maintaining open source projects. Subscribe so you don’t miss it.