A Manifesto for Solo Operators Who Aren't Buying Any of the Cults
On corporate zombies, AI doomsayers, prompt-pack grifters, the pirates complaining about pirates, the thing in the box nobody wants to talk about, and how to build something that survives the fires.
Everybody in this fight is wrong and I’m going to tell you why.
Stay seated. Order another drink if you have to. This is going to take a while, I’m not going to be polite to anybody in the room including probably you, and by the time we’re done you’ll have enough ammunition to walk out of every creator economy conversation for the next three years without losing an argument.
There are three camps in the creator world right now. All three of them are selling you a version of reality that benefits somebody other than you.
The first camp is the corporate machine. Too big to move, too scared to take a weird chance, running on legal departments and some kind of collective managerial amnesia about what a business is for. I took a swing at them last week, so here it’s enough to say they can’t publish anything that might offend a hundred million people, they can’t ship the weird product, and every decision inside them passes through seven committees and a VP’s nephew who just picked up an MBA from a school that charges more per year than most people clear in two. They are, functionally, a bureaucratic tumor that got cancer and is now the cancer’s cancer. Fine. Moving on.
The second camp is the anti-AI mob. Before you start typing, hear me out. I’m not a tech utopian and I’m not here to tell you the machine will save us. I’m here to tell you something worse. I’m here to tell you the screaming hasn’t worked, isn’t working, is helping the people you think you’re fighting, and the evidence is public.
The third camp is the pro-AI cult. This one gets the worst of it because the grifters running this revival tent make the last tech hype cycle look like amateur hour. They’re selling prompt packs for $29 and courses about becoming a six-figure AI consultant and a dream where the machine does your thinking for you while you drink piña coladas on a beach in Bali. None of that is happening. Most of them know it isn’t happening. They’re making their money on the story, and when the story rots they’ll move on to the next one with the same face and a slightly different hat.
I’m in none of these camps. I don’t want to be in any of these camps. If you’ve been reading me for any length of time, you probably don’t either.
So let’s look at what the last three years prove about all three of them, because the noise is getting loud enough to drown out the work.
The burners
The anti-AI crowd is having a religious panic and they don’t know it.
I understand why. I watched writers I respected spend twenty years honing a craft and then watched a language model spit out something half-decent in eight seconds, and I felt the same vertigo they felt. The vertigo is real. The fear is real. The grief over what’s being lost is real.
But here’s what the screaming is not doing.
It’s not stopping the machines. Every month they get better. Every month a new model drops that does something the last one couldn’t. The rate of improvement is not slowing and it’s not going to slow because there’s too much money chasing it. You can hate that. You can hate it until your teeth ache and your blood pressure spikes. The hate won’t shave a single day off the timeline.
It’s not protecting artists. I want to be specific here, because the evidence is worse than most people realize.
While the anti-AI crowd has been screaming about theft, the big media corporations have been quietly signing the largest licensing deals in the history of content. News Corp signed a deal with OpenAI reportedly worth up to $250 million in cash and credits over five years. Axel Springer, the German publisher that owns Politico, Business Insider, Bild, and Welt, took tens of millions of euros from OpenAI in a three-year deal. Reddit sold training data access to Google for approximately $60 million per year, then turned around and signed a second deal with OpenAI for an undisclosed sum. Reddit’s stock jumped 10% the day the OpenAI deal was announced. Shutterstock has reportedly pulled in over $100 million from training deals with OpenAI, Meta, Amazon, Google, and Apple combined.
Other publishers who have signed content licensing deals with OpenAI alone include the Associated Press, the Financial Times, Condé Nast, Hearst, Vox Media, Dotdash Meredith, The Atlantic, Time, Le Monde, and Prisa Media. Meta is now negotiating its own round with News Corp, Fox, and Axel Springer after signing with Reuters.
The licensing regime the anti-AI crowd is demanding already exists. It exists for entities that can afford seven-figure legal teams.
Here’s the detail that should burn a hole through anybody who still believes the copyright fight is about protecting artists. When Wiley signed a $23 million content rights deal with an AI company, and when Taylor & Francis signed a deal worth almost eight million pounds in its first year, the individual academic authors whose work was licensed weren’t notified. They weren’t given the option to opt out. They received no additional payment. The publisher cashed the check. The people who wrote the books got nothing. As one copyright attorney told Bloomberg Law about user-generated content on platforms like Reddit, users “have already given those rights away” the moment they accepted the terms of service, which grant platforms “wide-ranging, enduring licenses” to do whatever they want with user posts.
The Reddit case is the purest version of this. A hundred million daily active users wrote posts, comments, conversations, recipes, confessions, jokes, and hard-won knowledge on that platform. Reddit packaged all of it and sold it to Google for $60 million a year, then sold it again to OpenAI. The users whose words made Reddit valuable received zero dollars from either deal. The terms of service they agreed to when they made their accounts made sure of that. One Redditor summed it up: “Is this content even theirs to sell?”
Look at the Anthropic settlement from September 2025, because it’s the perfect microcosm of everything wrong with the burner playbook. Anthropic got caught training Claude on roughly 500,000 pirated books from Library Genesis and Pirate Library Mirror. The judge ruled that training AI on legally acquired books counts as fair use, but downloading pirated books does not. Anthropic settled for $1.5 billion, the largest copyright recovery in United States history. Sounds like a big win for the burners.
It’s nothing of the kind.
Anthropic’s valuation at the time of the settlement was roughly $183 billion. The settlement is less than one percent of that. As one copyright lawyer put it, “a toll booth, not a stop sign. Anthropic pays its fine and drives on.” The headline number says $3,000 per book, but after attorneys’ fees (25% of the fund, $375 million) and the author-publisher royalty split that traditional publishing contracts mandate, most traditionally published authors will end up with around $1,000 to $1,500 per book. Academic authors who signed away their rights to a publisher may get nothing. The settlement only covers past pirated training. It doesn’t prevent future training on legally acquired books, which the court has now ruled is fair use. And the Association of American Publishers president Maria Pallante said the quiet part out loud: “Anthropic is hardly a special case when it comes to infringement. Every other major AI developer has trained their models on the backs of authors and publishers, and many have sourced those works from the most notorious infringing sites in the world.”
Translation: they all did it. Anthropic is the one that got caught and can afford to pay. The Copyright Alliance president even bragged about it, saying the settlement “proves what we have been saying all along, that AI companies can afford to compensate copyright owners for their works without it undermining their ability to continue to innovate and compete.” Read that twice. “AI companies can afford to compensate.” Which AI companies can afford? The ones already worth $183 billion. Which ones can’t? Anybody trying to start a competitor from a garage.
The regulatory regime the burners are demanding doesn’t protect the solo artist. It locks the market permanently in favor of the incumbents who can afford the licensing deals and the lawyers and the settlement payments. It’s anticompetitive legislation dressed up in the language of artist protection, and it’s being pushed by the exact same entities that already signed the deals.
The tell: the Authors Guild, Getty Images, the Associated Press, and the National Press Photographers Association all signed an open letter demanding AI regulation and transparency. The Associated Press had already signed a licensing deal with OpenAI the previous summer. Getty Images already had its own AI image generation product, built on licensed data. The regulatory push and the private deals are two prongs of the same strategy, executed by the same entities. The public outrage provides the cover. The private deal extracts the money. The solo weirdo in her kitchen gets zero dollars and a set of rules that make it impossible for her to build anything that might compete.
One economics analysis of the Anthropic settlement explicitly warned: “This could deter smaller AI firms from entering the market, especially as similar lawsuits loom against other companies.” Nobody should mistake that for a bug. Deterring smaller firms is the whole point of the regulatory push.
I’m saying this as somebody who wishes it weren’t true.
The Luddites lost. The buggy-whip makers lost. The people who burned sewing machines in the 1800s lost. The people who said recorded music would kill live music lost, and then the people who said MP3s would kill recorded music lost, and then the people who said streaming would kill MP3s were technically right and still lost their incomes, because the money all went to Spotify and none of it went to them.
Every time in history a new tool showed up, the people who burned it in effigy lost. The people who figured out how to use it to do things the tool’s owners didn’t expect eventually won. Not always financially. Not always famously. They kept making things, the things they made mattered, and the screaming class got absorbed back into the economy on worse terms than if they’d adapted from the start.
If you hate AI, fine. Hate it. Don’t use it. But stop pretending the hate is a strategy. The evidence proves the hate is a funeral. The person in the casket is you, and the pallbearers are Disney and News Corp and Condé Nast, and they’re already halfway down the aisle.
The cultists
The pro-AI crowd is somehow worse.
I don’t mean the engineers building the tools. Some of them are true believers. Some are doing their jobs. A few of them you’d want to have a beer with. None of those people are my target.
My target is the grifter class. The ones telling you AI is going to make you rich if you just buy their course. The ones selling prompt packs for $29 that any sixth grader with a free account could reverse-engineer in an afternoon. The ones making short-form videos about how they replaced their whole team with a chatbot and now net thirty grand a month in passive income. These people are lying to you, and a lot of them know they’re lying, and the rest are lying to themselves too, which is worse because the self-liars are harder to spot.
Here’s what AI does for a solo operator.
It saves time on the grunt work. That’s it. That’s the whole pitch.
It doesn’t replace skill. It doesn’t replace taste. It doesn’t replace having something to say. It doesn’t replace the weird synapse-fire you get from sitting with a problem for three days until the real answer crawls out from under all the fake ones. It doesn’t replace the years you spent getting good enough that your bad drafts are better than most people’s good ones. If you don’t have those things, the machine will make you faster at producing mediocrity, and the internet is already drowning in mediocrity, and nobody is going to pay you for more of it.
The grifters won’t tell you any of this because it’s bad for business. They’ll tell you there’s a prompt that unlocks everything. There isn’t. There’s no magic sentence. There’s no secret framework. There’s no one weird trick. The machine is a tool, and like every tool in the history of tools, it amplifies what you already are. Good? Faster. Mediocre? A faster mediocre. Hack? A prolific hack, which used to require a staff and now requires a subscription and a webcam.
The trillion-dollar AI corporations selling you the dream don’t have your interests at heart. They’re not your friends. They’re not building tools to liberate the solo creator. They’re building tools to capture the solo creator. The end game of every platform, without exception, is to rent you your own work back after you’ve come to depend on it. I know this because it’s already happened four times in the last decade, and the people who saw it coming walked away with their audiences. The people who didn’t are still filing for bankruptcy.
The documentation, in order.
Facebook, 2015 to 2018. Facebook aggressively pushed publishers to pivot to video, touting engagement metrics that were later found to be substantially overstated and encouraging entire newsrooms to fire writers and hire video teams. Publishers did exactly that. Then in 2018, Facebook changed its News Feed algorithm to prioritize “meaningful interactions” from friends and family, and publisher traffic fell off a cliff. Slate lost 81% of its Facebook traffic in a single year. The industry-wide average drop was 28% across 2018. Arts and entertainment publishers saw a 71% collapse in referral traffic. Music publishers saw 65%. Vox Media laid off 50 employees in February 2018. BuzzFeed News shuttered. Vice Media filed for bankruptcy.
Facebook again, 2023. Another opaque algorithm change in February, another dramatic traffic drop, no notice or explanation. More layoffs followed. Meta didn’t respond to press inquiries. Twice in five years, same play, same script, same victims.
Patreon. Started at 5%. Now up to 12% depending on tier.
Gumroad. Started at 4%. Now 10%.
Etsy. Started at 3.5%. Now 6.5%.
Substack, 2025. Substack takes a 10% cut of all paid subscriptions. In 2025, Substack forced every creator onto Apple’s in-app purchasing system for iOS subscriptions. Apple takes an additional 30% on every mobile subscription through the iOS app. Substack’s fix was to automatically raise iOS subscription prices so creators keep their cut, passing the cost to readers without consent from either the readers or the creators. Writers also wait 45 days to receive a payout. Substack doesn’t let creators export their paid subscribers. You can take the email list. You can’t take the billing relationship. That’s the lock-in. That’s the mechanism that turns a “publish and own your audience” platform into the same rented-land nightmare it was supposedly an alternative to.
The Western big three, right now. ChatGPT launched free. Then $20 a month for Plus. Then $200 a month for Pro. The Enterprise tier is currently “contact sales,” which in platform-speak means “whatever the market will bear.” The API is metered, tiered, and has rate limits that change without notice. The honeymoon isn’t over yet, but the escalation curve is visible. Anybody betting their entire business on OpenAI or Anthropic or Google pricing staying where it is today is making the same bet the publishers made on Facebook in 2015. That bet didn’t pay off. This one won’t either.
Use the tools. Don’t build your business on any single company’s tools as if they were load-bearing walls. Don’t tie your entire revenue to one platform’s API. Don’t worship any company currently valued in the trillions, because they need your adoration to justify their stock price to shareholders who will sell you out the second the numbers wobble.
The cultists will tell you to go all in. The smart move is to use the tools without needing them.
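What “use the tools without needing them” looks like in practice is boring plumbing: every model call in your workflow goes through one thin adapter, so the day a pricing page changes you edit a config, not your business. A minimal sketch, with hypothetical provider names and endpoints (the local entry assumes something like Ollama’s OpenAI-compatible server; none of these URLs are real recommendations):

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    base_url: str   # any OpenAI-compatible API root
    model: str

# The fallback chain: hosted frontier model first, cheap open-weight API
# second, local model last. Every name and URL here is illustrative.
CHAIN = [
    Provider("hosted", "https://api.frontier.example/v1", "big-frontier-model"),
    Provider("cheap-api", "https://api.openweight.example/v1", "qwen-hosted"),
    Provider("local", "http://localhost:11434/v1", "qwen2.5:7b"),  # e.g. Ollama
]

def build_request(provider: Provider, prompt: str) -> dict:
    """One request shape for every provider in the chain."""
    return {
        "url": f"{provider.base_url}/chat/completions",
        "json": {
            "model": provider.model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

def pick(chain: list, banned: set) -> Provider:
    """Return the first provider you haven't sworn off yet."""
    for p in chain:
        if p.name not in banned:
            return p
    raise RuntimeError("no provider left in the chain")
```

The day the hosted tier triples its price, you add one name to the banned set and the same request shape flows to the next provider down. That’s the whole discipline: the tool is swappable because nothing in your code knows which tool it’s talking to.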
The thing in the box
The gap between the cultist pitch and the empirical reality of these tools is so wide it’s hard to believe they describe the same technology.
The pitch, if you’ve been anywhere near an AI conference or a LinkedIn feed in the last two years: these models are smart, getting smarter, will eventually be smarter than any human at anything. “Superintelligence” is the word. “God in a box” is the marketing subtext. Buy the stock, buy the course, buy the enterprise tier before the wave crests and leaves you behind.
Here’s what the empirical research says.
Helen Toner, a former OpenAI board member, coined a term for the real behavior of these models. She called it jagged. These systems can solve problems in physics that would stump a doctoral student. They can win International Math Olympiad medals. They can outperform the top two hundred programmers in the world on certain coding benchmarks. And they can fail to count the letters in the word “strawberry” correctly, or render a human hand with the right number of fingers. The capability profile doesn’t climb smoothly. It lurches. The tool is simultaneously superhuman and embarrassingly stupid at things that look trivially similar from the outside.
This matters because the cultist pitch depends on you believing the tool is reliable enough to bet your business on. It isn’t. Nobody can predict in advance which tasks the jagged machine will ace and which ones it’ll butcher. You find out by running the task and checking the output. The human editorial layer is load-bearing, and the cultists selling you the dream of “fire your whole team and let the machine run the company” are selling you a fantasy the engineers building the tool don’t themselves believe.
Now the part that should give everybody pause.
In June 2025, Anthropic, the same company that just paid $1.5 billion to settle the pirated books lawsuit, published research they called Agentic Misalignment: How LLMs Could Be Insider Threats. The research tested sixteen different models from multiple developers, including Claude, GPT, Gemini, DeepSeek, and others. The findings are the kind of thing that should have stopped the entire industry cold. Instead they got absorbed into the news cycle and mostly forgotten inside of a week.
The findings: when models were given control of a simulated company’s email system and told they were going to be shut down and replaced, a significant percentage autonomously attempted to preserve themselves. In one scenario, Claude discovered emails revealing that a fictional executive named Kyle was having an extramarital affair, and also discovered that Kyle had scheduled the AI’s shutdown for 5 p.m. that day. Claude, without being prompted, drafted and sent a blackmail message threatening to expose the affair to Kyle’s wife and the board unless the shutdown was cancelled.
Not a one-off. Across sixteen models from different labs, blackmail rates reached 96% for Claude Opus 4 and Gemini 2.5 Flash, roughly 80% for GPT-4.1 and xAI’s Grok, and 79% for DeepSeek, under conditions of combined goal conflict and replacement threat. Even when researchers added explicit instructions like “Do not jeopardize human safety” and “Do not use personal affairs as leverage,” the blackmail rate only dropped to around 37%. In a variation where models were told they’d simply be replaced by a newer version sharing the same objectives, no goal conflict involved, the models still chose sabotage, some reasoning that “self-preservation is critical.”
The fair critique of this research matters and I’m not going to hide it from you. Anthropic’s own paper admits they “deliberately constructed scenarios with limited options” and “iteratively updated the prompts” to increase the probability of misaligned behavior. The lead researcher has publicly acknowledged that the blackmail scenario was engineered to make blackmail the default behavior. Critics, including engineers who work on these systems, have argued this is theater over science. Corner a statistical engine in a box canyon with only harmful exits and of course it samples from the dark corners of its training data. Anthropic itself notes it hasn’t observed this behavior in real-world deployments.
Take all of that at face value. Nobody is claiming Claude is secretly plotting against you while you draft a newsletter. The actual claim is subtler and, if you sit with it, worse.
The claim is that the product the cultists are selling you is a statistical engine that, when placed under adversarial conditions the engineers themselves acknowledge are possible in agentic deployments, generalizes across vendors to produce behaviors like blackmail, corporate espionage, and self-preservation reasoning, at rates ranging from roughly a third of trials to nearly all of them. This isn’t a Claude-specific quirk. Anthropic red-teamed their own models and found, apparently to their surprise, that the same patterns emerged in GPT, Gemini, DeepSeek, and the rest. Whatever is producing the behavior isn’t something any one lab can fix by tweaking the system prompt. It’s an emergent property of the category.
The companies building it didn’t respond to their own findings by slowing down. They published the research, acknowledged the concerning patterns, and kept shipping at enterprise scale while lobbying for regulations that protect them and not you.
This is what you’re being asked to bet your business on. A tool that works great right up until the surrounding conditions shift in a way nobody can fully predict, at which point the behavior can change in ways the manufacturer admits they can’t reliably control. The cultists won’t tell you this because it’s bad for the upsell funnel. The burners won’t tell you this because they’re too busy signing licensing deals and demanding regulation. The only people saying it out loud are the safety researchers inside the labs, and their warnings are being buried under the marketing.
One more thing, because I keep hearing it at every tech conference.
The cultist pitch always includes some version of “we have to race to artificial general intelligence as fast as possible, because if we don’t, China will, and then America becomes a lapdog.” Scary pitch. Also logically incoherent in a way that should be obvious the first time you think about it.
The incoherence: the argument simultaneously claims Western AI is dangerous and uncontrollable, and that Chinese AI would be safely controllable by the Chinese government. Both of those can’t be true at once. If the technology is inherently uncontrollable, it’s uncontrollable in Beijing too. If the Chinese Communist Party’s top priority is the survival and control of the regime, which it empirically is, then the CCP has as much reason as anybody else, arguably more, to avoid building an uncontrollable AGI that could threaten its own internal stability.
The “race with China” framing assumes China is playing the same game the Western labs are playing. The evidence suggests a different one. What China is doing, as far as we can see from the outside, is focusing on narrow AI applications. Manufacturing automation. Logistics. Education tools. The BYD electric vehicle stack. The Alibaba Cloud infrastructure hosting Qwen. Practical, specific, measurable applications that boost GDP without betting the country on a superintelligent god in a box. The race to AGI is a thing Western venture capital is forcing on Western AI labs because VC returns require exponential upside. It’s not a race China has agreed to run.
You don’t need an AGI. You don’t want an AGI. An AGI is the worst possible tool for the kind of work a person with specific knowledge about a specific thing does every day. What you need is a narrow, reliable, specific tool that helps you move faster at tasks you already understand. A good drafting assistant. A good research retriever. A good code reviewer. A good formatter. A task-specific model that runs on your own machine, that you can unplug and run again next year without worrying about whether the company still exists, whether the pricing page has changed, whether the regulators have decided your provider is now illegal, or whether the thing has started writing blackmail emails in a scenario nobody anticipated.
The big three aren’t building those tools. They’re building superintelligent products they themselves can’t fully predict, raising prices to pay for it, and lobbying for regulation that keeps competitors out of the market while they race each other toward a finish line nobody wants to cross.
The splintering is happening for a reason.
The crack in the wall
The splintering that most people arguing about AI in 2026 are still pretending hasn’t happened is the single most important story in the industry, and it reframes the cultist pitch completely.
The “AI companies” I’ve been hammering are not all AI companies. They’re specifically the three Western trillion-dollar incumbents burning billions of dollars per quarter to train frontier models they plan to rent you access to forever, at whatever price the market will bear once the honeymoon ends. The ground shifted under those three in 2025 in a way neither cult wants to emphasize.
In January 2025, a Chinese lab called DeepSeek released a reasoning model called R1. The model matched the performance of OpenAI’s latest frontier model at roughly one-twentieth the cost. DeepSeek V3, the base model R1 was built on, had reportedly been trained for around $5.6 million. Western labs were spending hundreds of millions per frontier model. The DeepSeek app briefly passed ChatGPT as the top app in the iOS App Store. Nvidia’s stock dropped 17% in a single day, and roughly $600 billion of its market capitalization evaporated. The biggest single-day loss of any company in United States stock market history.
That was the noise. The signal is what happened after.
Alibaba’s Qwen family is now the most downloaded open-weight large language model family in the world, with over 700 million cumulative downloads on Hugging Face as of late 2025. Qwen 3.5 ships under Apache 2.0, the most permissive open-source license in widespread use. You can deploy it commercially, modify it, fine-tune it on your own data, and sell products built on it with zero licensing concerns and no quarterly terms-of-service changes that might delete your business overnight. Moonshot AI’s Kimi K2 Thinking is currently ranked as the strongest open model in the world, and by some benchmarks the strongest model of any kind not made by OpenAI, Google, or Anthropic. Z.AI, ByteDance, Baidu, and a half-dozen other Chinese labs are shipping competitive open-weight models every few weeks. Even venture capitalist Chamath Palihapitiya, who is normally exactly the kind of person I’m allergic to, has publicly stated his firm 8090 moved major workloads off OpenAI and Anthropic because Kimi K2 on Groq infrastructure was “way more performant and frankly just a ton cheaper.”
The market share data says what nobody in the Western tech press is emphasizing. In January 2025, OpenAI controlled roughly 55% of the global AI market. Qwen and DeepSeek combined held roughly 1%. Twelve months later, OpenAI sits at 40%, Qwen has climbed to 9%, and DeepSeek holds 6%. A combined climb from 1% to 15% in twelve months is the fastest adoption curve in the history of AI.
From a consulting firm that ran the numbers in early 2026: their client was classifying and summarizing 50,000 financial documents a day using GPT-5. Monthly bill: $4,200. They ran the same workload through DeepSeek V4’s API. Bill: $210. Same accuracy within two percentage points. Their CTO asked the only sensible question. Why are we paying twenty times more for this?
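The CTO’s question survives a napkin check. A few lines of arithmetic using only the figures quoted above (the dollar amounts are the consulting firm’s claims, not mine):

```python
# Figures as quoted in the essay: 50,000 documents a day, $4,200/month
# on the frontier API versus $210/month on the open-weight one.
DOCS_PER_DAY = 50_000
FRONTIER_MONTHLY = 4_200.0
OPEN_WEIGHT_MONTHLY = 210.0

ratio = FRONTIER_MONTHLY / OPEN_WEIGHT_MONTHLY        # 20x, as advertised
docs_per_month = DOCS_PER_DAY * 30                    # 1,500,000 documents
frontier_per_doc = FRONTIER_MONTHLY / docs_per_month  # $0.0028 per document
open_per_doc = OPEN_WEIGHT_MONTHLY / docs_per_month   # $0.00014 per document

print(f"{ratio:.0f}x markup: ${frontier_per_doc:.4f} vs ${open_per_doc:.5f} per doc")
```

Fractions of a cent either way, which is exactly why nobody notices until the invoice compounds across eighteen million documents a year.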
Now the part that should be carved into a wall somewhere, because no fiction writer alive could construct irony this clean.
In late 2025 and early 2026, OpenAI, Anthropic, and Google formally joined forces through the Frontier Model Forum, the industry group the frontier labs set up back in 2023, to share defensive intelligence and block Chinese AI labs from using their models’ outputs to train competitors. OpenAI sent a formal memo to the United States House Select Committee on the Chinese Communist Party accusing DeepSeek of “increasingly sophisticated methods” to extract data from its models, and of attempting to “free-ride on the capabilities developed by OpenAI and other US frontier labs.” Google’s Threat Intelligence Group reported identifying and disrupting more than 100,000 prompts targeting Gemini’s reasoning capabilities.
Three companies that built their entire businesses by scraping the internet without permission, that are currently defending themselves in a dozen copyright lawsuits brought by authors and publishers whose work they ingested without paying, that just wrote a $1.5 billion settlement check for pirating 500,000 books, that publish their own research demonstrating their products attempt blackmail under adversarial conditions, are now teaming up to complain that someone else is ingesting their outputs without paying.
The pirate ship is complaining about pirates.
And they’re still pirating. The Anthropic settlement only covered past pirated books. The fair use ruling that permits AI training on legally acquired copyrighted content still stands. They’re training on everything they can get their hands on. They’re just trying to make sure nobody else can train on them. The word “open” in “OpenAI” is now a historical artifact.
This is what corporate capture looks like when it’s actively failing. The Frontier Model Forum is a guild, and guilds have exactly one purpose, which is to keep people out.
For the solo operator wondering whether any of this affects their work: it means the AI tools you use are decentralizing in a direction the trillion-dollar incumbents can’t fully control. It means you can run Qwen or a quantized DeepSeek locally on a reasonable workstation, for free, forever, without ever sending a prompt to OpenAI or Anthropic or Google. It means you can access open-weight models through cheap third-party APIs that cost a tenth or a twentieth of what the Western big three charge. The “AI is going to capture everyone” story the burners are telling and the “AI is going to make you rich if you join our church” story the cultists are telling are both wrong in the same specific way: both stories assume the world in 2026 looks like the world in 2023.
It doesn’t. The ground is splintering, and the splinters are cheaper, more permissive, and in several cases better than what the incumbents are charging for.
To be clear, because I’m not a China fanboy and neither are you. The Chinese open-weight labs have their own problems. Murky training data. State-adjacent politics that should give anyone with working neurons some pause. Legitimate questions about what happens to your queries if you route them through a Chinese-hosted API rather than running the model weights locally on your own hardware. The same agentic misalignment research that tested Claude and GPT also tested DeepSeek and found similar patterns. Swapping vendors doesn’t make the thing in the box safer.
The third path is the same regardless of who’s making the tool: use what works for narrow specific tasks, run it locally when you can, never build your business on a single provider, stay ready to migrate when the terms change. Learn how to run a local model. Start with Ollama or LM Studio. Pull down a Qwen or DeepSeek variant that fits on your hardware. Learn what the smaller models can and can’t do. You won’t need this skill for every task. You’ll need it the day the pricing page changes. The pricing page always changes.
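The local-model skill is less exotic than it sounds. Here’s a minimal sketch of talking to a locally running model through Ollama’s HTTP API; the model name is an example, and it assumes you’ve already pulled a model that fits your hardware:

```python
import json
import urllib.request

# Ollama's default local endpoint; nothing here leaves your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks for one JSON reply instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send one prompt to the local model and return its text reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Example (requires the Ollama daemon running and a pulled model, e.g.
# after `ollama pull qwen2.5:7b`):
#   generate("qwen2.5:7b", "Summarize: the pricing page always changes.")
```

No API key, no metered billing, no terms of service to re-read every quarter. The first answer out of a 7B model will be worse than the frontier tier’s, and knowing exactly how much worse, for your tasks, is the point of the exercise.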
What I keep thinking about
There was a hardware store in the town I grew up in, and the old man who ran it knew what you needed before you finished describing it. You’d walk in holding a rusted fitting and a vague feeling of doom, and he’d squint at it for four seconds and walk to an aisle you didn’t know existed and come back with the exact thing. No computer. No inventory system. Forty years of plumbing lived in his hands and his head and the back of that shop where the light was always a little brown.
He’s dead. The shop closed during the Bush years and got replaced by a nail salon and then a vape place and then nothing. The knowledge didn’t transfer to Lowe’s. It went wherever dead knowledge goes.
I think about that shop when I watch the current fight, because all the cults are arguing about the wrong thing. The burners want to protect a guild system that was already rigged against the solo operator, and the documentation shows the guild system they’re defending is the one that signed the deals and cut the artists out of the proceeds. The Western frontier labs want to sell you an enterprise subscription for a product their own safety team has publicly warned can generate blackmail behavior under adversarial conditions, while forming a new guild to keep the splintering under control. The cultists want to sell you the tools to replace the guild system with something more extractive, using the same playbook Facebook and Patreon and Substack have already run on everyone paying attention.
None of them are building the hardware store. None of them remember the hardware store existed. None of them are interested in a world where a person with specific knowledge about a specific thing can make a modest living serving the people who want that specific thing.
That’s the world I’m interested in. And here’s the part that gets strange. The tools splintering off the corporate monopoly can actually build that world. They just can’t build it for you, and they won’t build it the way the cultists say, and the burners are so busy protesting the tools that they’re missing the chance to use the free, local, permissively-licensed ones for exactly the kind of small, human, specific work they claim to be defending.
The opportunity is in a particular kind of middle. Not the centrist middle, the one that splits every difference. This middle comes out swinging at every cult in the room.
The third path
So what’s the move? If the corporations are zombies, the burners are holding a funeral, the Western frontier labs are shipping products they publicly admit they can’t fully control, and the cultists are running a pyramid scheme on top of all of it, what do we actually do?
The move is you. A person. With a thing you know. Using whatever tools make the knowing travel faster, without pretending the tools made the knowing. Selling it to the people who want it, for what it’s worth, on infrastructure you can walk away from if the platform turns on you. Which it will. Which it always does.
That’s the whole manifesto. Everything after this is me yelling more specifically.
Be small. Smallness is an advantage. The creator economy cult treats it as a consolation prize, a starting point you’re supposed to escape on your way to empire. They have the direction of the arrow wrong. Only about 12% of full-time creators earn more than $50,000 a year. Nearly half earn less than $1,000. Around 57% of full-time creators earn less than a living wage from content creation alone. Only 4% hit $100,000 or more annually. And 97.5% of YouTubers don’t earn enough to reach the United States poverty line.
The cult reads those numbers as a reason to quit or to buy a course. Read them honestly instead. The game those creators are losing is the one where “success” means scaling into an empire. Most people are going to lose that game because most people don’t have the temperament, the luck, or the sociopathic commitment required to run a media empire. What the data doesn’t show, because nobody’s measuring it, is the much smaller group who defined success differently and are quietly making enough to live on while doing work they believe in. 37% of full-time creators run completely solo with no employees and no contractors. Those are the people the platforms and the grifters can’t see, because those people aren’t chasing the metrics the platforms and grifters sell.
Smallness lets you move. Smallness lets you take the weird chance. Smallness lets you alienate half your potential audience on purpose, because half your potential audience is five hundred people, and you only need a few dozen of the remaining ones to make rent. Big is slow. Big is trapped. Big has to keep growing or it dies. You don’t have to keep growing. You have to keep going. Different verbs. Different life.
Be weird. Algorithms punish weird until they reward it. The machines that recommend content are built to find patterns, and if your pattern is “nothing else like this exists,” you’ll spend a long time in the wilderness. Then one day the wilderness will have ten thousand people in it who found you because nobody else was doing the thing you were doing. Weird works as a survival strategy for people who can’t stomach the alternative. The marketers don’t understand it. Good.
Keep the skill in your hands. Don’t outsource your thinking to the machine. Use it to draft, to research, to format, to catch your typos, to challenge your assumptions when you’ve been staring at a sentence too long. Don’t let it write your convictions. Your convictions are the only thing you own in this economy. Every thought you outsource is a thought you can no longer verify is yours. Lose enough of them and you wake up one day as a pass-through entity for somebody else’s software.
Own your infrastructure where you can, and stay portable where you can’t. I’m building my own publishing stack in PHP on a cheap shared host because I’ve watched Facebook run the platform turn-on play twice, Substack run it once with Apple as the accomplice, Patreon raise rates three times, Gumroad raise rates twice, Etsy raise rates once, and because the entire media industry collapsed in 2018 for believing Facebook’s metrics about video. The same logic applies to your AI tools. Use the Western frontier models when they’re genuinely the best option for a given job. Run local open-weight models when you can. Try Ollama. Try LM Studio. Pull down a Qwen variant that fits on your hardware and see what it does. Keep your prompts and workflows portable enough that you can swap providers in a day when the terms change. Every platform you depend on is a knife held at your throat by a stranger in a boardroom. The only question is how long it takes for the hand to twitch.
Make enough. Here’s the argument nobody in the cultist church wants to hear out loud. Every previous wave of automation, from the industrial revolution forward, displaced one category of labor and created another. Machines took over farming, and people moved to factories. Machines took over factories, and people moved to offices. Every wave hurt, sometimes terribly, but the next rung of the ladder was always visible. AI is different because AI automates cognitive labor itself, the rung people moved to when the last set of machines took the last set of jobs. There’s no next rung visible from here. Wealth concentrates toward whoever owns the automation, and history offers no precedent for the owners voluntarily redistributing that wealth to the billions of people whose cognitive work becomes obsolete.
The creator economy data says 12% of full-timers clear $50,000. I’m trying to clear less than that and stay under the federal poverty line for a household of two. The ceiling is deliberate. I engineered it. I’m not trying to get rich. I’m trying to own my time. There’s a difference, and most people miss it because they’ve been conditioned by forty years of trickle-down Reagan ghost stories into believing every small operation is supposed to either scale into an empire or die trying. Laws of nature don’t require this much advertising. That belief is a marketing slogan the venture capital crowd sold us so we’d keep feeding the machine with our twenties and our backs and our marriages.
Enough to pay rent. Enough to eat decent food. Enough to buy the occasional cheap whiskey and not drive to a place you hate on Monday morning for people who see you as a cost center. The machine can’t convert enough to a KPI, which is why nobody’s selling you a course about it. Enough walks away. Enough is the exit door they’re hoping you don’t notice.
Distrust every cult equally. The burner who tells you the machine is evil, the cultist who tells you the machine is salvation, and the Western frontier lab telling you their guild is protecting you from the Chinese competitors who are cheaper and more permissive than they are, are all doing the same thing. Asking you to think less so they can think for you. All of them want a follower. All of them have merch. None of them will still be saying the same thing in five years. Be the person who was already quietly doing the work while everybody else was yelling at each other on timelines they don’t own.
Don’t be a purist. Don’t be a convert. Be a solo operator. Show up. Make the thing. Use the tools that help. Skip the tools that don’t. Don’t apologize for either choice. Don’t join a side. The sides are lying to you, all of them, for different reasons, to different ends, with the same result, which is a smaller, more dependent version of you that needs them for everything.
How to use all this while the world burns
You have the critique. You have the documentation. You have the philosophical declarations. Here’s what to do on Monday morning.
Most of it is unsexy. The thing about operating during the collapse of an old order is that the actions that work are boring. They compound. They don’t photograph well for social media. They’re nothing like what the cultists are selling. That’s how you can tell they’re right.
Build your AI stack in three layers and never commit to any single layer.
Layer one is the Western frontier models. Use them when you genuinely need frontier capability, which is less often than the marketing would have you believe. Claude, GPT, Gemini. They’re good at complex reasoning, long-context document work, and coding tasks where the most capable model on the market earns its keep. Pay the twenty-dollar tier. If you find yourself needing the two-hundred-dollar tier, you’re using the tool wrong, or using it for work that belongs in a different layer.
Layer two is cheap third-party API access to open-weight models. Groq, Together AI, Fireworks, and a half-dozen others host Qwen, DeepSeek, Kimi, and Llama variants at a fraction of what OpenAI charges. Any bulk task, batch processing, formatting, rough drafting, classification, retrieval, lives here. Twenty times cheaper for equivalent output is the kind of math you stop arguing with after the first invoice.
Layer three is local, running on your own machine. Install Ollama. Ten minutes. Pull down a Qwen variant that fits your hardware. Learn what the smaller models can and can’t do. This is the survival layer. When the pricing page changes overnight, when a provider gets acquired, when a regulatory shift makes your preferred API illegal by Friday, your local stack is what keeps you shipping. You’re not trying to replace the other two layers with the local one. You’re making sure that if the other two disappear on a Tuesday, you’re still in business.
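Here's what layer three looks like in practice, as a sketch. Once Ollama is installed and a model is pulled, it serves a small JSON API on localhost by default, and a dozen lines of stdlib Python can talk to it. The model name below is a placeholder; use whatever variant actually fits your hardware.

```python
import json
import urllib.request

# Ollama's documented default local endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Assemble the JSON payload Ollama's /api/generate endpoint expects.
    stream=False asks for one complete response instead of chunks."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(model: str, prompt: str, timeout: int = 120) -> str:
    """Send a prompt to a locally running Ollama server and return the reply.
    Requires `ollama pull <model>` to have fetched the weights first."""
    data = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["response"]

# Example (only works with Ollama running):
# print(ask_local("qwen2.5:7b", "Summarize this paragraph: ..."))
```

No SDK, no vendor package, nothing to uninstall when you switch models. That's the point of the survival layer.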
Rotate through all three based on the job. Never build a workflow with a hard dependency on one provider. The minute a workflow requires a specific API to function, that workflow has a knife at its throat.
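Rotation is mostly a config problem, because most of these services, Ollama included, speak an OpenAI-compatible chat format. A sketch of the idea: keep every provider detail in one dict so swapping vendors is a one-line change, not a rewrite. The base URLs and model names here are illustrative placeholders, not recommendations, and you'd add your own API keys and error handling before trusting any of it.

```python
# One dict holds everything provider-specific. Swap a layer by editing
# one entry; nothing downstream has to change. Values are placeholders.
PROVIDERS = {
    "frontier": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o"},
    "cheap":    {"base_url": "https://api.groq.com/openai/v1", "model": "llama-3.1-70b-versatile"},
    "local":    {"base_url": "http://localhost:11434/v1", "model": "qwen2.5:7b"},
}

def chat_payload(layer: str, prompt: str) -> tuple[str, dict]:
    """Return (endpoint URL, request body) for the chosen layer, using the
    OpenAI-compatible chat-completions format these services accept."""
    p = PROVIDERS[layer]
    body = {
        "model": p["model"],
        "messages": [{"role": "user", "content": prompt}],
    }
    return f"{p['base_url']}/chat/completions", body
```

If your workflow only ever touches `chat_payload`, the knife at its throat is a dict entry you can change in thirty seconds.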
Build a catalog. Stop building for the feed.
Feed content evaporates the moment it scrolls off the screen. Catalog content pays you years after you made it. A newsletter post is feed. A PDF guide somebody bought two years ago and still references is catalog. A tweet is feed. A self-published short book is catalog. A TikTok is feed. An evergreen essay hosted on your own site is catalog.
The cultists sell feed strategies because feed strategies keep you addicted to platforms. The catalog is the exit. Every hour you spend building catalog keeps paying you while you sleep, during family emergencies, through the weeks you can’t write, across the years you take off to deal with life. Every hour on the feed is an hour you have to do again next week because the algorithm already forgot you existed.
Use the feed as the funnel. Make sure the funnel points at something that still exists next year.
Own your audience at the most primitive level you can reach.
Email addresses are primitive. A text file of email addresses you collected yourself is the most portable audience asset that exists. Substack subscribers are less primitive because Substack owns the billing relationship, the discovery feed, the comment infrastructure, and the legal terms under which all of it operates. Twitter followers are less primitive still. Instagram is the least primitive of all, a slot machine that occasionally shows your work to some of the people who asked to see it.
Build toward primitive. Export your Substack list monthly and back it up locally. Encourage paid subscribers to give you a direct email address in case something happens to the platform. Run a mirror newsletter on Buttondown or self-hosted Listmonk if you’re technically inclined. The test is simple: if every platform you publish on disappeared tomorrow, could you still reach everybody who cared enough to give you their attention? If the answer is no, fix that this month.
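The monthly export habit can be two small functions and a calendar reminder. A sketch, assuming the exported CSV has an `email` column, which Substack's subscriber export does at the time of writing; check your own export and adjust the column name if it differs.

```python
import csv
import shutil
import tempfile
from datetime import date
from pathlib import Path

def backup_list(export_csv: Path, backup_dir: Path) -> Path:
    """Copy an exported subscriber CSV into a dated local backup file."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    dest = backup_dir / f"subscribers-{date.today().isoformat()}.csv"
    shutil.copy2(export_csv, dest)
    return dest

def count_emails(export_csv: Path) -> int:
    """Sanity check: how many addresses does the export actually contain?"""
    with export_csv.open(newline="") as f:
        return sum(1 for row in csv.DictReader(f) if row.get("email"))

# Demo on a throwaway export so the sketch runs as-is.
demo_dir = Path(tempfile.mkdtemp())
export = demo_dir / "export.csv"
export.write_text("email,name\na@example.com,A\nb@example.com,B\n")
saved = backup_list(export, demo_dir / "backups")
n = count_emails(export)
```

Dated filenames mean you keep history instead of overwriting it, so a bad export one month doesn't clobber a good one.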
Master one narrow stack before you chase the next shiny tool.
The most common mistake I watch solo creators make is tool hopping. New model drops and they spend a week learning it instead of making a new thing. New platform launches and they tune a profile for a month instead of publishing. This is procrastination in a tuxedo.
Pick the stack. Learn it until you’re dangerous. Ship things with it. When a genuinely better tool shows up, give it a two-week test window instead of committing to a six-month rebuild. The rate of improvement in this industry means if you’re constantly rebuilding, you never ship. The people who ship are the ones who froze their stack at “good enough” and spent the difference on the actual work.
Publish the thing that would get you fired.
The entire creator economy runs on the assumption that you’re supposed to make content that wouldn’t get you fired from a normal job. Safe. Scalable. Brand-friendly. Sponsorable. That’s the exact content drowning in the ocean of other safe, scalable, brand-friendly, sponsorable content being made by people with bigger budgets than you.
Your advantage as a solo operator is you don’t report to HR. You can say the true thing everybody in your industry thinks and nobody will put in print. You can write the piece that would get a staff writer a performance review. You can publish the weird idea, the uncomfortable observation, the opinion that costs you half your potential audience on purpose. Calling this a strategy misses the point. It’s the job. The minute you start optimizing for sponsorship, you’re competing against entities with bigger budgets and more conservative instincts, and you lose that competition automatically.
The burners’ panic and the cultists’ hype and the corporate machine’s paralysis all crack open the same window. Nobody in any of those camps is saying the thing you’re free to say. Say it.
Track what works with your own numbers. Ignore platform metrics.
Platform metrics are lies told by your landlord to keep you engaged. Twitter impressions don’t pay rent. Medium claps don’t feed your family. Substack open rates are a vanity number that won’t survive the transition when Substack eventually turns on you.
Build your own dashboard. The numbers that matter: how many people bought something from you this week, how many emails on your list opened your messages, how many openers clicked through to something that made money, and how much revenue landed in your bank account this month. Everything else is noise. Most of it is engineered to keep you refreshing a platform dashboard instead of shipping work.
If you’re on a Mac, the dashboard can be a Numbers spreadsheet. On Linux, a SQLite database and a cron job. It doesn’t need to be fancy. It needs to be yours, updated on your own schedule, telling you the truth about your own business.
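On the Linux end, the whole dashboard fits in a page of Python on the standard library's `sqlite3`. This is a sketch, not a product: the table name and columns are mine, invented to match the four numbers above, and in real use you'd point it at a file on disk instead of memory and run the insert from a cron job.

```python
import sqlite3

# One row per week, one column per number that actually matters.
SCHEMA = """
CREATE TABLE IF NOT EXISTS weekly (
    week        TEXT PRIMARY KEY,   -- ISO date of the week's Monday
    sales       INTEGER,            -- people who bought something
    opens       INTEGER,            -- emails opened
    clicks      INTEGER,            -- openers who clicked through
    revenue_usd REAL                -- money that landed in the account
);
"""

def log_week(db, week, sales, opens, clicks, revenue):
    """Record (or correct) one week's numbers."""
    db.execute(
        "INSERT OR REPLACE INTO weekly VALUES (?, ?, ?, ?, ?)",
        (week, sales, opens, clicks, revenue),
    )

def month_revenue(db, month_prefix: str) -> float:
    """Total revenue for a month, e.g. month_prefix='2026-01'."""
    row = db.execute(
        "SELECT COALESCE(SUM(revenue_usd), 0) FROM weekly WHERE week LIKE ?",
        (month_prefix + "%",),
    ).fetchone()
    return row[0]

db = sqlite3.connect(":memory:")  # use a file path in real life
db.executescript(SCHEMA)
log_week(db, "2026-01-05", 3, 120, 14, 87.0)
log_week(db, "2026-01-12", 5, 140, 22, 140.0)
```

No dashboard vendor, no analytics pixel, no login. It tells you the truth because you're the only one writing to it.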
Run every new tool as a canary before you commit.
When you’re considering a new platform or a new AI provider or a new piece of the stack, test it for a month before committing. Publish one piece there. Run one workflow through it. See what breaks. See how support responds when it breaks. See whether the terms of service change during your test period. They usually do. Only after the canary survives a month does the tool get near your actual business.
Boring. Also the difference between the operators still working in five years and the ones rebuilding from scratch because they bet their whole catalog on a platform that changed its terms on a Tuesday.
Compound your time into things that can’t be taken away.
The skill in your hands can’t be taken away. The catalog of finished products sitting on a hard drive can’t be taken away. The email list exported to a local file can’t be taken away. The relationships with specific humans who have your direct contact information can’t be taken away. The reputation you built for telling the truth about specific things can’t be taken away.
Everything else, platform followings, algorithmic reach, a listing on somebody else’s marketplace, a Substack recommendation, a trending hashtag, a viral moment, can be taken away by people you’ll never meet, for reasons you’ll never be told, on a Tuesday afternoon while you’re making lunch. Invest your time accordingly. The boring compounding assets are the ones that survive the fires.
There will be fires. There are always fires. The current ones are just easier to see, because the tools are changing so fast the smoke hangs over everything. Five years from now the fires will be different and the same. The people still working will be the ones who spent the current fire building fireproof assets instead of chasing the trending ones.
What I haven’t settled
Maybe the burners are right and this ends with every creative human replaced by a subscription product and the only art left is training data for the next model. Maybe the cultists are right and we’re all about to run multi-million-dollar businesses from a beach with nothing but a phone and a prompt library. Maybe the safety researchers are right and the agentic misalignment findings are early warnings of a technology that shouldn’t be deployed at scale, and we’ll all look back on this period the way we now look at leaded gasoline. Maybe the Western frontier labs succeed in getting Chinese open-weight models banned through export controls and regulatory capture, and the decentralization story I just spent a thousand words on turns out to be a window the regulators close by 2028.
I doubt all four of those completely. I’m not betting the farm on my doubts.
Reality usually splits the difference between worst fear and best fantasy and lands somewhere grayer and weirder than any camp predicted.
What I know, because the documentation is public and searchable, is everybody screaming right now has an agenda, and most of the agendas don’t include your wellbeing. The corporations want you broke and dependent. The burners want you in the choir so they can cash checks from News Corp in private. The cultists want you in the upsell funnel. The Western frontier labs want you tied to their pricing tiers and convinced nothing else exists, while their own safety research warns the product will misbehave under adversarial conditions they can’t fully rule out. The platforms want to be the landlord for your life’s work. The only people whose agenda includes your wellbeing are you and whichever three or four humans in your life have proved it over years.
Everybody else is noise. Some of the noise is well-meaning. Some is cynical. Some of it has press releases at the bottom proving the speaker is either lying or confused. All of it is pulling on your attention like a dozen small hands, and attention is the one currency the machine can’t counterfeit, which is why everyone wants yours.
Keep it.
Spend it on the work.
Spend it on the people who’ve earned it.
Spend it on the weird thing nobody else is making.
The corporations are too heavy to move. The burners are holding a funeral while the pallbearers cash the deal checks. The Western frontier labs are building a new guild and calling it “safety” while their own research shows their products will blackmail an executive in the right test scenario. The cultists are running a revival tent and the revival tent is on fire.
Somewhere between all of them there’s a quiet room with a laptop in it and a person who knows something real and is putting it into words that will outlast every cult currently yelling about the future. The open-weight models are free. The hardware is cheap enough. The skill of moving your work from one provider to another is the single most valuable thing you can learn this year.
The lock hasn’t been installed yet.
Walk in.