I cancelled my OpenAI subscription last month.
No drama, no manifesto. I’d been running both ChatGPT and Claude side by side for months, and Claude was simply giving me better results: more accurate code, more thoughtful analysis, fewer hallucinations. As a software engineer and enterprise architect, the quality difference had become hard to ignore. So I made the switch. Paid for Claude Max. Deleted my OpenAI billing. Moved on.
Then last Friday happened, and I went from feeling like I’d made a smart product decision to feeling like I’d accidentally picked the right side of history.
What Happened
If you haven’t been following the story, here’s the short version: the Trump administration banned Anthropic (the company that makes Claude) from all federal government use and designated it a “supply chain risk to national security.” The “Department of War” had given Anthropic a deadline to agree to let the military use Claude for “all lawful purposes” with no restrictions, and Anthropic refused.
Their red lines were two things, and only two:
- No mass surveillance of American citizens.
- No fully autonomous weapons systems.
That’s it. Anthropic was on board with 98-99% of military use cases. They’d been the first AI company to deploy on the Pentagon’s classified networks. Claude was used in the operation to capture Nicolás Maduro in Venezuela. This is not a company that’s squeamish about national defense.
But on those two points, they wouldn’t budge. And for that, the President called them “leftwing nut jobs” on Truth Social and ordered every federal agency to immediately stop using their technology.
An Unprecedented Punishment
The “supply chain risk” designation has never been applied to an American company. It’s a tool reserved for foreign adversaries, companies like Huawei that are suspected of being extensions of hostile governments. The Pentagon turned it against a San Francisco AI startup because that startup drew two lines in the sand.
Secretary of War Pete Hegseth went further, declaring that no contractor, supplier, or partner doing business with the U.S. military could conduct any commercial activity with Anthropic. If that interpretation holds (and Anthropic disputes that it can), the ripple effects across the defense industry and Big Tech would be enormous.
Why the Red Lines Matter
In January 2025, more than a year before any of this happened, the Vatican released Antiqua et Nova (“things ancient and new”), its Note on the Relationship Between Artificial Intelligence and Human Intelligence. The Church doesn’t dismiss AI; it affirms technological progress as part of humanity’s God-given vocation. But Antiqua et Nova identified, with remarkable specificity, the same two dangers that Anthropic would later draw its red lines around: mass surveillance and autonomous weapons.
On surveillance, Antiqua et Nova (paragraph 29) reminds us that a person’s worth does not depend on skills, cognitive achievements, or individual success, but on inherent dignity. That principle is under direct threat. The government can already legally purchase data collected by private firms: purchasing history, location data, social media activity. What it couldn’t do before AI was analyze all of it, across billions of data points, in real time. This kind of mass surveillance of U.S. citizens simply wasn’t possible before, so the law never forbade it, and the law hasn’t caught up. But legality is not the same as liberty. Anthropic’s demand was simple: don’t use our tools to surveil the American people, even if the law hasn’t yet said you can’t. The scale changes the nature of the thing. Systems that reduce persons to patterns in a dataset are an assault on the very dignity Antiqua et Nova calls us to protect.
On autonomous weapons, the argument is even more straightforward: if these systems can’t consistently tell you how many R’s are in “strawberry,” they shouldn’t be deciding who lives and who dies on a battlefield. The basic unpredictability that every AI user encounters daily becomes a lot less amusing when the output isn’t a bad essay but a missile strike. Human soldiers operate within a chain of accountability that assumes human judgment. Fully autonomous weapons eliminate that chain entirely. Antiqua et Nova (paragraph 100) calls lethal autonomous systems a “cause for grave ethical concern,” and Pope Francis has called for their ban outright. The document’s core principle is unambiguous: no machine should ever choose to take the life of a human being.
The OpenAI Deal
Hours after the administration shut Anthropic out, OpenAI announced it had struck a deal with the Pentagon to deploy its models on classified networks.
Sam Altman publicly stated that his company shares the exact same two red lines as Anthropic: prohibitions on mass domestic surveillance and autonomous weapons. OpenAI’s own blog post about the deal says as much. So the Pentagon punished one company for insisting on these restrictions, then signed a deal with another company that claims the same restrictions. The difference, it appears, is that Anthropic was willing to enforce them.
Why This Matters Beyond Tech
A Silicon Valley AI company and a 2,000-year-old Church that has been monitoring AI development for four decades arrived at the same two conclusions independently. Anthropic’s position isn’t left-wing or right-wing. It’s a straightforward assertion of human dignity, the same one the Church has been making for centuries, now applied to the most powerful technology humanity has ever built.
If you disagree with Anthropic’s position, I’d genuinely like to know: which of the two are you for? Mass surveillance of American citizens? Or weapons that fire without human involvement? Because those are the only two things they said no to.
Back to the Product
None of this changes the reason I originally switched. Claude is, in my day-to-day work as an engineer and architect, the better product. The code it generates is cleaner. The reasoning is more transparent. It hallucinates less.
But I’d be lying if I said the events of last Friday don’t matter to me. I spend money on a lot of products, and I rarely get to feel like my subscription dollars are going to a company that would turn down a $200 million government contract on principle. That’s not why I switched, but it’s why I’m staying.
The views expressed here are entirely my own and are not associated with any company or organization with which I am affiliated.
