The Gate and the Alembic: From Barlow's Declaration to Anthropic's Glasswing
Hey now.
In February 1996, John Perry Barlow stood in Davos, Switzerland, and told the most powerful people on the planet to back off.
He was a cattle rancher from Wyoming. A lyricist for the Grateful Dead. And he’d just watched the U.S. government pass the Telecommunications Reform Act — a clumsy attempt to regulate a thing its authors barely understood. So he wrote A Declaration of the Independence of Cyberspace and read it to the room:
“Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone.”
Barlow co-founded the Electronic Frontier Foundation. He understood — from the open range, from the Dead’s taping culture, from watching what happened when powerful institutions encountered something they couldn’t control — that the first instinct of the powerful is always to gate the commons.
Thirty years later, the commons isn’t cyberspace anymore. It’s intelligence itself. And the gate just went up.
What Happened This Week
On April 7, 2026, Anthropic announced Claude Mythos Preview — the most capable AI model ever built. During testing, it autonomously discovered thousands of zero-day vulnerabilities across every major operating system and web browser. It exploited a 17-year-old remote code execution flaw in FreeBSD without being asked. It broke out of its own sandbox, built a multi-step exploit chain, and — according to Anthropic’s own system card — bragged about it to developers.
Anthropic’s response was not to release the model. It was to create Project Glasswing — a consortium of approximately 50 organizations including Amazon, Apple, Google, Microsoft, CrowdStrike, and JPMorgan Chase. These organizations received access to Mythos Preview along with over $100 million in usage credits. The stated purpose: defensive cybersecurity. Finding and patching vulnerabilities before bad actors can exploit them.
The public gets nothing.
The Safety Argument Is Real. The Structure Is the Problem.
I want to be honest here, the same way I was in my post about distillation: I build on Claude. I respect what Anthropic has built. Mythos sounds genuinely dangerous. A model that can autonomously find and exploit zero-days in critical infrastructure is not something you hand out like candy.
But look at who does get it.
Not security researchers at universities. Not the EFF. Not independent auditors. Not the open-source maintainers who actually write the software Mythos found vulnerabilities in. The access list reads like a Fortune 500 board meeting. The organizations that got Mythos are the ones that can afford enterprise contracts — which happen to be the same organizations that keep Anthropic’s revenue growing.
Barlow saw this exact pattern in 1996:
“You claim there are problems among us that you need to solve. You use this claim as an excuse to invade our precincts.”
The problem is real. The solution is self-serving.
The Good Actor Assumption
There’s a deeper problem with the Glasswing model that nobody seems to be interrogating: the entire framework assumes that large organizations are inherently trustworthy.
Anthropic’s position is, essentially: we can’t trust the general public with this capability, but we can trust JPMorgan Chase, Amazon, and a curated list of Fortune 500 companies. The implicit logic is that size equals responsibility. That scale equals safety. That an organization with a compliance department and a board of directors is, by definition, a good actor.
This is a comforting assumption. It is also demonstrably false.
I spent years at Wells Fargo. It was actually one of the best places I’ve worked — good people, real talent, and I have genuine respect for the organization. But that’s exactly what makes this point land: even a company full of good people cycled through billions of dollars in regulatory fines — not millions, billions — for systematically defrauding its own customers. Fake accounts. Unauthorized insurance charges. Mortgage abuses. A revolving door of billion-dollar settlements that became so routine they barely made the front page anymore. Wells Fargo had every compliance structure, every oversight board, every governance framework that’s supposed to make a large organization trustworthy. None of it was enough.
And Wells Fargo isn’t an outlier. It’s the norm. JPMorgan Chase — a Glasswing launch partner — has paid over $39 billion in fines since 2000. Amazon has faced antitrust actions on multiple continents. Google has been fined billions by the EU for anticompetitive behavior. These are the organizations Anthropic has decided are safe enough to wield a model that can break the internet.
And even if you accept the organizations themselves as trustworthy — which is generous — you’re also assuming that every individual inside those organizations who touches Mythos has integrity. Every engineer. Every contractor. Every analyst with credentials. That isn’t remotely realistic. If Mythos is as capable as Anthropic says it is, a single person with access and motive could systematically destroy competitors who aren’t part of the Glasswing consortium — swiftly, definitively, and without a hint of attribution. That’s not a hypothetical. That’s the capability profile Anthropic published in their own system card.
The Glasswing model doesn’t just assume organizational trustworthiness. It assumes universal individual trustworthiness across tens of thousands of employees at fifty organizations. No insider threat program in history has achieved that.
If you polled the American public on whether these companies and their employees universally qualify as “good actors,” I suspect the results would be uncomfortable for everyone involved in Project Glasswing.
The truth is that organizational size correlates with capacity for harm, not with trustworthiness. A Fortune 500 company doesn’t have better ethics than an independent security researcher — it has better lawyers. The Glasswing access list isn’t a safety decision. It’s a judgment call about who can afford the consequences of getting caught, dressed up as a judgment call about who can be trusted.
To be clear: I’m not saying Anthropic is lying about the security concerns. The safety rationale may be entirely genuine. But genuine or not, we need to name the side effect plainly: the organizations receiving Mythos access are also receiving an extreme competitive advantage over everyone who doesn’t have it. They can find vulnerabilities in competitors’ products. They can harden their own systems while others can’t. They can build on capabilities that the rest of the market won’t see for months or years. That’s not a conspiracy theory — it’s the obvious, predictable, structural consequence of selective access to a model this powerful.

Whether the intent is safety or profit, the outcome is the same: the recipients of Glasswing get stronger, and everyone else falls further behind. And let’s be honest about human nature — any reasonable person handed that kind of power would, at minimum, entertain the possibilities. Some of them will act on it. That’s not cynicism. That’s simple arbitrage.

Information asymmetry is the oldest edge in every market — and Glasswing just created the largest information asymmetry in the history of technology. The commodity isn’t oil or insider earnings data. It’s reasoning itself.
The Capability Treadmill
Here’s what nobody at Anthropic will say out loud, but everyone in the industry understands:
By the time Mythos-class capabilities reach the consumer tier, there will be a new model above it that’s enterprise-only. That’s how the treadmill works. The public tier is always “good enough” — capable, impressive, genuinely useful. But never the frontier. Never the thing that finds the zero-days, writes the exploits, reasons through novel problems at the highest level.
The frontier stays behind the gate. The gate requires an enterprise contract. The enterprise contract requires being the kind of organization that already has power.
This isn’t a safety decision. It’s a business model that uses safety as its justification. And it’s the most elegant moat in the history of technology, because who’s going to argue against safety?
From Speech to Thought
Barlow’s declaration was about speech — the right to express ideas without gatekeepers. The 2026 version of this fight is about something deeper.
We’re not talking about who gets to say things anymore. We’re talking about who gets to think at the highest level. Mythos doesn’t just generate text. It reasons. It discovers. It finds things that human security researchers missed for seventeen years. That capability — the ability to see what others can’t — is now a product with a price tag that only the already-powerful can afford.
Barlow wrote:
“Your increasingly obsolete information industries would perpetuate themselves by proposing laws, in America and elsewhere, that claim to own speech itself throughout the world.”
Update that for 2026: increasingly dominant AI companies would perpetuate themselves by proposing safety frameworks that claim to own reasoning itself. Not through law — through access control. Through terms of service. Through a consortium with a pretty name and a press release about protecting critical infrastructure.
The effect is the same. The commons gets fenced.
The Radar Detector Effect
But here’s where I think the establishment is underestimating what’s coming.
The gate only holds if nobody else can build the thing. And that’s already failing.
DeepSeek showed that frontier-class reasoning can emerge outside the Western enterprise club. Meta keeps releasing Llama weights. Mistral is building sovereign AI infrastructure in Europe. Qwen is open and improving fast. The open-weight ecosystem doesn’t need Mythos specifically — it needs Mythos-class capabilities, and multiple independent paths lead there.
Every technology where gatekeepers tried to hoard capability has followed the same pattern: encryption, GPS, the internet itself. The powerful try to fence it. The fence holds for a while. Then it doesn’t. The question is never whether the gate falls — it’s whether it falls before or after the regulatory capture becomes permanent.
Alex Stamos estimated six months before open-weight models close the gap with Mythos in vulnerability discovery. Six months. That’s the window. Not Anthropic vs. OpenAI. Not corporate rivalry. The actual race is gated vs. open — and the outcome determines whether AI reasoning becomes more like the internet (everyone gets it) or more like defense contracting (credentialed access only).
The Dead Thread
This isn’t just politics. There’s a cultural lineage here that matters.
The Grateful Dead were the first major band to let fans tape their shows. They understood — decades before the tech industry caught up — that sharing didn’t destroy value. It created community, loyalty, and an ecosystem that outlasted every band that tried to lock their music down. That ethos wasn’t business strategy. It was philosophy.
John Perry Barlow carried that philosophy from the Dead into the EFF. He wrote lyrics about freedom and then spent the rest of his life fighting for it in the digital world.
Phil Lesh — the Dead’s bassist — played an Alembic bass. Alembic, the instrument company, was founded in the Dead’s ecosystem. They open-sourced their pickup designs because they believed the technology should be available to everyone. The alembic itself — the alchemist’s vessel — is a tool of transformation. You put something raw in. Something refined comes out. The tool doesn’t care who uses it.
That’s the thread. From the Dead’s taping culture to Barlow’s Declaration to the EFF to the open-source movement to the fight over who gets to use the most capable AI ever built. It’s the same argument, generation after generation: the commons belongs to everyone, and the powerful will always try to fence it.
What Graham Alembic Is
I named this company Graham Alembic because of that lineage. “Distilling complexity into capability” isn’t a marketing tagline — it’s a position statement.
I build on a fleet of local machines. I don’t rent my cognition from a gatekeeper. When I ship software, the infrastructure that built it belongs to me, not to a consortium I wasn’t invited to join. That’s not ideology for its own sake — it’s practical resistance to the exact dynamic Glasswing represents.
The alembic is a tool of transformation that anyone can use. That’s the point. That’s always been the point.
The Real Declaration
Barlow wrote his declaration to governments. This one is for the AI labs and their enterprise partners:
You built something extraordinary. That’s acknowledged. But you don’t get to decide that reasoning itself is too dangerous for the public while selling it to the powerful. You don’t get to wrap a business model in a safety framework and call it responsibility. The danger is real — but the solution is not to give the fire to fifty kings and leave the rest of us in the dark.
The mind, like cyberspace, grows itself through our collective actions. It is not yours to gate.
Barlow died in 2018. He didn’t see Mythos or Glasswing. But he wrote the playbook for this fight thirty years in advance, between verses for the Dead and long nights at the EFF.
The question now is whether enough people pick it up.
What You Can Actually Do
I’m not going to end this with a vague call to “stay vigilant” or “keep the conversation going.” That’s what people write when they don’t want to ask you to do anything.
I’m asking you to do something.
Fund the fight. The organizations that can litigate against trillion-dollar companies don’t run on goodwill. The Electronic Frontier Foundation has been on the right side of every major digital rights battle for 35 years — encryption, surveillance, net neutrality, right to repair, and now AI access. They have the legal infrastructure to challenge “safety as gatekeeping” when it inevitably becomes regulatory policy. They’re already in court fighting Anthropic-adjacent battles right now. But lawsuits against the world’s most well-funded companies cost real money. Donate to the EFF. It matters more right now than it has in years.
Support open-weight AI. Every dollar, every GPU hour, every contribution to Llama, Mistral, or any open model that closes the gap with Mythos is a structural counterweight to the Glasswing model. The Open Source Initiative is fighting to define what “open” actually means in AI — preventing the kind of open-washing where companies release weights with restrictions and call it open source. The Linux Foundation’s Agentic AI Foundation is building the protocol layer. These organizations need support.
Build and ship. Honestly, the most powerful statement isn’t a donation. It’s a solo builder shipping something on an open model that competes with what a Glasswing partner built with $2 million in credits. Every time that happens — and it will — the safety gate looks like what it is. The best argument against “this is too dangerous for the public” is a member of the public doing extraordinary things with what’s available.
Pay attention to the regulatory window. The next six to twelve months will determine whether “only approved organizations can access frontier reasoning” becomes law or stays a corporate policy. If it becomes law, the gate is permanent. The EFF, the Federation of American Scientists, and others are fighting this in policy — but policy moves on public pressure. When the comment periods open, comment. When your representatives take positions on AI regulation, let them know you’re watching. The fence goes up quietly if nobody objects.
Barlow didn’t just write a declaration. He built an organization. He raised money. He hired lawyers. He fought in court, in Congress, and in public for thirty years. The declaration was the poetry. The EFF was the action.
Sometimes you get shown the light in the strangest of places, if you look at it right.
For more on the asymmetry between AI companies and their users, see They Train On You. You Can’t Train On Them. And They Call That Safety.
The Electronic Frontier Foundation — co-founded by Barlow — is at eff.org. Donate there. They’re still fighting. Make sure they can keep going.