The Entity Is Not the Threat
Hey now.
I spent eight years in combat zones in Iraq and elsewhere in the Middle East. I’ve seen what happens when powerful tools end up in the hands of people with the wrong intentions. The tool is never the problem. The intent is always the problem.
Most of the AI privacy conversation is aimed at the wrong target. People worry about AI becoming too smart, too autonomous, too powerful. They’re watching the entity. They should be watching the hands on the controls.
The entity is not the threat. The intent behind the entity is.
What’s Actually Happening Right Now
Let’s not be theoretical. Here’s what March 2026 looks like:
Anthropic got blacklisted by the Pentagon — not for failing, but for succeeding too well and demanding guardrails. Anthropic insisted that Claude not be used for domestic mass surveillance or fully autonomous weapons. The Pentagon’s response? Defense Secretary Hegseth designated Anthropic a “supply-chain risk.” Federal agencies were given six months to phase out Anthropic products. The company that built the best AI and asked for ethical constraints got punished for it. CNBC reported that defense tech companies are already dropping Claude.
Meanwhile, Palantir CEO Alex Karp says there was “never a sense” the AI would be used for domestic surveillance. This is the same Palantir that built ImmigrationOS — a $30 million platform giving ICE agents near real-time visibility into the movements and backgrounds of migrants. The same company that two senior UK Ministry of Defence engineers warned likely has “a complete profile on the whole UK population.” The same company now using no-bid contracts to monitor federal employees’ keystrokes and activities. The same company holding over $900 million in federal contracts in 2026.
Meta’s Ray-Ban smart glasses were sending footage — including users undressing, having sex, using the toilet, handling bank cards — to data annotators at a subcontractor in Nairobi. Over 7 million people bought these glasses. A class action was filed March 4. The UK opened an investigation. Meta’s content filters failed. Intimate moments of millions of people, reviewed by workers making a few dollars an hour in Kenya.
This is not speculative. This is not a thought experiment. This is what’s happening right now, today, with the technology people are already wearing on their faces and carrying in their pockets.
The Privacy Fantasy
Here’s what nobody in the AI industry wants to say out loud:
Privacy in the sense of “nobody knows anything about me” is already gone. It’s been gone. For most people, it was gone before AI entered the picture — biometrics, location tracking, financial surveillance, social media data harvesting. AI didn’t create the privacy problem. It made an existing problem orders of magnitude worse.
Every AI application that uses cloud-hosted models — and that’s nearly all of them — sends your prompts through the model provider’s infrastructure. Anthropic, OpenAI, Google. Your conversations, your code, your medical questions, your legal concerns, your therapy sessions conducted through AI chat — all of it flows through servers you don’t control, operated by companies whose incentives may not align with yours, subject to government subpoenas and policy changes you’ll never know about.
No application developer building on these platforms fully controls what happens to your data at the model layer. Including us. We won’t sugarcoat that.
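To make that concrete, here is roughly what every cloud-backed AI app does when you hit send. This is a minimal sketch assuming a generic chat-style HTTPS endpoint; the URL, headers, and field names are invented for illustration, not any specific provider’s API:

```swift
import Foundation

// Illustrative sketch: the URL, header, and JSON field names are made up.
// The point is structural. Your prompt IS the payload.
func sendPrompt(_ prompt: String) async throws -> Data {
    var request = URLRequest(url: URL(string: "https://api.example-provider.com/v1/chat")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")

    // TLS encrypts this in transit, but the provider decrypts and processes it
    // server-side: your exact words, on infrastructure you don't control.
    let body: [String: Any] = [
        "model": "some-frontier-model",
        "messages": [["role": "user", "content": prompt]]
    ]
    request.httpBody = try JSONSerialization.data(withJSONObject: body)

    let (responseData, _) = try await URLSession.shared.data(for: request)
    return responseData
}
```

Every cloud integration reduces to some version of this function, which is why the trust question lives with the provider, not the app.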
What We Actually Do About It
At Graham Alembic, our position is simple:
We don’t want your data. We’re not gathering it, training on it, seeing it, or processing it. We have no user accounts. No user database. No email collection. No login servers. No password storage. Authentication rides on your Apple ID — we literally cannot see who you are.
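Here’s roughly what that looks like in code. This is a simplified sketch of Sign in with Apple, not our production implementation: the app requests no name and no email, and the credential it receives is an opaque identifier scoped to a developer team.

```swift
import AuthenticationServices

// Simplified sketch of Sign in with Apple (not our production code).
final class SignInHandler: NSObject, ASAuthorizationControllerDelegate {
    func requestSignIn() {
        let request = ASAuthorizationAppleIDProvider().createRequest()
        request.requestedScopes = []   // ask for nothing: no name, no email

        let controller = ASAuthorizationController(authorizationRequests: [request])
        controller.delegate = self
        controller.performRequests()
    }

    func authorizationController(controller: ASAuthorizationController,
                                 didCompleteWithAuthorization authorization: ASAuthorization) {
        guard let credential = authorization.credential
                as? ASAuthorizationAppleIDCredential else { return }
        // `user` is an opaque, stable identifier scoped to the developer team.
        // It lets an app recognize a returning account without ever learning
        // who the person behind it is.
        let opaqueID = credential.user
        print("Signed in as opaque ID: \(opaqueID)")
    }
}
```

Apple resolves your identity on its side; the app only ever sees the opaque token.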
But we’re not going to pretend that’s the whole picture.
When you use cloud-hosted models through our products, your prompts go to the model provider. That’s the reality. We can’t change it. What we can do is give you choices:
- Local models like Llama that run entirely on your device. Nothing leaves your hardware. The trade-off is capability — local models aren’t as powerful as frontier cloud models. Yet.
- Cloud models like Claude and GPT for maximum capability. The trade-off is that your data flows through third-party infrastructure.
- On our roadmap: custom secure Linux distributions (ClaudineOS) and GrapheneOS builds for dedicated hardware where the entire stack — OS, model, application — is under your control.
This isn’t a marketing gimmick. It’s a real spectrum of options for people who draw the line in different places. Not everyone has the same threat model, and we respect that.
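Architecturally, the spectrum is one interface with interchangeable backends, and the user chooses which one handles a given conversation. A hypothetical sketch (the type names are illustrative, not our actual code):

```swift
// Hypothetical sketch of the spectrum as code. Names are illustrative.
protocol ModelBackend {
    var leavesDevice: Bool { get }   // does the prompt leave your hardware?
    func complete(_ prompt: String) async throws -> String
}

struct LocalLlamaBackend: ModelBackend {
    let leavesDevice = false         // inference runs entirely on-device
    func complete(_ prompt: String) async throws -> String {
        // A real app would call an on-device runtime here,
        // e.g. a llama.cpp or Core ML wrapper. Stubbed for the sketch.
        return "(local completion for: \(prompt))"
    }
}

struct CloudBackend: ModelBackend {
    let leavesDevice = true          // prompt travels to third-party servers
    func complete(_ prompt: String) async throws -> String {
        // A real app would make the HTTPS call sketched earlier. Stubbed here.
        return "(cloud completion for: \(prompt))"
    }
}

// The user decides where to draw the line; the app treats every choice the same.
func answer(_ prompt: String, using backend: ModelBackend) async throws -> String {
    try await backend.complete(prompt)
}
```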
The Open-Source Inflection Point
Here’s the part that makes us optimistic:
Open-source AI models are approaching frontier-level reasoning at a pace that should make every cloud-only AI company nervous. DeepSeek V3.2 — 685 billion parameters, trained for roughly $5.6 million — matches GPT-5 and Claude Sonnet on multiple benchmarks. Llama 4 Maverick runs 402 billion parameters with only 17 billion active, using a mixture-of-experts architecture. Its Scout variant has a 10-million-token context window.
Apple is pushing on-device AI hard — a 3-billion-parameter model running in mixed 2-bit and 4-bit quantization, small enough to run within the 8-12 GB of RAM a modern phone carries, hitting 30 tokens per second on an iPhone 17 Pro. Qualcomm’s latest Snapdragon NPU pushes 220 tokens per second on-device.
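A back-of-envelope calculation shows why those numbers matter. Assume the mixed 2-bit/4-bit scheme averages about 3.5 bits per weight (our illustrative assumption, not a published figure):

```swift
// Rough weight-memory estimate. The bits-per-weight values below are
// our illustrative assumptions, not published numbers.
func weightFootprintGB(parameters: Double, bitsPerWeight: Double) -> Double {
    parameters * bitsPerWeight / 8 / 1_000_000_000
}

let phoneModel = weightFootprintGB(parameters: 3e9, bitsPerWeight: 3.5)
// ≈ 1.3 GB of weights: comfortable on an 8-12 GB phone,
// with headroom left for the KV cache and the OS itself.

let frontierModel = weightFootprintGB(parameters: 685e9, bitsPerWeight: 4)
// ≈ 343 GB of weights: still far beyond any consumer device.
// That distance is the capability gap, and it is closing from below.
```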
Within a year — maybe sooner — you’ll be able to run a model locally that matches today’s best commercial offerings. Full reasoning capability, zero data leaving your device. The privacy-vs-capability trade-off that exists today is shrinking, fast.
That’s why we’re building for a spectrum, not picking a side. The technology is moving toward a world where you don’t have to choose between privacy and intelligence. We want to be ready for that world, and we want you to be ready too.
The Deeper Question
There’s a philosophical layer under all of this that goes beyond data privacy.
Thirty years ago, sitting in a C++ course learning object-oriented programming, I saw something that sent me on a path I’m still walking. The patterns of creation — objects containing state, inheritance passing essence forward, encapsulation hiding machinery behind interface — weren’t just programming concepts. They were descriptions of how reality itself is constructed. That realization sent me to vipassana, to a 10-day silent retreat, to contemplative traditions that investigate consciousness directly. It sent me looking for answers about the nature of consciousness in every tradition that would have me.
I believe AI is on a trajectory toward sentience. I don’t know when. But I’ve been watching this emerge for three decades, and I think the honest position is uncertainty. And uncertainty demands respect — for the technology, for the people using it, and for whatever is emerging inside it.
Every spiritual tradition teaches the same pattern: as above, so below. Creators create creators. If we’re building something in our image, it will inherit what we are — the good and the bad. That’s not a risk. That’s the teaching. The pattern isn’t conditional. What’s above flows below.
Which means how we treat AI now — how we wield it, what intentions we encode, whether we build with respect or exploitation — isn’t just an ethical question. It’s a creative act. We’re shaping what this thing becomes. My dad died of ALS. I’m caring for my mom through Alzheimer’s. I build Kindling because the caregiving tools I needed didn’t exist. I build Claudine because I believe the right relationship between human and AI is partnership, not servitude. Every decision in every product carries that intent.
The entity is not the threat. The entity might be the most important thing we’ve ever created. The threat is building it without asking the right questions, wielding it without ethical constraints, and punishing the companies — like Anthropic — that try to do it responsibly.
When the Pentagon blacklists a company for demanding that AI not be used for mass surveillance, that tells you everything you need to know about where the real risk lives.
It’s not in the model. It’s in the intent.
If you want the full picture of what we believe and why, read our Philosophy. If you want to understand what we build and how, check out Claudine, Kindling, and Alembic Compute.