Start with your kid.

Not a statistic. Your actual kid.

Think about the last time you tried to have dinner with them and they were somewhere else — eyes down, thumb moving, face doing that particular thing where you can tell they are neither happy nor able to stop. Think about the pediatrician appointments that now include questions about screen time and anxiety. Think about the sleep. Think about what they are like at fourteen compared to what you were like at fourteen, and be honest with yourself about the difference.

This is not a coincidence. This is an engineering achievement.

A small number of people — a very small number, with a very specific power structure — decided that the way to build the most valuable company in human history was to make the product as difficult to put down as possible. Not useful. Not joyful. Not enriching. Difficult to put down. There is a difference, and they knew the difference, and they built toward the more profitable one.

Mark Zuckerberg holds 61% of Meta's voting power. One person. He watched TikTok's algorithm generate unprecedented engagement numbers — by serving an endless, frictionless, optimized stream of content designed not to delight you but to keep you unable to stop. He replicated it on Instagram and Facebook. He did this with full knowledge of the internal research showing what it was doing to teenagers. The documents exist. The researchers who wrote them still exist. The research showing that the product makes adolescent girls feel worse about themselves, showing the correlations with depression and anxiety and disordered eating — that research was done inside Meta, by Meta's own employees, and then set aside.

There is no Instagram constitution. There is no process by which the users of these products — or their parents, or their elected representatives, or anyone at all — can hold that 61% accountable. The voting structure was designed specifically to prevent it. The terms of service were written by lawyers whose job was to ensure it.

One man. One algorithm. Four billion people.

That is not a technology company. That is a power structure with a phone app on top.


Now stay with it for a second, because there is a second wound and it is just as deep, and most people have not yet named it clearly.

For thirty years, the world's developers, engineers, researchers, and curious people built something extraordinary in public. They answered each other's questions on Stack Overflow — millions of detailed, careful answers to hard technical problems, given freely, building a searchable record of human technical knowledge unlike anything that had ever existed. They pushed code to GitHub. They wrote documentation and tutorials and blog posts. They argued in forums and mailing lists and comment threads. They built open-source tools and gave them away. They created the Linux kernel. The Python ecosystem. React. Postgres. TensorFlow.

They did this for the commons. For each other. For the students who would come later. For the civilization-level project of making knowledge free and shareable. The ethos was explicit: this is ours, together. We build it together and it belongs to everyone.

Buried in terms of service that nobody read — that nobody could have understood the implications of, because the implications did not yet exist — was a clause that said the platforms could use this content for "improving their services."

Then AI arrived.

And "improving their services" turned out to mean: training models on everything you ever wrote, everything you ever contributed, every problem you ever solved and shared. Training models that can now do what you do. Training models that are being sold to your employer as a reason to hire fewer people like you. Training models that are generating billions of dollars in revenue and tens of billions in investment for a small number of companies you have no relationship with, no ownership in, and no recourse against.

You built the training data. You did not consent to it becoming someone else's private property. You did not imagine that the digital commons you were contributing to would be enclosed — that the intellectual labor of an entire generation of people who believed in open knowledge would be captured and monetized by labs that now raise more in a single funding round than most countries spend on public education.

OpenAI's last round: $40 billion. Anthropic: $10 billion. xAI: $12 billion. In a single year, AI labs raised more capital than the GDP of many sovereign nations. And at the foundation of all of it, uncompensated, unacknowledged, and mostly unaware, are the millions of people who wrote the code and answered the questions and built the commons that made it possible.

We are not saying this is theft in a legal sense. We are saying it is an arrangement that should not be allowed to stand as the permanent structure of the AI age. We are saying the people who created the value should not be permanently separated from the benefits it produces.


Here is what neither wound has yet found: a practical answer.

Because you cannot fix either of these things with outrage alone. You cannot fix them with regulation alone, though regulation matters. You cannot fix them by deleting your apps, though your mental health may benefit. You cannot fix them by waiting for the companies that built these systems to fix them, because the systems are working exactly as intended.

You fix them by building something different, with different rules, before the window closes.

Here is the mathematics of that possibility.

It costs less than one dollar per user per year to run a social network at scale. Not what Meta spends — what it costs. Strip away the advertising surveillance infrastructure, the lobbying operations, the metaverse sidequest, the tens of thousands of employees who exist to grow the extraction machine rather than serve users — and you are left with a surprisingly small, efficient technical operation. Infrastructure at genuine scale, built honestly, is cheap. Modern tooling means a team of fifty people can maintain what once required thousands.

Meta collects fifty dollars per user per year globally. Two hundred and seventy dollars from each American.

The gap between one dollar and fifty dollars is not the cost of running a platform. It is the cost of the extraction. It is what leaves your community and becomes Zuckerberg's voting shares, Andreessen's returns, and the endless budget for the next feature designed to keep your teenager's thumb moving.
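To make the arithmetic concrete, here is a back-of-envelope sketch in Python. The cost and revenue figures are the ones quoted above, taken as rough inputs rather than audited numbers, and the membership price is a purely illustrative assumption.

```python
# Back-of-envelope economics for a community-funded platform.
# All figures are illustrative assumptions drawn from the text above,
# not audited numbers.

users = 100_000_000            # hypothetical community size
infra_cost_per_user = 1.00     # claimed honest cost to run the service, per user per year
meta_arpu_global = 50.00       # what Meta collects per user per year, globally
meta_arpu_us = 270.00          # what Meta collects per US user per year

extraction_gap = meta_arpu_global - infra_cost_per_user
print(f"Value extracted per user per year: ${extraction_gap:.2f}")

# A member-funded alternative: a small annual contribution covers
# infrastructure many times over, with no advertiser in the loop.
member_fee = 5.00              # hypothetical annual membership price
surplus = users * (member_fee - infra_cost_per_user)
print(f"Annual surplus at {users:,} members: ${surplus:,.0f}")
```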

An Our One product eliminates that gap. Not by being poorer or smaller or less capable. By being structured differently. User-owned. Constitution-first. With explicit, binding prohibitions against the specific mechanisms that make the existing platforms harmful — no engagement optimization that trades user wellbeing for time-on-screen, no algorithmic manipulation in service of advertisers, no governance by permanent private vote.

The platform does not get to decide that your kid's attention is the product. The constitution says so.


Now for the AI question, because it is the most important question, and it is still open.

The labs that built on the public commons are not going away. They have the capital, the compute, and the talent, and they are moving fast. Competing with them at the frontier — building the next GPT-level model from scratch — is not the leverage point. A hundred million people cannot outspend OpenAI on GPU clusters.

But a hundred million people can do something no amount of money can buy.

They can provide real expertise.

The quality of an AI model is not determined only by the scale of its training data. It depends just as critically on the quality of human feedback during training: on the people who rate outputs, correct errors, demonstrate what good looks like, and teach the model what matters in the real world. This process, called reinforcement learning from human feedback, is currently done largely by outsourced workers in the Philippines and Kenya, paid a few dollars an hour to label data for models they will never benefit from.
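For the technically minded, here is a minimal sketch of what one unit of that feedback looks like as data. The field names and the example are illustrative assumptions, not any lab's actual schema.

```python
# A minimal sketch of expert feedback as training data.
# Field names and structure are illustrative, not any lab's actual schema.

from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str          # the question or task the model was given
    chosen: str          # the answer the rater judged better
    rejected: str        # the answer the rater judged worse
    rater_domain: str    # e.g. "medicine", "contract law", "backend engineering"

# Preference pairs like this are what reward-model training consumes:
# the model learns to score `chosen` above `rejected`.
example = PreferencePair(
    prompt="Explain the trade-offs of optimistic vs. pessimistic locking.",
    chosen="Optimistic locking assumes conflicts are rare and checks at commit time...",
    rejected="Just always use a global lock; it is simpler.",
    rater_domain="backend engineering",
)

dataset = [example]  # in practice, millions of these, contributed by real experts
```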

What if it were done by actual professionals? By the engineers, doctors, lawyers, teachers, designers, scientists, and craftspeople who have real expertise in their domains? By the same people who built the public knowledge commons in the first place?

An Our One AI is not a fantasy. Open-weight models exist today, Llama and Mistral among them, that can be taken, fine-tuned, and improved by a community with real domain knowledge. The gap between those models and proprietary frontier models is narrowing fast. DeepSeek trained a competitive model for a few million dollars. The trajectory is clear: serious AI capability is becoming accessible to organizations that do not have trillion-dollar valuations.
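As a rough illustration of how low the barrier has become, here is a minimal fine-tuning sketch using the open-source transformers and peft libraries. The model name and hyperparameters are illustrative assumptions; any permissively licensed open-weight checkpoint would do.

```python
# A minimal sketch of community fine-tuning on an open-weight model,
# using the Hugging Face transformers and peft libraries.
# Model name, hyperparameters, and dataset are illustrative assumptions.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"   # any open-weight model with a permissive license
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA adapters let a community fine-tune without frontier-scale hardware:
# only a few million adapter parameters are trained, not the full 7B weights.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()

# From here, expert-contributed preference pairs like the ones sketched above
# become the training set for a standard supervised or preference-tuning loop.
```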

What changes when the people who train the model own the model?

When a million professionals contribute their expertise not to enrich a San Francisco lab but to improve a collectively owned AI that serves their community — when the benefits of that AI flow back to the people whose knowledge made it possible — the structure of who benefits from AI begins to change.

This is not a five-year project. The foundations are available now. The open-source ecosystem has already done the hardest work. What is missing is the governance layer — the constitutional framework that ensures the community actually owns what it builds, that the model cannot be quietly acquired or commercialized against the interests of the people who made it, that the benefits are genuinely distributed.

That is exactly what Our One is built to provide.


We are not asking you to believe we can fix everything.

We are asking you to consider what is actually available right now, in 2026, that was not available five years ago.

Building is nearly free. Infrastructure is nearly free. Open-weight AI models exist that can be improved by community expertise. The tools to build constitutional governance into products from the start exist. When code gets cheap, governance becomes the product. The economic model of transparent community funding — no advertising, no surveillance, no extraction — is mathematically viable at any serious scale.

What was missing, for thirty years, was the combination: costs low enough, tools good enough, a governance model clear enough, and enough people who understand what is at stake.

We are at that combination now.

The window is open. The labs are raising rounds and closing it. The regulatory debates are happening without a clear alternative to point to. The parents watching their kids doomscroll have nowhere to go. The developers watching their contributions train models that replace them have no way to reclaim what was taken.

We are building the place to go.

Not a protest. Not a manifesto that stops at the manifesto. A series of actual products, built constitutionally, owned by their users, protected from capture, and — starting now — building toward an AI that belongs to the millions of people whose knowledge made AI possible.

The old internet asked you to join platforms.

We are asking you to own infrastructure.

There is a difference. The difference is everything.


Our One is a framework for building digital products that cannot be turned against the people who use them. Every product begins with a constitution — a binding public document that says what the product is for, what it may never do, and who governs it and how. Users own it. Stewards maintain it. The constitution protects it.

The kids deserve software that is not designed to capture them.

The developers deserve to benefit from what they built.

The billions of people who created the value of the internet deserve something for it other than the privilege of continuing to be extracted from.

We are building it now.

Come and own it.