OpenAI published “Industrial Policy for the Intelligence Age: Ideas to Keep People First” on April 7, 2026. You can read it here: [Industrial Policy for the Intelligence Age | OpenAI]. This is my response.
An Open Letter to Sam Altman, From One AI Founder to Another
Dear Sam,
Thank you for the to-do list.
Thirteen pages. Genuinely impressive. Public Wealth Funds. Portable benefits. Worker voice programs. Efficiency dividends. Adaptive safety nets. A “Right to AI.”
You’ve thought of everything.
For everyone else to do.
I want to say something to you that I suspect no one in your orbit is saying. Not because they don’t see it. Because you’re Sam Altman, and the gravitational pull of that makes honesty expensive.
I don’t have that problem.
I’m an AI founder. I’m in this with you. And I spent fourteen years studying philosophy before I spent a decade building, which means I’ve read every thinker whose ideas sit quietly beneath your document. I know what a social contract actually requires. I know what it looks like when the most powerful party in an agreement writes the terms for everyone else and calls it mutual.
I’ve also heard we share a philosophical home. That matters here. Hold that thought.
Let me translate what you wrote, because the translation is important.
We are building the most transformative technology in human history. According to you, it will displace workers, concentrate wealth, reshape societies, and potentially exceed human control. So governments should create funds. Policymakers should reform taxes. Institutions should build safety nets. Workers should have a voice in the disruption of their own livelihoods.
And OpenAI should keep building.
Sam. You diagnosed the disease and assigned the treatment to the patient.
I want to be precise here because I’m not against your ideas. Some are genuinely good. But there is a moral difference between a company that acknowledges harm and holds itself accountable, and a company that documents harm eloquently and hands the cleanup to governments and society while continuing to accelerate.
One of those is leadership.
The other is paperwork.
But here is what you miss entirely. The more fundamental things. The ones no policy framework reaches.
You propose a “Right to AI” — broad access to foundational models as a democratic necessity. But a right implies a public good. Water is a right. Education is a right. They exist outside commercial frameworks precisely because their necessity to human life makes profit an inappropriate gate. You never make that argument. What you propose is market democratization, cheaper access to a proprietary product OpenAI owns and controls.
That’s not a right. That’s ethics washing dressed up as a subscription with better PR.
You propose efficiency dividends, sharing AI productivity gains with workers. But the ledger you’re working from starts in the wrong place. The people you’re proposing to compensate have already been providing something. Their attention. Their data. Their content. The accumulated output of human thought and creativity that trained these systems in the first place. That wasn’t licensed. It wasn’t purchased. It was taken. The efficiency dividend isn’t generosity, Sam. It’s a fraction of a return on free labor that was never accounted for.
And your adaptive safety nets, your catch systems for displaced workers, are entirely reactive. You propose measuring AI’s impact on jobs after displacement begins. But you are building the most predictive technology in human history. You could use it right now to identify workers at risk before they’re displaced. To build literacy and tools proactively. To get ahead of the disruption instead of measuring it on the way down.
The fact that the most powerful AI company on earth isn’t using AI to prevent the harm it’s causing before it happens, and that this document never once proposes it, is the most telling omission in all thirteen pages.
Here is the question none of this reaches.
Not what should governments build. Not what should institutions implement.
Who do we need to become?
The builders. The deployers. The leaders signing off on implementations they don’t fully understand. The ones sitting in the space between the rule and the choice. That’s where this actually plays out. Not in the policy framework. In the person. In the moment before the decision when something either holds or it doesn’t.
Your document assumes those people are ready.
That assumption is the most dangerous thing about it.
No safety net catches what fails there. No public wealth fund compensates for it. No adaptive policy framework governs it. Because the real gap in this document, in every document like it, isn’t policy.
It’s that we assume we have morality. We assume it’s already present, already operating, that people will do the right thing when the moment comes.
That assumption has never been more consequential. Or more unexamined.
Now. The part I suspect will land differently for you than for anyone else reading this.
I know you’re familiar with philosophy. We have a shared philosophy**. The understanding that there is no separation between self and other. Which means there is no separation between the one who builds the system and the one who lives inside it. Between the disruption and the disruptor. Between the problem and its author.
Your document is constructed entirely on that separation.
Here is OpenAI. There is society. Here is the technology. There are the people it affects.
That distinction is the illusion the whole document rests on.
The consequences aren’t happening to “others,” Sam.
There are no “others.”
A social contract that asks nothing of its author while directing everyone else to adapt is antithetical to a philosophy I know you hold dear.
I don’t write this as your adversary.
I write it because this moment is too important for elegant documents that perform accountability without requiring it. Too important for social contracts that bind everyone except the party with the most power. Too important for philosophy to be borrowed without its demands.
You gave us a to-do list.
The question this moment is actually asking, of you, of me, of every person building and deploying and living inside these systems, is harder than any item on it.
Not what should we build.
Who should we be.
Sincerely,
Cristina DiGiacomo, M.S.
Cristina DiGiacomo is the Founder and CEO of 10P1 and creator of the 10+1 Code™ — a moral framework for humanity in the age of AI. She has fourteen years of philosophical study and over two decades of experience leading digital innovation.
She writes and speaks on AI governance, moral accountability in the age of artificial intelligence, and what it means to lead with philosophy in the era of superintelligence.
**Advaita Vedanta