Tuesday, April 7, 2026

Sam Altman Wrote a Social Contract. He Forgot to Sign It.

OpenAI published “Industrial Policy for the Intelligence Age: Ideas to Keep People First” on April 7, 2026; you can read it on OpenAI’s site (“Industrial Policy for the Intelligence Age” | OpenAI). This is my response.

An Open Letter to Sam Altman From One AI Founder to Another

Dear Sam,

Thank you for the to-do list.

Thirteen pages. Genuinely impressive. Public Wealth Funds. Portable benefits. Worker voice programs. Efficiency dividends. Adaptive safety nets. A “Right to AI.”

You’ve thought of everything.

For everyone else to do.

I want to say something to you that I suspect no one in your orbit is saying. Not because they don’t see it. Because you’re Sam Altman, and the gravitational pull of that makes honesty expensive.

I don’t have that problem.

I’m an AI founder. I’m in this with you. And I spent fourteen years studying philosophy before I spent a decade building, which means I’ve read every thinker whose ideas are quietly underneath your document. I know what a social contract actually requires. I know what it looks like when the most powerful party in an agreement writes the terms for everyone else and calls it mutual.

I’ve also heard we share a philosophical home. That matters here. Hold that thought.

Let me translate what you wrote, because the translation is important.

We are building the most transformative technology in human history. According to you, it will displace workers, concentrate wealth, reshape societies, and potentially exceed human control. So governments should create funds. Policymakers should reform taxes. Institutions should build safety nets. Workers should have a voice in the disruption of their own livelihoods.

And OpenAI should keep building.

Sam. You diagnosed the disease and assigned the treatment to the patient.

I want to be precise here because I’m not against your ideas. Some are genuinely good. But there is a moral difference between a company that acknowledges harm and holds itself accountable, and a company that documents harm eloquently and hands the cleanup to governments and society while continuing to accelerate.

One of those is leadership.

The other is paperwork.

But here is what you miss entirely. The more fundamental things. The ones no policy framework reaches.

You propose a “Right to AI” — broad access to foundational models as a democratic necessity. But a right implies a public good. Water is a right. Education is a right. They exist outside commercial frameworks precisely because their necessity to human life makes profit an inappropriate gate. You never make that argument. What you propose is market democratization, cheaper access to a proprietary product OpenAI owns and controls.

That’s not a right. That’s ethics-washing: a proprietary subscription with better PR.

You propose efficiency dividends, sharing AI productivity gains with workers. But the ledger you’re working from starts in the wrong place. The people you’re proposing to compensate have already been providing something. Their attention. Their data. Their content. The accumulated output of human thought and creativity that trained these systems in the first place. That wasn’t licensed. It wasn’t purchased. It was taken. The efficiency dividend isn’t generosity, Sam. It’s a fraction of a return on free labor that was never accounted for.

And your adaptive safety nets, your catch systems for displaced workers, are entirely reactive. You propose measuring AI’s impact on jobs after displacement begins. But you are building the most predictive technology in human history. You could use it right now to identify workers at risk before they’re displaced. To build literacy and tools proactively. To get ahead of the disruption instead of measuring it on the way down.

The fact that this document never proposes that the most powerful AI company on earth use AI to prevent the harm it’s causing before it happens is the most telling omission in all thirteen pages.

Here is the question none of this reaches.

Not what should governments build. Not what should institutions implement.

Who do we need to become?

The builders. The deployers. The leaders signing off on implementations they don’t fully understand. The ones sitting in the space between the rule and the choice. That’s where this actually plays out. Not in the policy framework. In the person. In the moment before the decision when something either holds or it doesn’t.

Your document assumes those people are ready.

That assumption is the most dangerous thing about it.

No safety net catches what fails there. No public wealth fund compensates for it. No adaptive policy framework governs it. Because the real gap in this document, in every document like it, isn’t policy.

It’s that we assume we have morality. We assume it’s already present, already operating, that people will do the right thing when the moment comes.

That assumption has never been more consequential. Or more unexamined.

Now. The part I suspect will land differently for you than for anyone else reading this.

I know you’re familiar with philosophy. We share one.** The understanding that there is no separation between self and other. Which means there is no separation between the one who builds the system and the one who lives inside it. Between the disruption and the disruptor. Between the problem and its author.

Your document is constructed entirely on that separation.

Here is OpenAI. There is society. Here is the technology. There are the people it affects.

That distinction is the illusion the whole document rests on.

The consequences aren’t happening to “others,” Sam.

There are no “others.”

A social contract that asks nothing of its author while directing everyone else to adapt is antithetical to a philosophy I know you hold dear.

I don’t write this as your adversary.

I write it because this moment is too important for elegant documents that perform accountability without requiring it. Too important for social contracts that bind everyone except the party with the most power. Too important for philosophy to be borrowed without its demands.

You gave us a to-do list.

The question this moment is actually asking, of you, of me, of every person building and deploying and living inside these systems, is harder than any item on it.

Not what should we build.

Who should we be.

Sincerely,

Cristina DiGiacomo, M.S.

Cristina DiGiacomo is the Founder and CEO of 10P1 and creator of the 10+1 Code™ — a moral framework for humanity in the age of AI. She has fourteen years of philosophical study and over two decades of experience leading digital innovation.

She writes and speaks on AI governance, moral accountability in the age of artificial intelligence, and what it means to lead with philosophy in the era of superintelligence.

**Advaita Vedanta

Cristina DiGiacomo (https://cristinadigiacomo.com/)
Cristina DiGiacomo is a philosopher of systems who builds ethical infrastructure for the age of AI. She is the founder and CEO of 10P1 Inc. and the creator of the 10+1 Commandments of Human–AI Co-Existence™, a decision-making tool designed to help leaders apply responsibility with clarity in high-stakes, high-ambiguity AI environments. The 10+1 provides a structured way for organizations to make ethical responsibility explicit as decisions are automated, scaled, and embedded into complex systems.

Cristina brings more than 25 years of experience as an award-winning Interactive Strategist for major organizations including The New York Times, Citigroup, and R/GA. Throughout her career, she led large-scale digital initiatives and complex product launches, gaining firsthand insight into how systems shape human behavior, and how misaligned incentives can create structural risk and long-term harm. Her transition into philosophy was driven by what she observed inside modern institutions: moral confusion, short-term thinking, and a lack of language for consequence in decision-making. She earned a Master of Science in Organizational Change Management from The New School and has spent over a decade translating philosophical principles into practical tools for leadership, organizational design, and AI governance.

Cristina is the author of the #1 bestselling book Wise Up! At Work (2020), which bridges timeless wisdom and practical action in the workplace. The book has been recognized by business leaders, HR professionals, and executive coaches as a resource for restoring clarity and integrity in environments where incentives often undermine both. Her current work sits at the intersection of Responsible AI, organizational ethics, and systems design. She helps senior leaders reduce risk by embedding moral clarity into decision processes, using tools that make expectations explicit, roles accountable, and tradeoffs visible before systems are deployed or scaled.

Cristina’s category of work is known as Systems Ethics, focused on how ethical outcomes are produced by systems rather than individual intent alone. She is also the founder and Chief Philosophy Officer of the C-Suite Network AI Council, where she leads a council and mastermind for business leaders navigating the strategic, ethical, and organizational implications of artificial intelligence. A frequent speaker and podcast guest, Cristina is known for bringing a philosophical edge to high-level discussions on technology, power, and responsibility. She has received multiple awards for her strategic and philosophical work, including two New York Times Publisher’s Awards, a Cannes Cyber Lion Shortlist Award for work in Virtual Reality, the Industrial Philosopher of the Year award from the International Association of Top Professionals (IAOTP), and recognition from Mashable, COPA, and the Web Marketing Association.