According to Futurism, OpenAI economics researcher Tom Cunningham has quit the company, alleging in an internal parting message that his team was veering away from real research to act like the company's "propaganda arm." The report, which draws on a Wired story citing four sources, says at least two employees from the economic research team have left over frustrations that OpenAI is becoming "guarded" about publishing research showing AI could harm the economy. This follows a memo from OpenAI's chief strategy officer, Jason Kwon, who argued the company should "build solutions," not just publish on "hard subjects." The shift comes as OpenAI, now a for-profit juggernaut, is reportedly planning a trillion-dollar IPO and has entered into staggering financial deals, including a cloud contract with Microsoft that could be worth up to $250 billion.
The $100 Billion Reason for Silence
Here's the thing: when you're courting investments that could total $100 billion and have committed to pay Microsoft up to $250 billion for cloud services, your priorities change. Drastically. OpenAI's founding mission of open, altruistic research looks increasingly incompatible with its current reality as a commercial behemoth. You don't publish a report suggesting your core product might wreck the job market when you're trying to convince the world you're worth a trillion dollars. It's bad for business. So the research gets filtered, the messaging gets polished, and suddenly your economic team is putting out glowing reports about ChatGPT's productivity boosts. It's a textbook case of incentives shaping output, and right now those incentives are screaming "don't rock the boat."
From Non-Profit to Propaganda Arm
Cunningham's "propaganda arm" comment is brutal, but it cuts to the heart of OpenAI's identity crisis. The company was literally founded on "open" principles. Now? Its models are closed-source, its for-profit arm answers to a non-profit in name only, and its research seems to serve PR as much as progress. Look at the other departures. A former "Superalignment" safety researcher quit over product speed versus safety. Another ex-safety researcher, Steven Adler, has publicly branded the pace of development "terrifying." When your own experts are walking out the door and sounding the alarm, it's a sign the internal culture has fundamentally shifted. The goal isn't understanding the truth about AI's risks; it's managing the narrative around them.
The Unsustainable Tension
So where does this leave us? Basically, with a company trying to serve two masters: its original, stated mission to benefit humanity, and its new, multi-hundred-billion-dollar reality as a market-dominant actor. Jason Kwon’s memo tries to square this circle by saying OpenAI, as the “leading actor,” must “take agency for the outcomes.” That’s a fancy way of saying, “We’ll decide what’s safe to talk about.” But that’s not research. That’s corporate communications. And as the stakes get higher with a potential IPO, this tension will only get worse. Can a company whose valuation depends on boundless optimism honestly investigate its product’s potential for harm? The exodus of researchers suggests the answer is a resounding “no.”
