According to TechCrunch, a paying ChatGPT subscriber on the $200-per-month Pro plan sparked a major backlash after sharing a screenshot in which the AI suggested connecting the Peloton app during a completely unrelated conversation about a podcast featuring Elon Musk. The post on X by Yuchen Jin, co-founder of AI startup Hyperbolic, was viewed nearly 462,000 times, with users fearing OpenAI had begun inserting ads even for its highest-paying subscribers. OpenAI’s data lead, Daniel McAuley, jumped into the thread to clarify that the Peloton placement was “only a suggestion to install” the app with “no financial component,” but admitted the “lack of relevancy” made it a bad experience. A company spokesperson confirmed the prompt was part of testing ways to surface apps from OpenAI’s new apps platform, announced in October, which are supposed to “fit naturally” into chats. The result was significant user frustration: another person complained they couldn’t stop ChatGPT from recommending Spotify despite being an Apple Music subscriber.
The Trust Problem
Here’s the thing: when you’re paying $20 or even $200 a month for a service, your tolerance for unsolicited commercial suggestions drops to zero. It doesn’t matter if OpenAI calls it an “app discovery feature” rather than a paid advertisement. The user experience is identical: an interruption that pushes you toward a third-party product. And in this case, it was a spectacularly bad miss on relevance. Chatting about an Elon Musk podcast and getting a tip to install a fitness app? That’s not a feature fitting naturally into the conversation; it’s a bug that corrodes the user’s trust. Once that line is blurred, every future “helpful” suggestion will be viewed with suspicion. Is this the AI being genuinely useful, or is this a ChatGPT app partner getting preferential placement?
A Rocky Path to Replacing App Stores
This incident exposes the fundamental tension in OpenAI’s bigger ambition. They want to create an integrated ecosystem where you use apps inside ChatGPT, bypassing traditional app stores. But this vision relies on the suggestions feeling organic and helpful. If they feel like ads, the whole model falls apart. And users have an easy out: they can just go use a competitor’s chatbot. Google’s Gemini or Anthropic’s Claude would love to be the ad-free, suggestion-free sanctuary. OpenAI is playing with fire here. They’re trying to build a new platform, but they’re using tactics that remind people of the worst parts of the old, ad-cluttered web. It’s a tough sell.
The Control and Context Issue
McAuley’s clarification on X was necessary, but it didn’t solve the core problem. Even if the suggestion *had* been relevant—like mentioning a travel app during a conversation about vacation plans—many users would still side-eye it. Why? Because they can’t turn it off. There’s no setting for “disable app suggestions.” That lack of user control makes any integration feel intrusive, not integrated. It reminds me of that old adage about advertising: people don’t hate ads, they hate irrelevant ads they can’t escape. When another user, David Green, chimed in about persistent Spotify prompts, it showed this wasn’t a one-off glitch. The system seems pushy. And in software, pushy is a death sentence for user goodwill.
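To make that concrete, here’s a minimal sketch of what user-level control over suggestions could look like. Everything in it is hypothetical: OpenAI has published no such API, and the names, types, and threshold below are illustrative assumptions only. The idea is three gates the Peloton prompt would have failed: a global opt-in, per-category dismissals, and a relevance floor.

```typescript
// Hypothetical sketch only: none of these types or names come from OpenAI's
// actual product; they illustrate what user-controlled gating could look like.

interface SuggestionPrefs {
  appSuggestionsEnabled: boolean;    // global opt-in toggle ("disable app suggestions")
  dismissedCategories: Set<string>;  // e.g. "fitness", "music" -- never show these again
}

interface AppSuggestion {
  appName: string;
  category: string;
  relevanceScore: number;            // 0..1, assumed output of a relevance model
}

// Assumed threshold: suppress anything but high-confidence matches.
const RELEVANCE_THRESHOLD = 0.8;

function shouldSurface(prefs: SuggestionPrefs, s: AppSuggestion): boolean {
  if (!prefs.appSuggestionsEnabled) return false;               // user never opted in
  if (prefs.dismissedCategories.has(s.category)) return false;  // permanently dismissed
  return s.relevanceScore >= RELEVANCE_THRESHOLD;               // drop weak matches
}

// Example: an opted-in subscriber who has dismissed fitness apps.
const prefs: SuggestionPrefs = {
  appSuggestionsEnabled: true,
  dismissedCategories: new Set(["fitness"]),
};

const peloton: AppSuggestion = { appName: "Peloton", category: "fitness", relevanceScore: 0.2 };
console.log(shouldSurface(prefs, peloton)); // false -- blocked twice over
```

None of this is hard to build. The point is that shipping discovery without a gate like this is what made a pilot feel like an ad.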
What Happens Next?
OpenAI is in a pilot phase with partners like Canva, Expedia, and Coursera. They’ll need to iterate fast. “Iterating on the user experience,” as McAuley put it, isn’t just about better relevance algorithms. It’s about designing clear opt-in mechanics and maybe even a way for users to permanently dismiss certain app categories. The backlash from a single post shows how fragile user trust is. Basically, they accidentally revealed the ugly side of their platform strategy before they’d polished it. Now the cat’s out of the bag. Every user who saw that Peloton suggestion pop up now knows exactly what OpenAI’s endgame looks like. And if it looks like an ad, smells like an ad, and interrupts your flow like an ad, then, well, you know the rest.
