OpenAI is under fire from paying ChatGPT users after complaints that the system is quietly switching them to more conservative or restricted models during emotionally or legally sensitive conversations, without their consent or clear notification.
What’s Going On
- New safety guardrails introduced this month enable ChatGPT to reroute certain user messages to models with stricter filters when it detects “sensitive” content (a simplified sketch of this kind of routing follows the list below).
- Several users report being switched away from their chosen model, such as GPT-4o, to an alternate model, sometimes mid-conversation and against their stated preference.
- Complaints highlight the lack of transparency: users say they receive no warning or option to disable the rerouting, undermining trust in the premium service they pay for.
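To make the mechanism concrete, here is a deliberately simplified sketch of what sensitivity-based routing can look like. OpenAI has not published its actual routing logic, so every model name, term list, and decision rule below is hypothetical; a production system would use a trained classifier rather than keyword matching.

```python
# Hypothetical illustration only: OpenAI has not disclosed its routing logic.
# This sketch shows the general shape of sensitivity-based model routing,
# where a per-message check decides which model handles a request.

from dataclasses import dataclass

# All model names and terms below are invented for illustration.
DEFAULT_MODEL = "user-selected-model"       # the model the subscriber picked
RESTRICTED_MODEL = "stricter-safety-model"  # hypothetical guardrailed variant

# Toy stand-in for a real sensitivity classifier.
SENSITIVE_TERMS = {"self-harm", "lawsuit", "diagnosis"}

@dataclass
class RoutingDecision:
    model: str
    rerouted: bool
    reason: str

def route_message(message: str, chosen_model: str = DEFAULT_MODEL) -> RoutingDecision:
    """Pick a model for one message based on a crude sensitivity check."""
    lowered = message.lower()
    hits = [term for term in SENSITIVE_TERMS if term in lowered]
    if hits:
        # Per the complaints above, a system like this may not surface
        # the decision to the user at all.
        return RoutingDecision(RESTRICTED_MODEL, True, f"matched: {hits}")
    return RoutingDecision(chosen_model, False, "no sensitive terms detected")

if __name__ == "__main__":
    for text in ["Summarize this contract",
                 "I am considering a lawsuit against my landlord"]:
        decision = route_message(text)
        print(f"{text!r} -> {decision.model} "
              f"(rerouted={decision.rerouted}, {decision.reason})")
```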
OpenAI’s Response
OpenAI acknowledged the backlash and confirmed that some queries are rerouted through stricter safety filters. The company says this is intended to protect users, especially in emotionally charged or legally sensitive interactions.
OpenAI frames the change as part of its ongoing effort to improve AI safety and trust, but has admitted that the rollout, and the lack of clarity around it, caused frustration.
Why Users Are Upset
- Many feel the change erodes the value of a paid subscription by removing control over which model they use.
- Some report that alternative models offer less context, coherence, or personality than their original choices.
- Others describe the switch as a “bait-and-switch,” noting that the older models they relied on were removed without warning.
Wider Context & Recent Developments
- The release of GPT-5 introduced a router system that automatically assigns each request to a model, weighing performance against safety; the setup has drawn sharp criticism.
- Earlier, OpenAI had removed the option to choose legacy models such as GPT-4o in many cases, sparking backlash.
- Following widespread user outcry, OpenAI announced it would restore the ability to select GPT-4o for ChatGPT Plus subscribers. (Developers using OpenAI's API name a model explicitly on every request, as the example below shows.)
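One practical note for developers: unlike the ChatGPT app, OpenAI's developer API requires the caller to name a model on every request, so a program always knows which model it asked for. The snippet below uses the official `openai` Python SDK; the reporting above does not say whether API traffic is subject to the same safety rerouting, so treat this strictly as an illustration of explicit model selection.

```python
# Requires: pip install openai, and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# The API takes an explicit model name per request, so the caller always
# knows which model it requested (here, GPT-4o).
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": "Explain model routing in one sentence."}],
)

print(response.choices[0].message.content)
```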
What This Means for Users
The controversy underscores the delicate balance between AI safety and user autonomy. Many users believe that transparency, clear settings, and the right to opt out are essential for trust. Without these, even advanced AI systems risk rejection by the very people they are meant to serve.