Some people have hypothesized that GPT-5 is actually about cost reduction and internal optimization for OpenAI, since there doesn't seem to be much of a leap forward. But another element they seem to have focused on, one that'll probably make a huge difference to "normal" (non-tech) users, is making precise and specifically worded prompts less necessary.
They've mentioned improvements in that aspect a few times now, and if they actually materialize, that would be a big leap forward for most users, even if underneath GPT-4 was also technically able to do the same things when prompted just the right way.
yeah i think they shot themselves in the foot a bit here by creating the o series. the truth is that GPT-5 _is_ a huge step forward, for the "GPT-x" models. The current GPT-x model was basically still 4o, with 4.1 available in some capacity. GPT-5 vs GPT-4o looks like a massive upgrade.
But it's only an incremental improvement over the existing o line. So people feel like the improvement from the current OpenAI SoTA isn't there to justify a whole bump. They probably should have just called o1 GPT-5 last year.
You cannot even access the other models any more from the app. This is a huge bummer that has me considering other brands. I don't trust GPT-5 yet, but I do trust 4.1, and most of my in-progress conversations are 4.1-based.
GPT-5 hasn't landed for me yet, but this has been my thought process too. This seems like a moment potentially equivalent to when Google got lowest-common-denominator-ed: when it stopped respecting your query keywords and started doing "smart" things instead. If GPT-5 in practice turns out to be similarly optimized for lowest-common-denominator usage at the cost of precise control over models, that'll be the thing that finally gets me properly using Claude, Gemini, and local models regularly.
It sounded like they were very careful to always mention that those improvements were for ChatGPT, so I'm very skeptical that they translate to the API versions of GPT-5.