Advanced OpenAI models hallucinate more than older versions, internal report finds
Can confirm. o4 seems objectively far worse at coding than o3, which wasn't super great to begin with. It latches on to a hallucination before anything else and rides it until the wheels come off.
Yes, I was about to say the same thing before I saw your comment. I had a little bit of success learning a few tricks with o3, but trying to use o4 for coding is a tremendous headache.
There might be some utility in dialing it all back so it sticks to what I need, drawing more on package documentation than on an amalgamation of random Redditor suggestions.
Yeah, I think workarounds with o3 are where we're at until Altman figures out that calling the latest oX mini high "great at coding" is bad marketing when it can't actually accomplish the task.
I don't quite understand why o3 for coding. Do you mean for code architecture or something? Like creating apps? Why not use a better model if it's for coding?
That's exactly the problem.
However, "o4" is actually "o4-mini-high", while o3 is now just o3: the full release, no "mini" or other limitations. At this point, o3 in its full form is better than a limited o4.
But none of that matters while Claude 3.7 exists.