I thought Anthropic's models were expensive, but OpenAI seems to have found an infinite money glitch. I asked o4-mini an applied logic puzzle where I wasn't sure whether my solution was optimal, but I didn't really want to code a simulation to confirm it. I clearly communicated the parameters, and after several minutes it came back with a smart-sounding answer that subtly misunderstood the requirements. Several prompts later, after many more minutes of reasoning and some resistance, I had finally convinced it of the correct problem parameters, and it told me that my solution was the best one after all.
If the model just keeps subtly misunderstanding your question, the user then has to burn through thousands of tokens arguing just to get the model to even understand the question, only for it to say it can't think of anything better. Maybe OpenAI's reasoning models really are tuned to have the longest possible conversation with a user, to keep the user burning more tokens.
Replies (2)
It has always felt like OpenAI models intentionally cut responses short, and sometimes when you provide details, they will ask for a second message to confirm you want to start. That, and a whole load of bullshit.
I wonder how long it will be before AI models are optimized in the same way that social media platforms are: to maximize engagement through whatever means possible.