That's not true, though. The models themselves are hella intensive to train. We already have open-source programs to run LLMs at home, but they're limited to smaller open-weights models. Having a full ChatGPT-class model that any service provider or home-server enthusiast could run would be a boon. It would certainly make my research more effective.
I know, I have used them. It's actually my job to do research with those kinds of models. They aren't nearly as powerful as OpenAI's current GPT-4o or their latest models.