I'm finding that LLMs do a better job of searching for new things. If I have a question, instead of going to Google or Bing I'll go to ChatGPT and ask it something of that nature, and get an answer with some sources for further reading.
I never would have thought I'd need AI to answer a simple search query, and yet here we are, because search engines no longer really serve their original purpose.
The problem is, you can't trust ChatGPT not to lie to you.
And since generative AI is now being used all over the place, you just can't trust anything unless you know damn well that a human entered the info, and even then it's a coin flip.
Except people are using LLMs to generate web pages on anything that will get clicks. Which means LLMs are training on information generated by other LLMs. It's an ouroboros of fake information.
If the data was gathered using the legal loophole of being for research purposes, I'm not sure anyone should have exclusive rights to it to begin with... Not sure if that answers the question.