LLMs are okay if you already know the answer, because they have no basis for truth and no bias toward reliable sources; they prioritize how frequently words co-occur in their training data. They can produce very convincing, completely made-up text, or propagate "common wisdom" that isn't actually correct.
Using ChatGPT is pretty much the same as trusting one of the random AI-generated blogs poisoning the search results, the very blogs other people in this thread are complaining are useless.