I don't get it: what makes the output trustworthy? If it seems real, it's probably real? If it keeps hallucinating the same thing, there must be some truth to it? Those seem to be the two main mindsets here: "you can tell by the way it is," and "look, it keeps saying this."
Given that multiple other commenters in the infosec.exchange thread have reproduced similar results, that right-wingers tend to have poor security practices, and that LLMs are, for now, pretty much impossible to fully control, it seems most likely that it's real.