I see what you're not getting! You are confusing giving the odds with making a prediction and those are very different.
Let's go back to the coin flips, maybe it'll make things more clear.
I or Silver might point out there's a 75% chance of anything besides two heads in a row happening (which is accurate). If, as will happen one time in four, two heads in a row does happen, does that somehow mean the odds I gave were wrong?
> I or Silver might point out there's a 75% chance of anything besides two heads in a row happening (which is accurate).
Is it?
Suppose I gave you two coins, which may or may not be weighted. You think they aren't, and I think they are weighted 2:1 towards heads. Your model predicts one head, and mine predicts two heads.
We toss and get two heads. Does that mean the odds I gave are right? Does it mean the odds you gave are wrong?
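To make the numbers concrete, here's a quick sketch (my own illustration, not anything either of us computed above). Under the unweighted model P(heads) = 1/2, and under the 2:1-weighted model P(heads) = 2/3:

```python
from fractions import Fraction

# P(heads) under each model
fair = Fraction(1, 2)      # unweighted coin
weighted = Fraction(2, 3)  # weighted 2:1 towards heads

# Probability of two heads in a row under each model
p_hh_fair = fair ** 2      # 1/4
p_hh_weighted = weighted ** 2  # 4/9

print(p_hh_fair)      # 1/4 -> a 25% event, hardly impossible
print(p_hh_weighted)  # 4/9 -> more likely, but still under 50%
```

Both models assign the observed outcome a decent probability, which is exactly why a single pair of tosses can't settle whose odds were right.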
In the real world, your odds will depend on your priors, which you can never prove or disprove. If we were working with coins, then we could repeat the experiment and possibly update our priors.
But suppose we only have one chance to toss them, after which they shatter. In that case, the model we use for the coins, weighted vs unweighted, is just a means to arrive at a prediction. The prediction can be right or wrong, but the internal workings of a one-shot model, including its odds, are unfalsifiable. Same with Silver and the 2016 election.
You can't really falsify the claim “Clinton has a higher chance of winning”, at least the way Nate Silver models it. His model is based upon statistics, and he basically runs a bunch of simulations of the election. In more of these simulations, Clinton won, hence his claim. But we had exactly one actual election, and in the election, Trump won. Perhaps his model is just wrong, or perhaps the outcome matched one of the simulations in his model where Trump won. If we could somehow run the election hundreds of times (or observe what happened in hundreds of parallel universes) then maybe we could see if his model matched the outcome of a statistically significant number of election results. But nevertheless, Nate Silver had a model and statistics to back up his claim.
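Silver's actual model is far more elaborate than this, but the basic "run a bunch of simulated elections" idea can be sketched in a few lines. The 70% win probability here is made up purely for illustration:

```python
import random

def clinton_win_share(p_clinton, n_sims=100_000, seed=0):
    """Simulate n_sims elections where Clinton wins each with probability p_clinton,
    and return the fraction of simulations she won."""
    rng = random.Random(seed)
    wins = sum(rng.random() < p_clinton for _ in range(n_sims))
    return wins / n_sims

# A model that gives Clinton ~70% still has Trump winning ~30% of the
# simulations, so one real-world Trump win is consistent with the model.
share = clinton_win_share(0.7)
print(f"Clinton won {share:.1%} of simulated elections")
```

The point of the sketch: the model's output is a distribution over outcomes, and the single election we actually ran is one draw from it, which is why "Clinton has a higher chance" can't be falsified by that one draw.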
As for Michael Moore, I'm not sure exactly how he came up with his prediction, but I get the impression it was mostly a gut feeling based upon his observations of what was happening. Nevertheless, Michael Moore could still back up his statement by articulating his reasoning and the observations behind it.
Though one crucial difference is still the whole prediction thing. Michael Moore actually made a prediction of a Trump win. Whereas Nate Silver just stated that Clinton had a higher chance of winning, and once again that was not a prediction. So you're really comparing two different things here.
I guess it's up to you if you want to trust it or not. He doesn't share all the details, but he (at least in the past) shared enough details on his blog that I felt pretty good that he knew what he was talking about.
I will point out that he was one of the very few aggregators in 2016 that was saying "hey look, Trump has a very real chance of winning this". Which is why I find it so amusing when people say he got it wrong in 2016 when in actuality he was one of the few that was right. After 2008 there were a bunch of copycats out there trying to do similar things as Nate Silver, and many of them were saying things like 99.99% Clinton. If people are going to criticize, that's where I would direct it.
Even if he was the only one saying that, why are we giving him credit for it?
Maybe he was the first, but going forward anyone can follow his example and say things like, "Harris has a very real chance of winning. So does Trump. Also, Cruz and Allred both have very real chances of winning. So do Elizabeth Warren and her opponent, John Deaton".
Silver showed that if you hedge by replacing a testable prediction with a tautology, then you can avoid criticism regardless of the result. I don't think that is useful political analysis.