Most machine learning is closer to universal function approximation via automatic differentiation. Backpropagation just lets you fit numerical models with insane parameter dimensionality.
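To make the "function approximation via autodiff" point concrete, here is a minimal sketch (assuming PyTorch; the network size and the sin(x) target are arbitrary illustrations): a tiny MLP fit to a function by nothing more than a squared-error loss and backpropagation.

```python
# Minimal sketch (assumes PyTorch): approximate sin(x) with a tiny MLP,
# trained purely by automatic differentiation / backprop on a squared error.
import torch

torch.manual_seed(0)

# Target function sampled on [-pi, pi].
x = torch.linspace(-3.14, 3.14, 256).unsqueeze(1)   # shape (256, 1)
y = torch.sin(x)

# A small universal approximator: 1 -> 32 -> 1 with a tanh nonlinearity.
model = torch.nn.Sequential(
    torch.nn.Linear(1, 32),
    torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(2000):
    pred = model(x)
    loss = torch.mean((pred - y) ** 2)   # mean squared error
    opt.zero_grad()
    loss.backward()                      # backprop: autodiff computes every gradient
    opt.step()

print(f"final MSE: {loss.item():.5f}")   # typically well below 1e-3
```

Nothing in that loop is statistical in any deep sense; it's curve fitting with gradients, just scaled up to billions of parameters in practice.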
This is exactly how I explain AI (i.e. what the current AI buzzword refers to) to common folk.
And what that means in terms of use cases.
When you indiscriminately take human outputs (knowledge? opinions? excrement?) as input, the average is just a shitty approximation of pleb opinion.
Well, lots of people blinded by hype here...
Obv it is not simply a statistical machine, but imo it is something worse: some approximation machinery that happens to work, but gobbles up energy as its cost. Something only possible because we are not charging companies enough for electricity, smh.
My biggest issue is that a lot of physical models for natural phenomena are being solved using deep learning, and I am not sure how that helps deepen understanding of the natural world. I am all for DL solutions, but maybe they would benefit from being explainable in some form. For example, it's kinda old, but I really like all the work around Grad-CAM and its successors: https://arxiv.org/abs/1610.02391
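For anyone who hasn't seen it, the core Grad-CAM trick from the linked paper is small. Not anyone's production pipeline, just a rough sketch assuming PyTorch/torchvision with an off-the-shelf ResNet and a random stand-in image: pool the gradients of the class score over each feature map to get channel weights, then take a ReLU'd weighted sum of the maps as the heatmap.

```python
# Rough Grad-CAM sketch (Selvaraju et al., https://arxiv.org/abs/1610.02391).
# Assumes PyTorch + torchvision; the model, layer, and random input are illustrative.
import torch
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
target_layer = model.layer4[-1]                      # last conv block of ResNet-18

store = {}

def fwd_hook(module, inputs, output):
    store["act"] = output.detach()                   # feature maps, shape (1, C, H, W)
    # Capture the gradient flowing back into these feature maps during backward().
    output.register_hook(lambda g: store.__setitem__("grad", g.detach()))

target_layer.register_forward_hook(fwd_hook)

img = torch.randn(1, 3, 224, 224)                    # stand-in for a preprocessed image
scores = model(img)
scores[0, scores.argmax()].backward()                # gradient of the top class score

# Grad-CAM: channel weights = spatially averaged gradients,
# heatmap = ReLU of the weighted sum of activation maps.
weights = store["grad"].mean(dim=(2, 3), keepdim=True)        # (1, C, 1, 1)
cam = torch.relu((weights * store["act"]).sum(dim=1))         # (1, H, W)
cam = cam / (cam.max() + 1e-8)                        # normalize to [0, 1] for display
print(cam.shape)                                      # (1, 7, 7) for this setup
```

Upsample the heatmap to the input resolution and overlay it on the image, and you at least get a picture of *where* the model looked, even if not *why*.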
I wouldn't say it is statistics; statistics is much more precise in its calculation of uncertainties. AI depends more on calculus, or automatic differentiation, which is also cool but is not statistics.
Neural nets, including LLMs, have almost nothing to do with statistics. There are many different methods in machine learning; many of them are applied statistics, but neural nets are not. If you have ideas about how statistics is at the bottom of LLMs, you are probably thinking of some other ML technique, one that has nothing to do with LLMs.
I think saying machine learning is just statistics is a bit misleading. There’s not much statistics going on in deep learning. It’s mostly just “eh, this seems to work I dunno let’s keep doing it and see what happens”.